WELL-FORMED PETRI NET BASED PATTERNS FOR MODELING LOGIC CONTROLLERS FOR AUTONOMOUS TRAINS

Yuchen Xie, Manel Khlif-Bouassida, Armand Toguyéni
Centrale Lille, CRIStAL, UMR 9189, 59650 Villeneuve d'Ascq, France
Univ. Lille Nord de France, F-59650, Lille, France
{yuchen.xie, manel.khlif-bouassida, armand.toguyeni}@centralelille.fr

ABSTRACT

Automation and the adoption of ERTMS (the European Rail Traffic Management System) are two solutions for railway systems to satisfy the need to increase the capacity of railway lines and to enhance their safety. In this context, this study contributes to a methodology for developing the discrete event controllers of the autonomous train control systems needed for railway automation. This article emphasizes the modeling stage using Colored Petri Nets (CPN) and their extensions. While modeling, both the railway requirements and the necessity to formally verify some crucial properties (e.g., a collision-free system) have been taken into account. By proposing several modeling patterns based on Well-Formed Petri Nets (WFN), we solve several technical problems of modeling railway train control systems and similar complex systems, making it possible to construct reducible and analyzable models before they are formally verified.

Keywords: Railway System, Autonomous Train Control, DES Modeling, Colored Petri Nets

1. INTRODUCTION

The development of logic controllers for autonomous trains has become a priority in railway control, in order to increase safety and to make railway systems more competitive with the other means of transportation. This study is part of a methodology for the systematic and rigorous development of the logic controllers necessary for railway automation. The methodology should make it possible to formally model the control functions and to verify essential properties (e.g., safety) of an automated system.
The ultimate goal of this methodology is to generate, by model transformation, the code of these functions, which could be implemented on the computers of the ground infrastructure and on the controllers embedded in the trains. This paper concerns the modeling stage. One key difficulty is the compromise between the modeling power of the selected formalism and the possibility of performing formal verification. In this study, we choose Well-Formed Petri Nets (WFN) as the modeling formalism to benefit from all their advantages, and we propose several WFN modeling patterns and techniques suited to the complexity of railway systems.

The paper is structured as follows. In the second section, we review the state of the art of railway system modeling using Colored Petri Nets (CPN). In the third section, we discuss the tradeoff between the modeling power and the verification capacity of the CPN and WFN approaches, in order to justify why we choose WFN as our modeling tool for autonomous train control systems. Section 4 presents the railway system structure and the main functions used in this paper; several constraints and assumptions made in the modeling stage are also given in section 4. In section 5, we give a brief introduction to the formalism and characteristics of WFN. In section 6, we propose modeling patterns as solutions to some major modeling problems of a railway control system. In section 7, we illustrate the use of these patterns with a case study of railway system automation. Finally, we present the conclusions and perspectives of this work in section 8.

2. STATE OF THE ART

For about half a century, Petri Nets (PN) have been used to model concurrent and complex systems. Among their numerous extensions, CPN is the most widely used formalism incorporating data, hierarchy, and time (van der Aalst et al. 2013). This section summarizes research works using CPN to model and analyze railway control systems.
In (Janczura 1999), a whole process of modeling and analyzing a railway network using CPN is proposed. The network considers two types of trains (i.e., express and normal) which move in the same direction. A safety property (each block in the railway line is occupied by exactly one train or is empty) and four operational properties are analyzed. However, this work only considers a quite simple model and does not respect the ERTMS/ETCS standard. In (Jansen et al. 1998; Meyer zu Hörste 1999), a hierarchical CPN framework is proposed to model ERTMS/ETCS (mainly level 2). Several generic modeling paradigms and techniques (e.g., distributed modeling, communication between separate CPNs, synchronization, etc.) are introduced to build the CPN models, and formal methods can be used to analyze them. This study mainly focuses on the hierarchical and structural problems of modeling the ETCS specifications rather than on the implementation of concrete functional models. A summary of Petri Net models of railway stations is given in (Żarnay 2004). The models are divided into four levels (i.e., technical equipment level, movement level, train processing level, and decision-making level) according to their objectives and abstraction levels. CPN Tools is a tool for editing, simulating, and analyzing CPNs, first introduced in (Ratzer et al. 2003; Jensen et al. 2007). Thanks to its wide adoption, many CPN models benefiting from the features of this powerful tool have been proposed. In (van der Aalst et al. 2013), several CPN design patterns and strategies are proposed using CPN Tools, showing solutions to several typical design problems when modeling complex processes. In (Vanit-Anunchai 2009; Vanit-Anunchai 2010; Vanit-Anunchai 2014), railway interlocking table models are proposed using CPN Tools.
As a main advantage of these models, the general CPN structure proposed can be reused regardless of the varying structures and sizes of railway systems. However, these models store a large amount of data (e.g., geographic information) in colored tokens, and the behavior of the models is driven by the data rather than by the structure of the model. In this case, although some tests can be run with the simulation function of CPN Tools, formal verification is rather difficult to perform on this kind of CPN model. In our previous work (Xie et al. 2016), we proposed discrete controller models based on High-level Petri Nets supported by CPN Tools. Our research concentrates on the whole railway system (i.e., both the train controller and the trackside part). In that work, the High-level Petri Nets are unconstrained CPNs, providing a powerful ability to model the systems. CPN Tools also offers extensions based on the ML language to enhance its modeling power, e.g., the possibility of defining a "list" datatype. However, one pays for this modeling capacity: there are hardly any tools or efficient methods to analyze and verify these unconstrained CPNs with extensions.

3. COMPARISON OF CPN AND WFN APPROACHES OF MODELING TRAIN CONTROL SYSTEMS

Figure 1 compares the CPN and WFN approaches of modeling the railway system and the analysis methods applicable to the resulting models. As shown in the left part of Figure 1, the most direct way of analyzing a CPN is to generate its reachability graph, which makes it possible to check the required properties. Although possible in theory, this method is generally limited by combinatorial explosion. One way to combat this combinatorial explosion is to reduce the initial CPN model before constructing the reachability graph. However, there is very little work proposing reduction rules applicable to CPNs, and each of them has its own applicability constraints.
For example, the reduction rules proposed in (Esparza & Hoffmann 2016) are only applicable to free-choice workflow nets, with the objective of preserving the soundness property. In more general cases, the only available solution is to first unfold the CPN before applying existing reduction rules for ordinary PNs (Berthelot & Lri-Ilie 1986; Murata 1989). However, for a complex system such as a railway system, one is then confronted with a combinatorial explosion during the unfolding operation. To solve the problems above, some constrained High-level Petri Nets have been proposed, among which the Well-Formed Petri Nets (Chiola et al. 1991) are of interest to us. It has been proved that WFN have the same expressive power as CPN (Diaz 2013): every CPN can be transformed into a WFN with the same basic structure, the same color domains (possibly partitioned into static subclasses), equivalent arc labeling, and the possible addition of transition predicates (Chiola et al. 1991). This implies that WFNs have a modeling power at least equal to that of general (unconstrained) CPNs as defined in (Jensen 1981).

![Figure 1 Different Ways of Analysis of Colored Nets](image-url)

The right part of Figure 1 describes the analysis methods applicable to WFN models. To avoid combinatorial explosion, several reduction rules can first be applied, e.g., those in (Haddad 1991). Some reduction rules may also require the calculation of colored invariants (Couvreur & Martínez 1991). The models can then be analyzed with the help of these invariants and/or by building a symbolic reachability graph (Chiola et al. 1991), which greatly reduces the size of the reachability graph.
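To give an intuition of the gain offered by a symbolic rather than a concrete reachability graph, the following toy Python computation (an illustration only, not the symbolic reachability algorithm of Chiola et al.) counts markings for M trains placed on N blocks, at most one train per block. If the train identities are fully interchangeable, all markings that differ only by a permutation of the train labels collapse into a single symbolic marking:

```python
from itertools import combinations, permutations
from math import factorial

def concrete_markings(n_blocks, n_trains):
    # Trains carry distinct identities: a marking is an injective
    # assignment of a block to each labelled train.
    return sum(1 for _ in permutations(range(1, n_blocks + 1), n_trains))

def symbolic_markings(n_blocks, n_trains):
    # With fully interchangeable trains, only the set of occupied
    # blocks matters: one equivalence class per choice of blocks.
    return sum(1 for _ in combinations(range(1, n_blocks + 1), n_trains))

# 3 trains on a 10-block line: the symmetry collapses the marking
# count by a factor of 3! = 6 (720 concrete vs 120 symbolic).
assert concrete_markings(10, 3) == factorial(3) * symbolic_markings(10, 3)
```

In the railway models of this paper the trains are not fully symmetric (their order along the line is preserved), so the actual aggregation is weaker; the example only illustrates why symmetry-based aggregation shrinks the graph.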
As our final objective is to propose appropriate Petri Net patterns whose properties can be checked before the models are implemented, this paper proposes the use of WFN instead of CPN for modeling autonomous railway control systems, in order to benefit from the advantages of analyzing a WFN model.

4. RAILWAY SYSTEM BASICS AND CONTEXT

This study concerns the management of multiple trains in a railway line based on Movement Authorities (MA, the permission for a train to move to a specific location under supervision of its speed) generated by the trackside infrastructure. We first present the background of the railway models.

4.1. Railway Lines and Blocks

Railway lines connect railway stations. Normally, a single railway line has a fixed direction and all the trains in this railway line run in this direction. A railway line is divided into numerous blocks. Blocks are used to avoid train collisions, ensuring the safe and efficient operation of railway systems.

![Figure 2 Railway Lines and Blocks](image)

Figure 2 shows an example of lines decomposed into blocks. The railway line from Station 1 to Station 2 and the railway line from Station 2 to Station 1 are each divided into several blocks (for simplicity, Figure 2 represents each railway line with 5 blocks). For safety reasons, each block must contain no more than one train. Thus, only after the train occupying the current block (the block is said to be "occupied") has left it (the block is said to be "clear") is another train authorized to enter this block.

4.2. ETCS-2 Based Train Management in Railway Lines

European railway systems are nowadays equipped with ERTMS and the European Train Control System (ETCS). ETCS is specified in four different levels (levels 0-3).
Currently, ERTMS/ETCS level 2 (ETCS-2) is in use on several high-speed railway lines in Europe. It uses Eurobalises to help trains locate themselves, and continuous radio transmission over GSM-R (Global System for Mobile Communications - Railway) for data exchange between the trackside infrastructure and the onboard equipment. Our study is based on the infrastructure of ETCS-2. Figure 3 illustrates the main train management functions offered by ETCS-2.

![Figure 3 ETCS-2 Based Train Management](image)

**Trackside:** The Radio Block Center (RBC) provides trains with Movement Authorities (MA), taking into account the positions of the corresponding trains, the signals and switch states, as well as the physical line configuration (slopes, curves, etc.);

**Onboard:** Each train regularly sends its position to the RBC and receives MAs from the RBC. The onboard equipment calculates a speed profile considering the End of Authority (EOA), which is the last block in the MA, and the train characteristics (mass, length, etc.).

4.3. System Simplifications and Assumptions

This study makes a set of simplifying assumptions to manage multiple trains in a railway line. The aim of these simplifications is to reduce the complexity of the models so that they can be presented in a limited number of pages. The principal assumptions are:

1. Our model does not take into account the length of a train. We only care whether a train occupies a block.
2. The MA message is reduced to the list of blocks that are reserved and assigned to a train. The exact speed limit and the other parameters of an MA are not considered here. However, we assume that a train can always stop at its EOA.
3. A single RBC manages all the trains in the same railway line between two stations. This means that the RBC handover function is beyond the scope of this study. The control of railway nodes/stations is not considered.
4.
In this paper, a "railway line" has a fixed operating direction and is linked to only 2 stations: the departure and the arrival. Overtaking is not considered. This assumption simplifies the operations we propose later in the paper by always maintaining the order in which the trains enter the railway line.
5. Each time a train enters a new block, we assume that it receives its current position from a Eurobalise and then sends a position report to the corresponding RBC, instead of considering the specific report format of the ERTMS/ETCS-2 standard.
6. Once an RBC receives a position report from a train, it updates the train's location in its database. The RBC also treats the position report as an MA request. Consequently, it generates an MA response to the train according to the following principle: if the train is the first one in the railway line (there is no preceding train up to the end block), its EOA is set to the end block of this railway line; otherwise, the EOA is set to the block next to the one occupied by the preceding train.

5. WELL-FORMED PETRI NETS

5.1. Well-Formed Petri Net and its formalism

Well-Formed Petri Nets (WFN) are Colored Petri Nets (Jensen 1981) that satisfy a set of syntactical constraints. In this paper, we only introduce the main features of WFN. A complete formal definition can be found in (Chiola et al. 1991).

5.1.1. WFN Color Classes and Color Domains

A color class can be ordered or unordered, and can be divided into static subclasses. A color class defines the common nature of the tokens of this type. When a color class comprises several static subclasses, the colors within each static subclass share similar potential behaviors (batch operation, symmetry, etc.). A color domain is a Cartesian product of color classes. A neutral color, denoted $\varepsilon$, allows defining uncolored places or transitions.
Each place and each transition of a WFN is associated with a color class or with a color domain.

5.1.2. WFN Color Functions

Color functions are formal sums of guarded functions built by standard operations (linear combination, composition, etc.) on basic functions. There are three basic functions: the identity function is a projection which selects an item of a tuple and is always denoted by a typed variable (e.g., X, Y) in applications; the diffusion function is a constant function which returns the bag composed of all the colors of a class or a subclass and is denoted $\text{All}(C)$, where C is the corresponding (sub)class; the successor function applies to an ordered class and returns the color following the given color, and is denoted $\oplus$.

5.1.3. Guards

A guard is a boolean expression built from atomic predicates; it can restrict the firing of a transition or condition the application of a color function. An atomic predicate can identify two variables ($[X = Y]$), compare a variable with the successor of another ($[X = \oplus Y]$), or restrict a variable to a static subclass ($[X \in D]$).

The constraints above give WFN a good structure and simplify their analysis. The formalism of basic functions emphasizes the system symmetries. Nevertheless, asymmetric behaviors of objects in a given class are also supported, through subclass divisions or through guards on transitions or on color functions, which strengthens the modeling power of WFN.

5.2. WFN Modeling Tools

CPN-AMI (Kordon & Paviot-Adet 1999) allows users to build and analyze models of AMI-Nets, which are WFNs with a specific syntax. GreatSPN (Chiola et al. 1995) is a user-friendly framework for the modeling, validation, and performance evaluation of Generalized Stochastic Petri Nets (GSPN) and their colored extension, Stochastic Well-Formed Nets (SWN). This tool also supports timed Petri Net based modeling and implements several efficient analysis algorithms to facilitate complex applications.
Besides these tools with native WFN support, one can also choose among the many tools for Colored Petri Nets and High-level Petri Nets and build WFN models by respecting the WFN definition.

6. WFN MODELING PATTERNS FOR TRAIN CONTROL SYSTEM

In this section, we propose three modeling patterns that are useful to build WFN models of railway control systems:

1. An equivalent structure in WFN for the arcs using IF-THEN-ELSE expressions defined in CPN Tools (Jensen et al. 2007);
2. The definition and implementation of a predecessor function;
3. A WFN queue structure with its corresponding management operations (adding an item, removing an item, modifying an item, query, etc.).

We define these modeling patterns with respect to a practical railway train control model; however, they can also be applied to other complex system models.

6.1. IF-THEN-ELSE Arc in WFN

IF-THEN-ELSE is a common alternative structure that facilitates the modeling of some system logic functions. An arc with an IF-THEN-ELSE expression is supported by some tools, e.g., CPN Tools. Unfortunately, it is not supported in WFN. In this section, we propose two solutions to model IF-THEN-ELSE arcs, based on guarded functions and on guarded transitions respectively.

6.1.1. IF-THEN-ELSE Arc by Guarded Functions

Consider a transition t and a place p. Let $C(t) = C$ and $C(p) = C_1 \times C_2 \times \cdots \times C_k$. We define two unguarded colored functions $F_t$ and $F_{t'}$, which are sums of tuples of basic functions.
$$F_t = \sum_{i=1}^{m} \langle f_1^i, f_2^i, \ldots, f_k^i \rangle, \qquad F_{t'} = \sum_{j=1}^{n} \langle h_1^j, h_2^j, \ldots, h_k^j \rangle.$$

We define a general IF-THEN-ELSE expression labeling an arc connecting a transition t and a place p:

$$W^*(p, t) = \textit{if } g \textit{ then } F_t \textit{ else } F_{t'}, \quad \text{where } * \in \{+, -\}.$$

As such an expression $W^*(p, t)$ is not supported by the WFN syntax, we define the equivalent function $W_E^*(p, t)$:

$$W_E^*(p, t) = [g]F_t + [\neg g]F_{t'} = \sum_{i=1}^{m} [g]\langle f_1^i, f_2^i, \ldots, f_k^i \rangle + \sum_{j=1}^{n} [\neg g]\langle h_1^j, h_2^j, \ldots, h_k^j \rangle, \quad \text{where } * \in \{+, -\}.$$

Obviously, $W_E^*(p, t)$ respects the definition of WFN standard functions (Chiola et al. 1991) and has the same semantics as the IF-THEN-ELSE expression $W^*(p, t)$.

6.1.2. IF-THEN-ELSE Arc by Guarded Transitions

Some Petri Net tools do not support the concept of guarded function. In this case, we can use two guarded transitions to model the "then" and "else" clauses of the IF-THEN-ELSE arc. Figure 4 shows an example of an IF-THEN-ELSE arc and its context. G is the guard of transition t (possibly G = TRUE) and g is the condition in the IF-THEN-ELSE expression. The other notations are as defined in section 6.1.1.

![Figure 4 IF-THEN-ELSE Arc by Guarded Transitions](image)

We propose an equivalent structure distinguishing three cases according to the relationship between G and g.

**Case 1:** G is stronger than g ($G \subset g$)

In this case, $\{ c \in C \mid G(c) = TRUE \} \subset \{ c \in C \mid g(c) = TRUE \}$. By the firing rules, transition t is never enabled for a color $c$ such that $g(c) = FALSE$. Consequently, only the function $F_t$ of the THEN-clause has to be considered, and the incidence function $W^*(p, t) = \textit{if } g \textit{ then } F_t \textit{ else } F_{t'}$ reduces to $W_E^*(p, t) = F_t$, where $* \in \{+, -\}$.
**Case 2:** G and g are disjoint ($G \cap g = \emptyset$)

In this case, $\{ c \in C \mid G(c) = TRUE \} \cap \{ c \in C \mid g(c) = TRUE \} = \emptyset$. In opposition to Case 1, transition t is never enabled for a color $c$ such that $g(c) = TRUE$, so only the function $F_{t'}$ of the ELSE-clause has to be considered. The incidence function $W^*(p, t) = \textit{if } g \textit{ then } F_t \textit{ else } F_{t'}$ thus reduces to $W_E^*(p, t) = F_{t'}$, where $* \in \{+, -\}$.

**Case 3:** general case, belonging neither to Case 1 nor to Case 2

In this case, we partition the color set satisfying the guard G, i.e., $C_0 = \{ c \in C \mid G(c) = TRUE \}$, into two subsets:

- $C_0^{g} = \{ c \in C_0 \mid g(c) = TRUE \}$
- $C_0^{\neg g} = \{ c \in C_0 \mid g(c) = FALSE \}$

Transition t is then modeled by two transitions $t_1$ and $t_2$, yielding a net $N' = (P', T', C', W'^{+}, W'^{-}, \Phi', M_0')$ where:

- $P' = P$; $C' = C$; $M_0' = M_0$;
- $T' = (T \setminus \{t\}) \cup \{t_1, t_2\}$, with $C(t_1) = C(t_2) = C(t)$;
- for the arc carrying $W^*(p, t) = \textit{if } g \textit{ then } F_t \textit{ else } F_{t'}$ with $* \in \{+, -\}$: $W'^*(p, t_1) = F_t$ and $W'^*(p, t_2) = F_{t'}$;
- for every other place $p' \neq p$: $W'^*(p', t_1) = W'^*(p', t_2) = W^*(p', t)$;
- for every transition $u \in T \setminus \{t\}$ and every place $p' \in P'$: $W'^*(p', u) = W^*(p', u)$;
- $\Phi'(t_1) = G \wedge g$ and $\Phi'(t_2) = G \wedge \neg g$; the guards of the other transitions are unchanged.

An example will be given in section 6.3 when introducing Operation 3 of the train queue structure.

6.2. Predecessor Function and Its WFN Realization

In WFN, the successor function $\oplus X_i$ is defined as an elementary function. However, in some modeling cases a predecessor function, which is not defined in WFN, is also necessary.
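The intended semantics of the successor and of the missing predecessor operation on an ordered color class can be mocked in plain Python (a sketch only; the list-based encoding and the function names are ours, not part of the WFN formalism). Both operations wrap around, so the predecessor of the first color is the last one:

```python
def successor(c, cls):
    # Cyclic successor in an ordered colour class, given as a Python list.
    i = cls.index(c)
    return cls[(i + 1) % len(cls)]

def predecessor(c, cls):
    # Cyclic predecessor: the predecessor of the first item is the last.
    i = cls.index(c)
    return cls[(i - 1) % len(cls)]

POS = list(range(0, 7))          # a hypothetical ordered class 0..N+1 with N = 5
assert successor(6, POS) == 0    # wrap-around at the end
assert predecessor(0, POS) == 6  # wrap-around at the beginning
assert predecessor(successor(3, POS), POS) == 3
```

Since the predecessor is just the successor applied |C| − 1 times, it adds no expressive power; the point of the pattern below is to eliminate it syntactically so that the net remains a WFN.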
This study proposes a method to use predecessor functions, which we denote $\ominus X_i$, and gives its application constraints. Under these constraints, one can always find an equivalent WFN structure which behaves as a predecessor function. Let $\ominus X_i(c)$ be the application mapping $c \in C = C_1 \times \cdots \times C_k$ to the predecessor of $c_i$ in $C_i$, where $C_i$ is an ordered class. It is worth noting that, symmetrically to the successor function, the predecessor of the first item of $C_i$ is its last item. To benefit from the features of WFN, when analyzing a colored net using the predecessor function defined above, we can transform it into an equivalent WFN. Figure 5 shows an example of a colored net using the predecessor function (Figure 5 (a)) and its equivalent WFN (Figure 5 (b)). In the example, $C(t) = C \times C \times C$; X and Y are two identity functions with $X = X_1$ and $Y = X_2$; $(X{-}1)$ and $(X{+}1)$ denote the predecessor and successor functions, i.e., $(X{-}1) = \ominus X_1$ and $(X{+}1) = \oplus X_1$. Figure 5 (a) uses the predecessor function $(X{-}1)$ on the output arc of transition t. To replace this structure by a WFN one, we apply the following two steps:

![Figure 5 Predecessor and the Equivalent WFN](image)

Step 1: search for all the instances of the identity function X in the "context of transition t" and replace them with the corresponding successor function $(X{+}1)$. The three atomic predicates defined in WFN are replaced according to the following rules:

1. $[X = Y]$ is replaced by $[Y = \oplus X]$;
2. $[X = \oplus Y]$ is replaced by $[\oplus X = \oplus Y]$, which means $[X = Y]$;
3. $[X \in D]$ is replaced by $[\oplus X \in D]$, which is not a WFN guard. In this case, let $D = \{x_m, \ldots, x_n\}$ be a subclass; we define a new subclass $D' = \{x_{m-1}, \ldots, x_{n-1}\}$, where $x_{m-1}$ and $x_{n-1}$ are the predecessors of $x_m$ and $x_n$, respectively.
Then $[\oplus X \in D]$ is transformed into $[X \in D']$. In the example, the two instances of X, found in the guard of transition t and on the output arc of transition t in Figure 5 (a), are replaced by $(X{+}1)$ in Figure 5 (b).

Step 2: replace the predecessor function $(X{-}1)$ (in Figure 5 (a)) with the identity function X (in Figure 5 (b)). In the example, the occurrence on the output arc of transition t is replaced by X.

Application constraints: in order for the replacement above to be possible, if the predecessor function $\ominus X_i$ of a color instance $X_i$ is used, the corresponding successor function $\oplus X_i$ must not appear in the "context of the same transition t", which includes the arcs connected to transition t and the guard of transition t. In other words, we cannot use the predecessor and the successor function of the same color instance $X_i$ simultaneously in the "context of a transition".

6.3. Queue Structure in WFN

When modeling railway control systems, the RBC model in particular needs a centralized storage of the train queue, i.e., the information about all the trains in the railway line it manages. This information includes at least the trains' identifiers, their positions, and their order. Using WFN, a token of a product domain (e.g., $<\text{TrainID} \times \text{Position}>$) can represent the identifier and position of each train. However, it is difficult to establish an order relation among these tokens. In some high-level Petri net tools such as CPN Tools (Ratzer et al. 2003; Jensen et al. 2007), it is possible to use a "list" type, like those defined in most programming languages, to realize this queue of trains. However, a "list" type color class forfeits the convenience of analyzing a WFN (a colored net using a "list" type is obviously not a WFN). This section defines a queue structure in WFN.
It establishes an order relation among the elements and supports several operations, e.g., insertion, removal, query, and update. In addition, a colored net using this queue structure remains a WFN, keeping all the advantages for its analysis. This queue structure was designed to meet the requirements of modeling a practical train control system. Its application is illustrated by the implementation of the Movement Authority (MA) function as part of the RBC model in section 7. The implementation of the queue structure (e.g., Operation 3) uses the modeling patterns proposed in sections 6.1 and 6.2. The basic declarations used in the queue structure are the following:

CLASS
  POS = <0> ∪ <1, 2, …, N> ∪ <N+1>;
  TID = {T(0), T(1), T(2), …, T(M)};
DOMAIN
  TRAINITEM = <POS, TID, POS>.

The color class POS is ordered and divided into three static subclasses. Each position in <1, 2, …, N> represents a particular block of the railway line (which has N blocks; N is thus a parameter bound to a specific value for each real line). The two other subclasses, <0> and <N+1>, serve special purposes explained in the following paragraphs. For convenience, we define the constant HEAD = N+1 for the rest of this paper. The color class TID enumerates the identifiers of the trains; T(0) is reserved as a special value and does not represent a real train. TID can be an unordered class. The color domain TRAINITEM is a Cartesian product of three classes and has the following practical meaning, as shown in Figure 6.
![Figure 6 Structure of TrainItem](image)

Each token of color domain TRAINITEM (except the token TrainQueueRear) represents a particular train (TrainID) with its current position (Current Block). In addition, each train is linked to its preceding train through the block where this predecessor is located (PrevTrain's block). The following two special values help to build the queue structure:

**First Train:** The first train in the railway line does not have a preceding train in the current state of the line. We give the special value "HEAD: POS" to its third field. As defined above, HEAD = N+1, and the block "N+1: POS" does not exist in the railway infrastructure; this value is used to mark the first train of the queue.

**Train Queue Rear:** This token does not represent a real train but provides a link to the rear train's position. Its first and second fields are always "0: POS" and "T(0): TID", which identify this rear item; the block "0: POS" is not a real block, nor is T(0) a real train. Its third field indicates the position of the rear train of the queue.

Proc. of the Int. Conf. on Integrated Modeling and Analysis in Applied Control and Automation, 2017

The queue structure is then built with two places: place TrainQueue: TRAINITEM and place FreeBlock: POS. Place TrainQueue holds tokens of color domain TRAINITEM. Even in the special case where there is no train in the railway line, place TrainQueue is not empty: it still contains the token <0: POS, T(0): TID, HEAD: POS>. Tokens in place FreeBlock represent the free blocks, i.e., those not occupied by a train. Each time a train moves, it takes the token of its new position from place FreeBlock and releases the token of its previous position.
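The token encoding just described can be mimicked with plain Python tuples to check its consistency (a hypothetical 3-train situation; the block numbers and train identifiers below are illustrative, not taken from the figures):

```python
N = 5
HEAD = N + 1                      # pseudo-block marking the first train

# TrainQueue tokens: (current_block, train_id, prev_train_block).
# 'T0' plays the role of T(0); the rear marker is (0, 'T0', rear_pos),
# and an empty line is represented by {(0, 'T0', HEAD)}.
train_queue = {(4, 'T1', HEAD),   # first train, in block 4
               (2, 'T2', 4),      # its follower, in block 2
               (1, 'T3', 2),      # last train, in block 1
               (0, 'T0', 1)}      # rear marker: the rear train is in block 1

def order_front_to_back(queue):
    # Recover the train order by chasing the PrevTrain links,
    # starting from the train whose third field is HEAD.
    tokens = {t for t in queue if t[1] != 'T0'}
    order, prev = [], HEAD
    while tokens:
        tok = next(t for t in tokens if t[2] == prev)
        order.append(tok[1])
        prev = tok[0]
        tokens.remove(tok)
    return order

# Tokens of place FreeBlock: the blocks of 1..N not currently occupied.
free_blocks = set(range(1, N + 1)) - {p for (p, _, _) in train_queue if p != 0}

assert order_front_to_back(train_queue) == ['T1', 'T2', 'T3']
assert free_blocks == {3, 5}
```

The management operations defined in the following paragraphs amount to set-level updates of these two collections; in the WFN, the same bookkeeping is performed by transitions consuming and producing the corresponding tokens.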
To illustrate how to model a practical queue of trains in WFN, consider a general case with 3 trains in the railway line, as shown in Figure 7. It is now necessary to define some basic operations to manage the queue structure.

**Operation 1: Insert Operation**

A new train is always inserted at the rear of the queue, normally in Block 1. The objective of this operation is thus to insert a new token with <TrainID = tr: TID, CurrentBlock = 1: POS> into the queue and to modify the tokens concerned. This operation is explained in Figure 8, where tr is the identifier of the train to insert and p_last is the position of the last train before the insertion.

**Operation 2: Removal Operation**

When a train arrives at the end block (Block N) of the railway line and then leaves it, its token <N: POS, tr: TID, HEAD: POS> must be removed from place TrainQueue and the token of block <N: POS> must be released to place FreeBlock. Figure 9 shows that the two tokens representing the first train, <N, tr, HEAD>, and its successor train, <p1, t1, N>, are involved. The token of the leaving train is removed and the "PrevTrain" field of train "t1" is updated to "HEAD: POS", as it becomes the first train of the railway line.

**Operation 3: Request of Movement Authority (MA) for a train**

To avoid train collisions, each train must request an MA from the RBC. From the MA it receives, the train knows up to which block it can advance safely without any collision risk. In practice, it can advance up to the block immediately preceding the current position of its predecessor train. This authorized position is called the End of Authority (EOA). Normally, a train requests a new MA regularly before reaching its EOA, so as not to be stopped during its advancement.
- When the considered train is the first train on the railway line, it can advance to the last block of the line, so its EOA is position N;
- When the considered train is not the first train on the railway line and its predecessor train is currently in block "p_pre", its EOA is block "p_pre - 1".

Therefore, the EOA position for a particular train with "tid = tr" can be expressed as "IF (p_pre = HEAD) THEN \( N \) ELSE (p_pre - 1)", which requires an "IF-THEN-ELSE" arc expression and a predecessor function. Such an arc expression is not supported by WFN. However, after applying the equivalent structures proposed in sections 6.1.2 and 6.2 (and using the definition HEAD = N+1), the WFN structure of this operation is given in Figure 10.

Figure 10 Request of Movement Authority

**Operation 4: Update of Train Position** The update of a train's position does not affect the order relation of the trains in the queue. Transition Update1 replaces the train's position with the new value, while transition Update2 deals with its successor train. Figure 11 illustrates the WFN implementation of this operation. When the position value is updated, the token of the previous position "p0" is released to place FreeBlocks and the token of the new position is taken. We use two guarded functions with the guard [p<>p0] to avoid manipulating place FreeBlocks when the new value equals the old one. The update operation is triggered whenever the RBC receives a position report <tr: TID, p: POS> from a train. After updating its database, the RBC sends back an acknowledgement to the train.

**7. CASE STUDY**

Faced with the practical problem of designing railway system controllers, we have built several control models. Three models are explained in this section.
They offer the functions of managing multiple trains on a railway line while respecting the WFN definitions.

**7.1. System Structural Model**

Figure 12 shows the model of the system architecture. The models are built in a modular way; the rectangles with double-line borders are modules. This example model considers two train modules, whose details are explained in section 7.2. An RBC module is built for the railway line management; its details are given in section 7.3. Places Train2RBC and RBC2Train represent the wireless interfaces between the trains and the RBC module; the tokens they contain are messages between the different modules. Similarly, place Balise2T models the Eurobalise interfaces. In our study, the Eurobalises are used to inform the trains of their locations. As all train modules are identical (except for their TIDs as initial markings), it is possible to add more train modules to the architectural model, as long as they are connected to the suitable interfaces and assigned a TrainID.

**7.2. Train Model**

Figure 13 gives the train model, integrated with the functions to enter a railway line, to advance with respect to its MA, and finally to pass through the railway line. Suppose that initially the train has just arrived at the first block (place Balise: 1, place Position empty, place EOA: 1, place Registered: false). From this initial state, the following functions describe the behavior of the train sequentially.

**Register function** registers the train itself with the RBC. To fire transition Register, the train must be located in block 1 (token "1: POS" in place Balise) and the state Registered must be false. When transition Register fires, it sends a message of type "INSERT" to the RBC, puts a token "1: POS" into place Position, and changes the state Registered to true.

**Advance function** simulates the advancement of a train already in the blocks.
The train can advance as long as it has not reached its EOA position. Each time the train passes a block, its new position is received via place Balise so that transition Advance can fire. The token in place Position is updated, and a position report is sent to inform the RBC of the new position. Transition Receive can fire when there is an MA message generated by the RBC. Place RBC2T is shared by all the trains, so only the message for this train (tid) is received. After receiving the message, the new EOA value is memorized in place EOA.

**Disconnect function** informs the RBC that the train has passed through the railway line. After passing the last block (block N), transition Disconnect can fire. The token "N: POS" is then removed from place Position, the train sends a message of type "REMOVE" to the RBC, and its state Registered changes to false.

**7.3. RBC Model**

Figure 14 represents the RBC model. The four main functions (i.e., InsertTrain, RemoveTrain, QueryEOA and PositionUpdate) correspond to the four operations on the train queue structure explained in Section 6.3. What remains to add in this model is the mechanism that triggers the different functions. The functions InsertTrain, RemoveTrain and PositionUpdate fire after a message is received from a train; a field "Type" (i.e., INSERT, REMOVE or UPDATE) in the message selects the corresponding function. For convenience, the RBC model treats a position report as an MA request: each time it receives a position report, the TrainID is put into place Request in order to generate an MA for that train. The RBC also generates an MA for a train that has just registered (after transition InsertTrain fires).

**8. CONCLUSIONS AND PERSPECTIVES**

In this paper, we have shown that it is possible to use WFNs to model complex systems such as railway systems, by using the modeling patterns and techniques that we propose.
These modeling patterns also make it possible to model some structures and extensions of other types of colored Petri nets, such as the CPNs defined in (Jensen, 81), on top of WFNs (e.g., elementary colored functions). We illustrate our propositions by applying them to the Movement Authority (MA) function modeled in the ETCS-2 context. The prospects for this work are, of course, to continue modeling the other functions of a railway system with a view to its complete automation. We are thinking in particular of the routing function of a train inside a node, which today is implemented in a semi-automatic mode requiring man-machine cooperation. Beyond the modeling stage, this work must be completed by the development of a method allowing the formal verification of our models while controlling their combinatorial explosion. We also want to apply the reduction techniques available for WFNs to our models. We further plan to compute colored invariants directly on the reduced models and, if necessary, to construct the symbolic reachability graph. All these methods may be complemented by a formal model verification technique such as assume-guarantee reasoning (Nguyen Huu 2013), in order to ensure that the global model inherits the properties verified on its component modules.

ACKNOWLEDGMENTS

This study is carried out within the framework of the CompRAIL project of ELSAT2020. The ELSAT2020 research program is co-financed by the European Union with the European Regional Development Fund, the French state and the Hauts-de-France Region Council.

REFERENCES
Certified Connection Tableaux Proofs for HOL Light and TPTP

Cezary Kaliszyk, University of Innsbruck, cezary.kaliszyk@uibk.ac.at
Josef Urban, Radboud University Nijmegen, josef.urban@gmail.com
Jiří Vyskočil, Czech Technical University in Prague, jiri.vyskocil@gmail.com

Abstract

In recent years, the Metis prover based on ordered paramodulation and model elimination has replaced the earlier built-in methods for general-purpose proof automation in HOL 4 and Isabelle/HOL. In the annual CASC competition, the leanCoP system based on connection tableaux has however performed better than Metis. In this paper we show how leanCoP's core algorithm can be implemented inside HOL Light. leanCoP's flagship feature, namely its minimalistic core, results in a very simple proof system. This plays a crucial role in extending the MESON proof reconstruction mechanism to connection tableaux proofs, providing an implementation of leanCoP that certifies its proofs. We discuss the differences between our direct implementation using an explicit Prolog stack and the continuation-passing implementation of Metis. The resulting prover can also be used as a general-purpose TPTP prover. We compare its performance against the resolution-based Metis on TPTP and other interesting datasets.

1. Introduction and Related Work

The leanCoP \cite{19} automated theorem prover (ATP) has an unusually good ratio of performance to implementation size. While its core algorithm fits on some twenty lines of Prolog, starting with CASC-21 \cite{27} it has regularly beaten Otter \cite{17} and Metis \cite{3} in the FOF division of the CASC ATP competitions. In 2014, leanCoP solved 158 FOF problems in CASC-J7 \cite{8}, while Prover9 solved 95 problems.
On the large-theory (chainy) division of the MPTP Challenge benchmark \cite{22}, leanCoP's goal-directed calculus also beats SPASS 2.2 \cite{31}, and further AI-style strengthening, integrating into leanCoP's simple core learning-based guidance trained on such larger ITP corpora, is an interesting possibility \cite{34}. Compact ATP calculi such as leanTAP \cite{2} and MESON \cite{16} have been used for some time in Isabelle \cite{20, 21} and the HOL systems \cite{4} as general first-order automation tactics for discharging goals that are already simple enough. With the arrival of large-theory "hammer" linkups \cite{8, 11, 13, 22} between ITPs, state-of-the-art ATPs such as Vampire \cite{12} and E \cite{24}, and premise selection methods \cite{3}, such tactics also became used as a relatively cheap method for reconstructing the (minimized) proofs found by the stronger ATPs. In particular, Hurd's Metis has been adopted as the main proof reconstruction tool used by Isabelle's Sledgehammer linkup \cite{3}, while Harrison's version of MESON could reconstruct in 1 second about 80% of the minimized proofs found by E in the first experiments with the HOL/Hammer linkup \cite{4}.

Since HOL Light already contains much of the necessary infrastructure for Prolog-style proof search and its reconstruction, integrating leanCoP into HOL Light in a similar way as MESON should not be too costly, while it could lead to an interesting strengthening of HOL Light's first-order automation and proof reconstruction methods. In this paper we describe how this was done, resulting in an OCaml implementation of leanCoP and a general leanCoP first-order tactic in HOL Light. We compare their performance with MESON, Metis and the Prolog version of leanCoP in several scenarios, showing quite significant improvements over MESON and Metis.

2.
leanCoP and Its Calculus

leanCoP is an automated theorem prover for classical first-order logic based on a compact Prolog implementation of the clausal connection (tableaux) calculus \cite{15, 19}, with several simple strategies that significantly reduce the search space on many problems. In contrast to the saturation-based calculi used in most state-of-the-art ATPs (E, Vampire, etc.), connection calculi implement goal-oriented proof search: their main inference step connects a literal on the current path to a new literal with the same predicate symbol but opposite polarity. The formal definition (derived from Otten \cite{18}) of the particular connection calculus used in leanCoP is as follows:

**Definition 1.** [Connection calculus] The axioms and rules of the connection calculus are given in Figure 1. The words of the calculus are tuples "C, M, Path", where the clause C is the open subgoal, M is the set of clauses in disjunctive normal form (DNF) obtained from the axioms and the conjecture, with a new nullary predicate \(\sharp\) added to all positive clauses¹, and the active path Path is a subset of a path through M. In the rules of the calculus C, C' and C'' are clauses, \(\sigma\) is a term substitution, and \(L_1\), \(L_2\) is a connection with \(\sigma(L_1) = \sigma(L_2)\). The rules of the calculus are applied in an analytic (i.e. bottom-up) way. The term substitution \(\sigma\) is applied to the whole derivation.

¹ We suppose that \(\sharp\) is a new predicate that does not occur anywhere in the axioms and the conjecture. Thus by default all positive clauses are used as possible start clauses.

The connection calculus is correct and complete \cite{13} in the following sense: a first-order formula M in clausal form is valid iff there is a connection proof for "\(\varepsilon, M, \varepsilon\)", i.e., a derivation for "\(\varepsilon, M, \varepsilon\)" in the connection calculus in which all leaves are axioms.
The Prolog predicate `prove/5` implements the axiom, the reduction rule and the extension rule of the basic connection calculus in `leanCoP`:

```prolog
prove([Lit|Cla],Path,PathLim,Lem,Set) :-
    (-NegLit=Lit;-Lit=NegLit) ->
    (  member(NegL,Path),                 % reduction rule
       unify_with_occurs_check(NegL,NegLit)
    ;  lit(NegLit,NegL,Cla1,Grnd1),       % extension rule
       unify_with_occurs_check(NegL,NegLit),
       prove(Cla1,[Lit|Path],PathLim,Lem,Set)
    ),
    prove(Cla,Path,PathLim,Lem,Set).
prove([],_,_,_,_).                        % axiom
```

The tuple "C, M, Path" of the connection calculus is represented here as follows:

- C, the open subgoal, is the Prolog list `Cla`;
- the active path Path is the Prolog list `Path`;
- M is written into Prolog's database before the actual proof search starts, in such a way that for every clause \( C \in M \) and for every literal \( L \in C \) the fact `lit(Indexing_L,L,C1,Grnd)` is stored, where \( C1 = C \setminus \{L\} \) and `Grnd` is `g` if `C` is ground, otherwise `Grnd` is `n`. `Indexing_L` is the same as `L` except that all its variables are fresh (no variable occurs more than once in `Indexing_L`); it is used for fast retrieval of the right fact in the database by standard Prolog unification (without occurs check), without affecting the logically relevant literal `L`;
- atoms are represented by Prolog atoms, negation by `"-"`;
- the substitution \( \sigma \) is stored implicitly by Prolog.

`PathLim` is the current limit used for iterative deepening, `Lem` is the list of usable (previously derived) lemmas, and `Set` is a list of options (a further argument carries the resulting proof in the proof-recording variant of the predicate). The predicate succeeds (using iterative deepening) if there is a connection proof for the tuple represented by `Cla`, the DNF representation of the problem stored in Prolog's database using the `lit` predicate, and `Path`, with \( |Path| < PathLim \), where `PathLim` is the maximum size of the active `Path`.
The predicate works as follows. Line 18 implements the axiom. Line 4 computes the complement `NegLit` of the first literal `Lit` in `Cla`, which is used as the principal literal for the next reduction or extension step. The reduction rule is implemented in lines 7, 8 and 17: at lines 7 and 8 it is checked whether the active path `Path` contains a literal `NegL` that unifies with the complement `NegLit` of the principal literal `Lit`. In this case the alternatives after the semicolon are skipped and the proof search for the premise of the reduction rule is invoked in line 17. The extension rule is implemented in lines 10, 11, 14 and 17. Lines 10 and 11 are used to find a clause that contains the complement `NegLit` of the principal literal `Lit`; `Cla1` is the remaining set of literals of the selected clause and becomes the new open subgoal of the left premise. The proof search for the left premise of the extension rule, in which the active path `Path` is extended by the principal literal `Lit`, is invoked in line 14, and if it succeeds, we again continue on line 17.

Compared to standard tableaux or sequent calculi, connection calculi are not confluent³: to achieve completeness, an extensive use of backtracking is required. `leanCoP` uses two simple incomplete strategies (the options `scut` and `cut`) for restricting backtracking, which significantly reduce the search space \cite{18} without affecting the ability to find proofs in most tested cases (see Section 4). Another major problem in connection calculi is the integration of equality. The paramodulation method widely used in saturation-based ATPs is not complete for the goal-oriented approach of connection calculi; therefore, equality in `leanCoP` and similar ATPs is usually handled by adding the axioms of equality (reflexivity, symmetry, transitivity and substitutivity).
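The overall shape of this search (axiom, reduction, extension, bounded by a path limit) can be illustrated with a small ground sketch in Python. This is an illustration only, not leanCoP itself: first-order unification and the `lit/4` indexing are replaced by membership tests on ground literals, where a string such as `"-p"` stands for the negated atom `"p"`.

```python
# Minimal ground (propositional) sketch of connection-calculus proof search
# with iterative deepening on the size of the active path.

def negate(lit):
    return lit[1:] if lit.startswith("-") else "-" + lit

def prove(clause, matrix, path, limit):
    if not clause:                        # axiom: the open subgoal is empty
        return True
    lit, rest = clause[0], clause[1:]
    neg = negate(lit)
    # reduction rule: the complement of lit already lies on the active path
    if neg in path and prove(rest, matrix, path, limit):
        return True
    # extension rule: connect lit to a clause containing its complement
    if len(path) < limit:                 # iterative-deepening bound
        for c in matrix:
            if neg in c:
                c1 = [l for l in c if l != neg]
                if prove(c1, matrix, path | {lit}, limit) and \
                   prove(rest, matrix, path, limit):
                    return True
    return False

def refute(matrix, max_limit=10):
    # start clauses: the all-positive clauses, as in leanCoP's default
    starts = [c for c in matrix if all(not l.startswith("-") for l in c)]
    for limit in range(1, max_limit + 1):
        if any(prove(c, matrix, frozenset(), limit) for c in starts):
            return True
    return False

# the clause set {p or -q, q, -p} is unsatisfiable, so a proof is found
print(refute([["p", "-q"], ["q"], ["-p"]]))   # -> True
```

The `limit` check corresponds to the `PathLim` bound on the size of the active path; raising it one step at a time mirrors the iterative deepening described in Section 2.1.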
To obtain the clausal form, `leanCoP` uses its own clausification introducing definitions (the `def` option), which seems to perform better with `leanCoP`'s core prover than other standard clausifiers (`TPTP2X` with the option `-t clausify:tptp`, `FLOTTER`, or `E`) or the direct transformation into clausal form (the `nodef` option in `leanCoP`) \cite{18}. In the following subsections, we summarize several further methods used by `leanCoP` that improve its performance.

### 2.1 Iterative deepening

Prolog uses a simple incomplete depth-first strategy to explore the search space. This kind of incompleteness would result in a calculus that hardly proves any formula. In order to obtain a complete proof search in the connection calculus, iterative deepening on the proof depth, i.e. the size of the active path, is performed. It is achieved by inserting the following lines into the code:

```prolog
(12)  ( Grnd1=g -> true ; length(Path,K), K<PathLim -> true ;
(13)    \+ pathlim -> assert(pathlim), fail ),
```

and the whole prover runs in the following iterative manner, starting from `PathLim=1`:

```prolog
prove(PathLim,Set) :-
    retract(pathlim) ->
    PathLim1 is PathLim+1, prove(PathLim1,Set).
```

When the extension rule is applied and the new clause is not ground, i.e. it contains at least one variable, it is checked whether the size \( K \) of the active path has reached the current path limit `PathLim` (line 12). In this case the dynamic predicate `pathlim/0` is written into Prolog's database (line 13), indicating the need to increase the path limit if the proof search with the current limit fails. If the proof search fails and the predicate `pathlim` is found in the database, `PathLim` is increased by one and the proof search starts again.

³ A bad choice of connection might end up in a dead end.

### 2.2 Regularity Condition Optimization

**Definition 2.** A connection proof is regular iff no literal occurs more than once in the active path.
Since the active path corresponds to the set of literals in a branch of the connection tableau representation, a connection tableau proof is regular if no literal occurs more than once in the current branch. The regularity condition is integrated into the connection calculus of Figure 1 by imposing the following restriction on the reduction and extension rules:

\( \forall L' \in C \cup \{ L \} : \sigma(L') \notin \sigma(\text{Path}) \)

**Lemma 2.1.** A formula \( M \) in the clausal form described above is valid iff there is a regular connection proof for "\(\varepsilon, M, \varepsilon\)".

Regularity is correct, since it only imposes a restriction on the applicability of the reduction and extension rules. The completeness proof can be found in [13, 19]. The Prolog predicate `\+ Goal` succeeds only if `Goal` cannot be proven. In line 3 the corresponding `Goal` ("member(LitP,Path), LitC==LitP") succeeds if the open subgoal `[Lit|Cla]` contains a literal `LitC` that is syntactically identical (built-in predicate `==/2`) to a literal `LitP` in the active path `Path`. The built-in predicate `member/2` is used to enumerate all elements of a list.

### 2.3 Lemmata optimization

The set of lemmata is represented by the list `Lem`. The lemma rule is implemented by inserting the following line:

```prolog
(5)  member(LitL,Lem), Lit==LitL ;
```

In order to apply the lemma rule, the substitution \( \sigma \) is not modified, i.e. the lemma rule is only applied if the list of lemmata `Lem` contains a literal `LitL` that is syntactically identical to the literal `Lit`.
Furthermore, the literal `Lit` is added to the list `Lem` of lemmata in the (left) premise of the reduction and extension rules by adapting the following line:

```prolog
(15)  prove(Cla,Path,PathLim,[Lit|Lem],Set).
```

In the resulting implementation, the lemma rule is applied before the reduction and extension rules.

### 2.4 Restricted backtracking

In Prolog, the cut (!) is used to cut off alternative solutions while Prolog tries to prove a goal. The Prolog cut is a built-in predicate that succeeds immediately when first encountered as a goal. Any attempt to re-satisfy the cut fails for the parent goal, i.e. the alternative choices made since the parent goal was invoked are discarded. Consequently, restricted backtracking is achieved by inserting a Prolog cut after the lemma, reduction, or extension rule is applied. It is implemented by inserting the following line into the code:

```prolog
(16)  ( member(cut,Set) -> ! ; true ),
```

Restricted backtracking is switched on if the list `Set` contains the option `cut`. The restricted start step is used if the list `Set` includes the option `scut`: in this case only the first clause matching the start literal is used. Restricted backtracking and the restricted start step lead to an incomplete proof search. In order to regain completeness, these strategies can be switched off when the search reaches a certain path limit: if the list `Set` contains the option `comp(Limit)`, where `Limit` is a natural number, the proof search is stopped at this path limit and started again without the incomplete search strategies.

3.
OCaml Implementation

In this section, we first discuss our implementation of leanCoP in OCaml and its integration in HOL Light: the transformation of the higher-order goal to first order and the proof reconstruction. After that, we compare our implementation to Harrison's implementation of MESON.

### 3.1 leanCoP in OCaml

Otten's implementation of leanCoP uses the Prolog search, backtracking, and indexing mechanisms to implement the connection tableaux proof search. This is a variation of the general idea of the "Prolog technology theorem prover" (PTTP) proposed by Stickel [23], in which connection tableaux derives a number of advantages from its similarity to Prolog. In order to implement an equivalent program in a functional programming language, one needs either an explicit stack keeping track of the current proof state (including the trail of variable bindings), or the continuation-passing style. We chose the former: we add explicit todo (stack), subst (trail) and off (offset in the trail) arguments to the main prove function. The stack keeps a list of tuples that are given as arguments to the recursive invocations of prove, whose full OCaml declaration (taking the open subgoal as its last argument) looks as follows:

```ocaml
let rec prove off subst path limit lemmas todo = function
  | [] -> begin ... end
  | ((lit1 :: rest_clause) as clause) -> ...
```

The function performs the proof search to the given depth and, if a proof has not been found, returns unit. Special attention is paid to traversing the tree in the same order as the Prolog version. In particular, when the global option "cut" (restricting backtracking) is off, all the backtracking is performed explicitly, while if "cut" is on, the parts of backtracking avoided in Prolog are also omitted.
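The explicit-stack style described above (pending sibling subgoal lists pushed on a todo stack, with an exception raised once everything is discharged) can be sketched in Python. This is a toy, not the actual OCaml code; the `rules` dictionary and the `expand` function are invented for the example, and the variable trail is omitted.

```python
# Toy sketch of search with an explicit "todo" stack of suspended subgoal
# lists; success escapes the whole search via an exception, as in the
# 'Solved' exception discussed in the text.

class Solved(Exception):
    pass

def prove_stack(goals, todo, expand):
    """Prove all `goals`; `todo` holds the suspended sibling goal lists."""
    if not goals:
        if not todo:
            raise Solved            # everything discharged: exit the search
        return prove_stack(todo[0], todo[1:], expand)
    first, rest = goals[0], goals[1:]
    for subgoals in expand(first):  # each alternative is a backtrack point
        prove_stack(subgoals, [rest] + todo, expand)
    # falling out of the loop backtracks to the caller's next alternative

# tiny example: goal "a" expands to subgoals ["b", "c"], both axioms
rules = {"a": [["b", "c"]], "b": [[]], "c": [[]]}
try:
    prove_stack(["a"], [], lambda g: rules.get(g, []))
    found = False
except Solved:
    found = True
print(found)  # -> True
```

When no rule applies, the loop simply finishes and control returns to the caller's next alternative, which is the explicit analogue of Prolog's backtracking.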
When a proof is found, the exception `Solved` is raised: no further search is performed and the function exits with this exception. The OCaml version and the Prolog version (simplified, and with symbols renamed for clarity of comparison) are displayed together in Figure 2¹. The algorithm proceeds as follows:

1. If nonempty, decompose the current open subgoal into the first literal lit and the rest cla.
2. Check for intersection between the current open subgoal and path.
3. Compute the negation of lit.

¹ Available online at http://cl-informatik.uibk.ac.at/users/cek/cppl

[Figure 2 shows the simplified OCaml and Prolog code side by side; only the Prolog column is reproduced here:]

```prolog
prove([Lit|Cla],Path,PathLim,Lem,Set) :-
    \+ (member(LitC,[Lit|Cla]),
        member(LitP,Path), LitC==LitP),
    (-NegLit=Lit;-Lit=NegLit) ->
    (  member(LitL,Lem), Lit==LitL
    ;  member(NegL,Path),
       unify_with_occurs_check(NegL,NegLit)
    ;  lit(NegLit,NegL,Cla1,Grnd1),
       unify_with_occurs_check(NegL,NegLit),
       ( Grnd1=g -> true ;
         length(Path,K), K<PathLim -> true ;
         \+ pathlim -> assert(pathlim), fail ),
       prove(Cla1,[Lit|Path],PathLim,Lem,Set)
    ),
    ( member(cut,Set) -> ! ; true ),
    prove(Cla,Path,PathLim,[Lit|Lem],Set).
prove([],_,_,_,_).
```

Figure 2. The simplified OCaml and Prolog code side by side. The explicit trail argument and the computation of the resulting proof have been omitted for clarity, and some symbols were renamed to correspond to each other better. White-space and the order of clauses have been modified to exemplify corresponding parts of the two implementations. The function `substeq` checks equality under the current context of variable bindings. Note that the last-but-one line of the Prolog code was merged into each of the three cases in the OCaml code. See the function `prove` in file `leanCoP.ml` on our web page for the actual (non-simplified) OCaml code.

4. Check if `lit` is among the lemmas `lem`; if so, try to prove `cla`. If `cut` is set, no other options are tried.

5.
For each literal on the path, if `neglit` unifies with it, try to prove `cla`. If the unification succeeded and `cut` is set, no other options are tried.

6. For each clause in the matrix, try to find a literal that unifies with `neglit`, and then try to prove both the rest of the newly created subgoal and the rest of the current open subgoal. If the unification and the first proof succeeded and `cut` is set, no other options are tried.

7. When the current open subgoal is empty, the subproof is finished (the axiom rule).

In Otten's implementation, the behaviour of the program with `cut` set is obtained by the use of the Prolog cut (!). Implementing it in OCaml amounts to a different mechanism in each of the three cases. In point 4 of the enumeration above, given that a single lemma has been found, there is no need to check for other lemmas; a simple `List.exists` call is therefore sufficient to emulate this behaviour in OCaml. No backtracking over other possible occurrences of the lemma is needed here, and it is not necessary in this case to add the literal again to the list of lemmas, as is done in the Prolog code (last-but-one line). In point 5, multiple literals on the path may unify under different substitutions. We therefore use a list fold, which changes the value whenever the unification is successful and `cut` is set. Three different cases arise: (i) if `cut` is set, the `Cut` exception is raised with a depth level. The exception is handled only at the appropriate level; this directly corresponds to the semantics of the cut operator in Prolog [26].

What remains to be implemented is efficient equality checking and unification. Since we want to integrate our mechanism in HOL Light, we reuse the first-order logic representation used in the implementation of HOL Light's MESON procedure: the substitutions are implemented as association lists, and applications of substitutions are delayed until an actual equality check or a unification step.
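The delayed-substitution idea can be sketched as follows in Python (an illustration, not HOL Light's code): variables are represented by ints, compound terms by `("f", [args])` tuples, and the substitution is an association (here a dict) that is only consulted via `walk` when terms are compared or unified, never applied eagerly.

```python
# Sketch of unification with delayed substitutions and an occurs check.

def walk(term, subst):
    # follow variable bindings until a non-variable or an unbound variable
    while isinstance(term, int) and term in subst:
        term = subst[term]
    return term

def occurs(v, term, subst):
    term = walk(term, subst)
    if term == v:
        return True
    return not isinstance(term, int) and any(occurs(v, a, subst) for a in term[1])

def unify(t1, t2, subst):
    """Return an extended substitution, or None on clash or occurs failure."""
    t1, t2 = walk(t1, subst), walk(t2, subst)
    if t1 == t2:
        return subst
    if isinstance(t1, int):
        return None if occurs(t1, t2, subst) else {**subst, t1: t2}
    if isinstance(t2, int):
        return None if occurs(t2, t1, subst) else {**subst, t2: t1}
    (f, args1), (g, args2) = t1, t2
    if f != g or len(args1) != len(args2):
        return None
    for a, b in zip(args1, args2):
        subst = unify(a, b, subst)
        if subst is None:
            return None
    return subst

# unify f(X, g(Y)) with f(c, g(d)): X (0) binds to c, Y (1) binds to d
s = unify(("f", [0, ("g", [1])]), ("f", [("c", []), ("g", [("d", [])])]), {})
print(s)  # -> {0: ('c', []), 1: ('d', [])}
```

Because bindings are only followed on demand, a failed unification attempt can be undone simply by discarding the extended substitution, which is what makes backtracking cheap in this representation.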
### 3.2 leanCoP in HOL Light

In order to use the OCaml version of leanCoP as a proof tactic and procedure in HOL Light, we first need to transform a HOL goal to a leanCoP problem, and when a proof has been found, we replay the proof in higher-order logic. To transform a problem in higher-order logic to first-order logic without equality, we mostly reuse the steps of the transformation already used by MESON, additionally ensuring that the conjecture is separated from the axioms, to preserve leanCoP's goal-directed approach. The transformation starts by assuming the duplicated copies of polymorphic theorems to match the existing assumptions. Next, the goal `axioms → conjecture` is transformed to `(axioms ∧ $) → (conjecture ∧ $)` with the help of a special symbol, which we define in HOL as: $ = T. Since the conjecture is refuted and the problem is converted to CNF, the only positive occurrence of $ is in a singleton clause, and the negative occurrences of $ are present in every clause originating from the conjecture. The CNF clauses directly correspond to the DNF used by the Prolog version of leanCoP: since no steps distinguish between the polarities of literals, the two can be used interchangeably in the proof procedure. We start the FOL algorithm by trying to prove ¬$.

Since the final leanCoP proof may include references to lemmas, the reconstruction cannot be performed in the same way as in MESON. There, a tree structure is used for finished proofs: each subgoal either closes the branch (the literal is a negation of a literal already present on the path) or is a branch extension with a (possibly empty) list of subgoals. In leanCoP, each subgoal can refer to previous subgoals, so the order of the subgoals becomes important. We therefore flatten the tree to a list, which is traversed in linear order to reconstruct the proof. We define a type of proof steps, one for each proof step in the calculus.
Each application of a lemma step or path step constructs a proof step with an appropriate first-order term. For an application of a tableau extension step we use Harrison’s contrapositive mechanism: we store a reference to the actual transformed HOL theorem whose conclusion is a disjunction, together with the number of the disjunct that got resolved.

```
type proof =
  | Lem of fol_atom
  | Pat of fol_atom
  | Res of fol_atom * (int * thm);;
```

A list of such proof steps, together with a final substitution and an initially empty list of already proved lemmas, is the complete input to the proof reconstruction procedure. The reconstruction procedure always looks at the first step on the list. First, a HOL term is constructed from the FOL term with the final substitution applied. This step is straightforward, as it amounts to reversing the mapping of variables and constants applied to transform the HOL CNF to FOL CNF, with new names invented for new variables. Next, we analyze the kind of the step. If the step is a path step, the theorem \( \text{tm} \vdash \text{tm} \) is returned, using the HOL ASSUME proof rule. If the step is a lemma step, the theorem whose conclusion is equal to \( \text{tm} \) is found on the list of lemmas and returned. Finally, if the proof step is an extension step, we first find the disjuncts of the HOL theorem in the proof step apart from the one that got matched. We then fold over this list, at every step calling the reconstruction procedure recursively with the remaining proof steps and the list of lemmas extended by the theorems proved in the previous calls. The result of the fold is the list of theorems \( \vdash \text{tm}_1, \vdash \text{tm}_2, \ldots, \vdash \text{tm}_n \), which gets matched with the contrapositive theorem \( \vdash \text{tm}_1 \land \ldots \land \text{tm}_n \rightarrow \text{tm}_0 \) using the HOL proof rule MATCH_MP to obtain the theorem \( \vdash \text{tm}_0 \).
Finally, by matching this theorem to the term \( \text{tm} \), the theorem \( \vdash \text{tm} \) is obtained. As the reconstruction procedure traverses the list, it produces the theorem that corresponds to the first goal, namely \( \vdash \neg\$ \). By unfolding the definition of \( \$ \), we obtain \( \vdash \bot \), which concludes the refutation proof.

### 3.3 Comparison to MESON

The simplified OCaml code of the core HOL Light MESON algorithm, as described in [5], is as follows:

```
let rec mexpand rules ancestors g cont (env,n,k) =
  if n < 0 then failwith "too deep" else
  try tryfind (fun a -> cont (unify_literals env (g,negate a),n,k)) ancestors
  with Failure _ -> tryfind (fun rule ->
    let (asm,c),k' = renamerule k rule in
    itlist (mexpand rules (g::ancestors)) asm cont
           (unify_literals env (g,c),n - length asm,k')) rules;;

let puremeson fm =
  let cls = simpcnf (specialize (pnf fm)) in
  let rules = itlist ((@) ** contrapositives) cls [] in
  deepen (fun n ->
    mexpand rules [] False (fun x -> x) (undefined,n,0); n) 0;;
```

The toplevel puremeson function proceeds by turning the input formula into clausal form, making contrapositives (rules) from the clauses, and then repeatedly calling the mexpand function with these rules, using iterative deepening over the number of nodes permitted in the proof tree. The mexpand function takes as its arguments the rules, an (initially empty) list of goal ancestors, the goal \( g \) to prove (initially False, which was also added to all-negative clauses when creating contrapositives), a continuation function \( \text{cont} \) for solving the rest of the subgoals (initially the identity), and a tuple consisting of the current trail \( \text{env} \), the number \( n \) of additional nodes in the proof tree permitted, and a counter \( k \) for variable renaming.
If the allowed node count is not negative, mexpand first tries to unify the current goal with a negated ancestor, followed by calling the current continuation (trying to solve the remaining goals) with the extended trail. If all such unification/continuation attempts fail (i.e., they throw Failure), an extension step is tried with all rules. This means that the head of a (renamed) rule is unified with the goal \( g \), the goal is appended to the ancestors, and mexpand is called (again using list folding with the subsequently modified trail and continuation) for all the assumptions of the rule, decreasing the allowed node count for the recursive calls. The full HOL Light version of MESON additionally uses a smarter (divide-and-conquer) policy for the size limit, checks the goal for being already among the ancestors, caches continuations, and uses simple indexing.

Below we enumerate some of the most important differences between the leanCoP algorithm and MESON and their implementations in HOL Light. Their practical effect is measured in Section 4.

- **leanCoP** computes and uses lemmas. The literals that correspond to closed branches are stored in a list, and each call to the main `prove` function additionally looks up the first literal of the current subgoal in the list of lemmas. This can cost a linear number of equality checks if no parts of the proof are reused, but it saves computations if there are repetitions.
- Both algorithms use iterative deepening; however, the depth and termination conditions are computed differently.
- **MESON** is implemented in continuation-passing style, so it can use an additional optimization: caching of continuations. If any continuations are repeated (at the same depth level), the subproof is not retried. Otten’s `leanCoP` uses a direct Prolog implementation which cannot (without further tricks) do such repetition elimination. The implementation of `leanCoP` in OCaml behaves the same.
- `leanCoP` may use the cut after the lemma step, the path step, or a successful branch closing in the extension step. Implementing this behaviour exactly in OCaml requires multiple `Cut` exceptions, one for each depth of the proof.
- The checking for repetitions is done in a coarser way in `MESON` than in `leanCoP`, allowing `leanCoP` to skip some work done by `MESON`.
- The search is started differently in `leanCoP` and in `MESON`: `leanCoP` starts with a conjecture clause, which likely contributes to its relatively good performance on larger problems.

### 4. Experimental Setup and Results

For the experiments we use HOL Light SVN version 199 (September 2014), Metis 2.3, and `leanCoP` 2.1. Unless noted otherwise, the systems are run on a 48-core server with AMD Opteron 6174 2.2 GHz CPUs, 320 GB RAM, and 0.5 MB L2 cache per CPU. Each problem is always assigned one CPU. The systems are compared on several benchmarks corresponding to different modes of use: goals coming from HOL Light itself, the general set of problems from the TPTP library, and the large-theory problems extracted from Mizar [28]. The first set of problems is important; however, since these problems come from HOL Light itself, they are likely naturally biased towards MESON. The Mizar problems come from the two MPTP-based benchmarks: the MPTP Challenge and MPTP2078 [1]. These are large-theory problems coming from a different ITP, hence they do not carry the implicit bias of the HOL Light problems, while coming from a more relevant application domain than the general TPTP problems.

For HOL Light, we evaluate (with a 5 second time limit) on two sets of problems. First, we look at 872 MESON-solved HOL Light goals that were made harder by removing splitting. In this scenario the tactic is applied to a subgoal of a proof, which is somewhat similar to the Judgement Day [3] evaluation used for Isabelle/Sledgehammer, except that here the goals are restricted to the solvable ones. Table 1 shows the results.
Second, we evaluate on the top-level goals (with their dependencies already minimized) that have been solved with the HOL Light system [11], i.e., by using the strongest available ATPs. This set is important because tactics such as MESON, Metis, and now also leanCoP can be tried as a first cheap method for reconstructing the proofs found by the stronger ATPs. The results are shown in Table 2. In both cases, the OCaml implementation of leanCoP performs best, improving on MESON’s performance in the first case by about 11%, and improving on Metis on the second set of problems by about 45%.

Table 3 shows the results of the evaluation on all 7036 FOF problems coming from TPTP 6.0.0, using a 10 second time limit. Here the difference to Metis is not so significant, probably because Metis implements ordered paramodulation, which is useful for many TPTP problems containing equality. The improvement over MESON is about 17%. Table 4 and Table 5 show the results on the small (heuristically minimized) and large MPTP Challenge problems. The best version of the OCaml implementation of leanCoP improves by 54% on Metis and by 90% on MESON on the small problems, and by 88% on Metis and 100% on MESON on the large problems. Here the goal-directedness of leanCoP is probably the main factor.

Finally, to get a comparison also with the best ATPs on a larger ITP-oriented benchmark (using different hardware), we have done a 10s evaluation of several systems on the newer MPTP2078 benchmark (used in the 2012 CASC@Turing competition); see Table 6 and Table 7. The difference to Metis and MESON on the small problems is still quite significant (40% improvement over MESON), while on the large problems the goal-directedness again shows even more (about 90% improvement).
While Vampire’s (version 2.6) SInE heuristic [6] helps a lot on the larger problems [29], the difference there between E (1.8) and our version of leanCoP is not so great as one could imagine given the several orders of magnitude difference in the size of their implementations.

### Table 1. Core HOL Light MESON calls without splitting (872 goals), 5sec per goal

<table>
<thead>
<tr><th>Prover</th><th>Theorem (%)</th><th>Unique</th></tr>
</thead>
<tbody>
<tr><td>mllleancop-cut-comp</td><td>759 (87.04)</td><td>2</td></tr>
<tr><td>mllleancop-nocut</td><td>759 (87.04)</td><td>2</td></tr>
<tr><td>pllleancop-cut</td><td>752 (86.23)</td><td>0</td></tr>
<tr><td>pllleancop-nocut</td><td>751 (86.12)</td><td>0</td></tr>
<tr><td>metis-23</td><td>708 (81.19)</td><td>26</td></tr>
<tr><td>meson</td><td>683 (78.32)</td><td>4</td></tr>
<tr><td>any</td><td></td><td>832 (95.41)</td></tr>
</tbody>
</table>

### Table 2. HOL Light dependencies (1556 goals, 5sec)

<table>
<thead>
<tr><th>Prover</th><th>Theorem (%)</th><th>Unique</th></tr>
</thead>
<tbody>
<tr><td>mllleancop-cut-comp</td><td>1178 (75.70)</td><td>12</td></tr>
<tr><td>mllleancop-nocut</td><td>1162 (74.67)</td><td>0</td></tr>
<tr><td>meson</td><td>1110 (71.33)</td><td>39</td></tr>
<tr><td>pllleancop-nocut</td><td>1085 (69.73)</td><td>0</td></tr>
<tr><td>pllleancop-cut</td><td>1084 (69.66)</td><td>0</td></tr>
<tr><td>metis-23</td><td>814 (52.31)</td><td>16</td></tr>
<tr><td>any</td><td></td><td>1260 (80.97)</td></tr>
</tbody>
</table>

### Table 3.
TPTP (7036 goals with at least one conjecture, 10sec)

<table>
<thead>
<tr><th>Prover</th><th>Theorem (%)</th><th>Unique</th></tr>
</thead>
<tbody>
<tr><td>pllleancop-cut-conj</td><td>1669 (23.72)</td><td>73</td></tr>
<tr><td>pllleancop-cut</td><td>1648 (23.42)</td><td>21</td></tr>
<tr><td>mllleancop-cut-conj</td><td>1622 (23.05)</td><td>34</td></tr>
<tr><td>mllleancop-cut</td><td>1571 (22.32)</td><td>9</td></tr>
<tr><td>meson</td><td>1562 (22.20)</td><td>261</td></tr>
<tr><td>mllleancop-nocut</td><td>1430 (20.32)</td><td>28</td></tr>
<tr><td>pllleancop-nocut</td><td>1358 (19.30)</td><td>25</td></tr>
<tr><td>mllleancop-nocut-conj</td><td>1158 (16.45)</td><td>3</td></tr>
<tr><td>any</td><td></td><td>2433 (34.57)</td></tr>
</tbody>
</table>

### Table 4. Bushy (small) MPTP Challenge problems (252 in total), 10sec

<table>
<thead>
<tr><th>Prover</th><th>Theorem (%)</th><th>Unique</th></tr>
</thead>
<tbody>
<tr><td>pllleancop-cut-conj</td><td>103 (40.87)</td><td>2</td></tr>
<tr><td>pllleancop-cut</td><td>99 (39.29)</td><td>8</td></tr>
<tr><td>mllleancop-cut-conj</td><td>91 (36.11)</td><td>2</td></tr>
<tr><td>mllleancop-cut</td><td>79 (31.35)</td><td>0</td></tr>
<tr><td>mllleancop-nocut</td><td>76 (30.16)</td><td>0</td></tr>
<tr><td>pllleancop-nocut</td><td>62 (24.60)</td><td>1</td></tr>
<tr><td>meson</td><td>59 (23.41)</td><td>3</td></tr>
<tr><td>meson-infer</td><td>48 (19.05)</td><td>0</td></tr>
<tr><td>any</td><td></td><td>124 (49.21)</td></tr>
</tbody>
</table>

### Table 5.
Chainy (large) MPTP Challenge problems (252 in total), 10sec

<table>
<thead>
<tr><th>Prover</th><th>Theorem (%)</th><th>Unique</th></tr>
</thead>
<tbody>
<tr><td>pllleancop-cut-conj</td><td>61 (24.21)</td><td>5</td></tr>
<tr><td>mllleancop-cut-conj</td><td>60 (23.81)</td><td>9</td></tr>
<tr><td>pllleancop-cut</td><td>57 (22.62)</td><td>4</td></tr>
<tr><td>mllleancop-nocut</td><td>47 (18.65)</td><td>0</td></tr>
<tr><td>mllleancop-cut</td><td>47 (18.65)</td><td>0</td></tr>
<tr><td>meson</td><td>32 (12.70)</td><td>3</td></tr>
<tr><td>meson-infer</td><td>30 (11.90)</td><td>0</td></tr>
<tr><td>pllleancop-nocut</td><td>26 (10.32)</td><td>0</td></tr>
<tr><td>any</td><td></td><td>83 (32.94)</td></tr>
</tbody>
</table>

### Table 6. Bushy (small) MPTP2078 problems (2078 in total), 10sec

<table>
<thead>
<tr><th>Prover</th><th>Theorem (%)</th></tr>
</thead>
<tbody>
<tr><td>Vampire</td><td>1198 (57.65)</td></tr>
<tr><td>e18</td><td>1022 (49.18)</td></tr>
<tr><td>mlleancop-cut-conj</td><td>613 (29.49)</td></tr>
<tr><td>pllean-cut-conj</td><td>597 (28.72)</td></tr>
<tr><td>metis-23</td><td>564 (27.14)</td></tr>
<tr><td>mlleancop-cut</td><td>559 (26.90)</td></tr>
<tr><td>pllean-cut</td><td>544 (26.17)</td></tr>
<tr><td>pllean-comp7</td><td>539 (25.93)</td></tr>
<tr><td>mlleancop-nocut</td><td>521 (25.07)</td></tr>
<tr><td>pllean-nc</td><td>454 (21.84)</td></tr>
<tr><td>meson-infer</td><td>438 (21.07)</td></tr>
<tr><td>any</td><td>1277 (61.45)</td></tr>
</tbody>
</table>

### Table 7.
Chainy (large) MPTP2078 problems (2078 in total), 10sec

<table>
<thead>
<tr><th>Prover</th><th>Theorem (%)</th></tr>
</thead>
<tbody>
<tr><td>Vampire</td><td>634 (30.51)</td></tr>
<tr><td>e18</td><td>317 (15.25)</td></tr>
<tr><td>mlleancop-cut-conj</td><td>243 (11.69)</td></tr>
<tr><td>pllean-cut-conj</td><td>196 (9.43)</td></tr>
<tr><td>pllean-cut</td><td>170 (8.18)</td></tr>
<tr><td>pllean-comp7</td><td>159 (7.65)</td></tr>
<tr><td>mlleancop-nocut</td><td>150 (7.21)</td></tr>
<tr><td>mlleancop-cut</td><td>146 (7.02)</td></tr>
<tr><td>meson-infer</td><td>145 (6.97)</td></tr>
<tr><td>metis-23</td><td>138 (6.64)</td></tr>
<tr><td>pllean-nc</td><td>126 (6.06)</td></tr>
<tr><td>any</td><td>693 (33.34)</td></tr>
</tbody>
</table>

### 5. Conclusion

We have implemented an OCaml version of the leanCoP compact connection prover, together with the reconstruction of its proofs inside HOL Light. This proof-reconstruction functionality can also be used to certify in HOL Light a TPTP proof produced by leanCoP, thus turning leanCoP into one of the few ATPs whose proofs enjoy LCF-style verification in one of the safest LCF-based systems. The performance of the OCaml version on the benchmarks is comparable to the Prolog version, while it always outperforms Metis and MESON, sometimes very significantly on the relevant ITP-related benchmarks.

We provide a HOL Light interface that is identical to the one offered by MESON, namely two tactics and a rule. LEANCOP_TAC and ASM_LEANCOP_TAC are given a list of helper theorems and then try to solve the given goal (or the goal together with the assumptions, respectively). The LEANCOP rule, given a list of helper theorems, acts as a conversion: given a term statement, it tries to prove a theorem whose conclusion is identical to the term. The benchmarks show that these are likely the strongest single-step proof-reconstructing first-order tactics available today in any ITP system.
### Acknowledgments

Supported by the Austrian Science Fund (FWF): P26201.

### References
Preserving User Data Privacy through the Development of an Android Solid Library

An Undergraduate Honors College Thesis in the Department of Computer Science and Computer Engineering, College of Engineering, University of Arkansas, Fayetteville, AR, May 2023

by Alexandria Lim

This thesis is available through the Computer Science and Computer Engineering Undergraduate Honors Theses collection at ScholarWorks@UARK (https://scholarworks.uark.edu/csceuht). Part of the Computational Engineering Commons, Data Storage Systems Commons, and the Digital Communications and Networking Commons.

Abstract

In today’s world, where any and all activity on the internet produces data, user data privacy and autonomy are not prioritized. Companies called data brokers are able to gather data elements of personal information numbering in the billions. This data can be anything from purchase history, credit card history, downloaded applications, and service subscriptions. This information can be analyzed and inferences drawn from the analysis, categorizing people into groups that range in sensitivity, from hobbies to race and income classes. Not only do these data brokers constantly overlook data privacy, but this mass amount of data also makes them extremely vulnerable to data breaches. To solve both of these problems of data privacy and security, one can adopt the Solid framework, which prioritizes user data autonomy and allows users to choose their own applications and services.
While there currently does not exist any technological support for Solid on Android, the objective of this thesis is to begin developing an Android Solid library that will encourage the adoption of the Solid framework.

ACKNOWLEDGEMENTS

I would like to first thank the University of Arkansas Honors College for supporting this research through the Honors College Research Grant, and the Honors College fellowship for funding my studies during my time here. I would like to thank my advisor Dr. Alexander Nelson for mentoring and guiding me through this work. I would also like to thank my thesis committee: Dr. Alexander Nelson, Dr. Pat Parkerson, and Dr. Dale Thompson. Finally, I would like to thank Zach Grider for being the graduate student mentor of the research topic during the project.

# TABLE OF CONTENTS

Abstract
Acknowledgements
Table of Contents
List of Figures
List of Tables
1 Introduction
2 Background
  2.1 Pilot Studies in Protecting Personal Telemetry
    2.1.1 Data Analysis of Personal Telemetry
    2.1.2 Fully Homomorphic Encryption and IoT Sensors
    2.1.3 Semantic Web Project Solid
  2.2 Background Information for the Solid Protocol
    2.2.1 Solid Protocol Terms
    2.2.2 Solid Data Exchange
    2.2.3 Solid Resources for Android
3 Implementation and Architecture
4 Results
5 Discussion
6 Conclusion
  6.1 Summary
  6.2 Future Work
Bibliography

LIST OF FIGURES

Figure 2.1: Total Activity Count
Figure 2.2: Sigmoid Function
Figure 2.3: Solid WebID
Figure 2.4: Solid OIDC Flow
Figure 3.1: List of Identity Providers
Figure 3.2: OIDC Configuration File
Figure 3.3: Android Manifest File
Figure 4.1: Inrupt Login Screen
Figure 4.2: Inrupt Login Success
Figure 4.3: Update File List
Figure 4.4: View File Activity
Figure 4.5: Image View Activity
Figure 4.6: Delete Activity
Figure 4.7: Create Activity
Figure 4.8: Create New File
Figure 4.9: Updated Files after Create Activity

LIST OF TABLES

Table 4.1: Statistics for Post Requests
Table 4.2: Statistics for Delete Requests

1 Introduction

“Who owns my data?” This question is complicated, and there is often no one definitive answer. Data brokers like Acxiom, CoreLogic, and Datalogix would argue that their company owns your data. Data brokers are companies that collect consumers’ personal information and resell or share that information with others, and they have come under scrutiny in recent years. There are several problematic findings in a 2014 report from the Federal Trade Commission about the practices of data brokers. First, data brokers collect data numbering in the billions “largely without consumers’ knowledge”. Then, those data brokers “combine and analyze data about consumers to make inferences about them”, creating intimate digital profiles of everyone who uses the internet. These inferences may range from “dog owner” to classifications of ethnicity and income levels such as “Urban Scramble”, which “include a high concentration of Latinos and African Americans with low incomes”, or “Married Sophisticates”, who are childless, upper-class “thirty-something couples” [1].

There are two main problems that arise from these practices. First, there is a data privacy issue. The general public is “largely unaware that data brokers are engaging in these practices”, and how this data is used “may be difficult to find and understand” [1]. Reportedly, “non-financial data” is analyzed to create “pseudo credit-scores/consumer-scores”, and in the worst-case scenario, “the credit score of an individual can severely affect one’s ability to secure a loan at a particular interest rate and a poor credit score could result in an individual being denied a loan or being refused a job offer” [2]. Second, there is a data security issue with the scale of the mass data aggregation in only a few locations. Data breaches are severe issues that are happening more frequently and across all industries.
According to the Privacy Journal, the biggest breach of personal data of the 21st century happened on September 7, 2017, at Equifax [3]. Almost 145 million US consumer accounts were affected, and records of “social security number, date of birth, full name, and driver license numbers” and information on “credit card numbers, salary history, and loans” were compromised [4]. Anthem, the “second-largest health insurer in the U.S.”, lost 78.8 million medical records. In May 2014, eBay experienced a cyberattack and “names, addresses, dates of birth, and encrypted passwords of all of its 145 million users” were exposed. JPMorgan Chase, which is the largest bank in America, also experienced a breach, but “no money was lost and no sensitive personal information compromised.” [3] Data brokers are able to do these practices due to the mass amount of data provided to them from companies such as Facebook, Google, or any mobile or web application that has access to user data. Data can be generated from any activity including “using a mobile device, shopping for a home or car, subscribing to a magazine, making a purchase at a store or through a catalog, browsing the Internet, responding to a survey in order to get a coupon, using social media, subscribing to online news sites, or entering a sweepstakes.” Many technology companies that users interact with “collect information about them and, in many instances, provide or sell that information to data brokers” [1]. From these practices it is clear that there are problems with the way that modern day technology handles user data. Companies should not be storing large amounts of user data due to the risk of data breaches. Companies should also not have access to as much data as they want because of the violation of user privacy. Companies should not be able to do whatever they wish with the user data such as selling it to data brokers whose practices were previously described. 
Users should be able to have more autonomy over their own data. However, balancing the need to preserve user data privacy while providing data for third parties to analyze and deliver value poses a significant challenge.

2 Background

2.1 Pilot Studies in Protecting Personal Telemetry

Three pilot studies were conducted that led to the formulation of this thesis.

2.1.1 Data Analysis of Personal Telemetry

Personal telemetry collects information that pertains to the physical actions or environment of a person. Frequent collectors of personal telemetry are applications that have access to data from a mobile smartphone’s accelerometer, GPS, and heart-rate sensors. There are many applications that perform these kinds of operations, such as Google Fit, Fitbit, and smartwatch applications. As mentioned in the introduction, the companies that own these applications are most likely selling this information to data brokers as well. It might seem that the raw sensor data could not be as directly incriminating as a Google search history or credit score. However, analyzing that personal telemetry data is likely to expose “contexts”. For example, by analyzing the user’s location data or the accelerometer on their phone, it can be possible to determine where the user has been as well as what activities they did over a long period of time, which are potentially sensitive pieces of information. To investigate how much information could be gained from analyzing raw personal telemetry data, a study was conducted in which various datasets of pregnant mothers and their daily activity were analyzed. A literature review of how to analyze accelerometer data was performed to find a consensus definition of “sedentary”. Analyzing 6 different sources led to the conclusion that “sedentary” activity means an activity count (minute-by-minute observations of recordings) of less than 100 counts per minute [5], [6], [7], [8], [9], [10].
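As a concrete illustration of that consensus threshold, a minute-by-minute series of activity counts can be labelled as follows (an illustrative Python sketch with hypothetical function names, not the study's actual analysis code):

```python
SEDENTARY_THRESHOLD = 100  # counts per minute, per the consensus above

def classify_minutes(counts_per_minute):
    """Label each minute-by-minute activity count as sedentary or active."""
    return ["sedentary" if c < SEDENTARY_THRESHOLD else "active"
            for c in counts_per_minute]

def percent_sedentary(counts_per_minute):
    """Fraction of observed minutes spent sedentary, as a percentage."""
    labels = classify_minutes(counts_per_minute)
    return 100.0 * labels.count("sedentary") / len(labels)
```

Even this simple rule, applied over a long recording, yields the kind of inactivity profile discussed next, which is why raw accelerometer streams are considered sensitive.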
From the dataset of the physical activity of pregnant mothers, knowledge can be gleaned of what these women might be doing depending on the time of day, how active they might be, and other information that could be considered potentially sensitive. As an example of analyzed data from this study, the percentage of zero readings over the total activity count in Figure 2.1 shows how inactive these women were. From this study, it can be inferred that even raw personal telemetry sensor data leaks potentially sensitive information when analyzed, and that there needs to be a way to analyze this data while keeping user data private.

2.1.2 Fully Homomorphic Encryption and IoT Sensors

One method for preserving data privacy is to keep total obscurity between the user and the third party that wishes to analyze the user’s data, by way of encryption. Traditional encryption obscures the true contents of a piece of data by converting the plaintext into a ciphertext, which is only readable if the secret key (in symmetric encryption) or the private key (in asymmetric encryption) is used. This fulfills the need to keep the contents of data private between trusted parties, but fails to provide untrusted parties a way to process the data’s contents without violating user privacy. However, there is a special type of encryption called fully homomorphic encryption that allows encrypted data to be processed, producing an encrypted answer that, when decrypted, matches the answer that would have been obtained by processing the unencrypted data directly [11]. So, theoretically, a user could encrypt all of their own data with fully homomorphic encryption and send it to be processed by a third party, who would process the received ciphertexts and return the post-computation ciphertexts, which the user could decrypt.
Thus, the second pilot study was an attempt to create a proof-of-concept IoT system that encrypts all data values using fully homomorphic encryption before processing, so as to preserve user privacy. The IoT system used the PySEAL library [12], an open-source library with functions to encode and encrypt plaintexts, perform arithmetic operations such as "multiply" and "add" on ciphertexts, and finally decrypt and decode ciphertexts to get the plaintext answers. Within this IoT system, there were multiple sensors that collected personal telemetry, such as a light and presence sensor, as well as a Raspberry Pi that served as the aggregation point. The sensors transmitted their data through MQTT to the Raspberry Pi, which encrypted the data values from the sensors and sent the ciphertexts to the "untrusted cloud", which in this system was a computer. The computer performed computations on the ciphertexts and sent the results back to the Raspberry Pi. The Raspberry Pi could decrypt the ciphertexts and, depending on the result of a sigmoid function that maps analog values to a value between 0 and 1, the sensors would perform an action. When attempting to evaluate the sigmoid function shown in Figure 2.2 on the sensors' data values, which involves raising values to the third and fifth powers, the ciphertext became too corrupted with "noise" to decrypt back into any usable result. Thus, devices such as smartwatches or smartphones probably could not handle the resource-heavy computations necessary for fully homomorphic encryption. Since fully homomorphic encryption's inception by Craig Gentry in 2009, work has been done to lessen the computational burden; for example, a method called bootstrapping resets the amount of noise when computing on ciphertexts [11]. However, as of now, fully homomorphic encryption is not a good fit for protecting the personal telemetry of users.
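Because FHE schemes only offer addition and multiplication, the sigmoid has to be replaced by a polynomial, which is where the third- and fifth-power terms (and hence the noise growth) come from. The sketch below uses the degree-5 Taylor approximation of the logistic sigmoid around 0 as an illustrative stand-in; the exact polynomial used in the pilot study is not stated in the text:

```java
// Degree-5 Taylor approximation of the logistic sigmoid around x = 0:
//   sigma(x) ~= 1/2 + x/4 - x^3/48 + x^5/480
// Only additions and multiplications are used, which is all an FHE
// scheme can evaluate on ciphertexts.
public class PolySigmoid {
    static double approxSigmoid(double x) {
        double x2 = x * x;
        double x3 = x2 * x;   // the cubing step mentioned in the text
        double x5 = x3 * x2;  // the fifth-power step mentioned in the text
        return 0.5 + x / 4.0 - x3 / 48.0 + x5 / 480.0;
    }

    static double trueSigmoid(double x) {
        return 1.0 / (1.0 + Math.exp(-x));
    }

    public static void main(String[] args) {
        // Compare approximation and true sigmoid near the origin, where
        // the Taylor series is accurate.
        for (double x : new double[]{-1.0, 0.0, 1.0}) {
            System.out.printf("x=%.1f approx=%.4f true=%.4f%n",
                    x, approxSigmoid(x), trueSigmoid(x));
        }
    }
}
```

Each ciphertext multiplication in such a polynomial adds noise; without bootstrapping, a depth-5 evaluation can exhaust the noise budget, which matches the corrupted-result behavior observed in the pilot study.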
#### 2.1.3 Semantic Web Project Solid

Lastly, the use of the Solid framework was considered. The Solid framework is an idea from Tim Berners-Lee, the inventor of the World Wide Web. Berners-Lee was concerned with the mass aggregation of data on the web by only a couple of tech giants and noted that users had to give away their data for the "perceived value" of the applications. To further develop his vision for an Internet that would prioritize decentralized information and user data privacy and sovereignty, Berners-Lee founded the open-source project Solid, and later founded a startup company called Inrupt. Inrupt's current pilot projects include partnerships with "Britain's National Health Service and with the government of Flanders, the Dutch-speaking region of Belgium" [13]. Within the Solid framework, every user has a personal pod. One can store any kind of data in a pod, and all data is interoperable, meaning that different applications can all work with the same data. The user can grant specific access to each piece of data in their pod, and access may be revoked at any time. This gives users control over their own data autonomy. Solid prioritizes the values of "an equitable, informed and interconnected society" and the development of a space where users "maintain their autonomy, control their data and privacy, and choose applications and services to fulfil their needs" [14].

### 2.2 Background Information for the Solid Protocol

This section details basic information about the Solid specification, drawn from the Solid Specification document [14].

#### 2.2.1 Solid Protocol Terms

Each data resource in a Solid pod can be represented as an RDF document with a specific URI (Uniform Resource Identifier). All of the data in the pods is represented in linked data format. Each person, organization, or piece of software is represented by a WebID. Each WebID is unique and is the primary identifier within the Solid ecosystem.
A WebID is an HTTP URI that resolves to an RDF document representation of the WebID profile, which shows the location of personal storage links and the authentication endpoints, as shown in Figure 2.3 [14].

```
@prefix foaf: <http://xmlns.com/foaf/0.1/>.
@prefix solid: <http://www.w3.org/ns/solid/terms#>.
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#>.

<https://id.inrupt.com/partyboybcb> a foaf:Agent;
    <http://www.w3.org/ns/pim/space#storage> <https://storage.inrupt.com/96cf53c1-c48d-414e-84d5-b0001339c2d2>;
    solid:oidcIssuer <https://login.inrupt.com>;
```

Figure 2.3: Solid WebID

#### 2.2.2 Solid Data Exchange

Solid uses OIDC as its form of authentication, and there exist OpenID Providers specifically for Solid that can issue the access tokens necessary to interact with the resources within Solid pods. These providers have their own configurations, documented under "/.well-known/openid-configuration". These configurations list several endpoints necessary for performing authentication. The detailed authentication flow is shown in Figure 2.4, taken from the Solid OIDC documentation [15]. Servers and clients of the Solid specification must use HTTP/1.1 and TLS connections. Once authenticated, the server and clients use HTTP requests to communicate. To create a resource, one must make a POST request to the storage location of the wanted file. Optionally, one can set a title for the file by setting a Slug header. In addition, one can define the content type of the message, which may be a text note or a picture. To read a resource, one must make a GET request to the storage location of the wanted file. One can set the "Accept" header to further specify what kind of resource is expected to be returned. To delete a resource, one must make a DELETE request to the storage location of the wanted file along with the specific file name that will be deleted.
This must be done carefully to make sure that only standalone resources are deleted and not directories that contain multiple resources. To edit access to a resource, one edits the resource's access control list (ACL). To do this, one must make a GET request to the storage location of the specific file and read the "acl" parameter of the Link header. To do the actual editing of the access control list, one must make a PATCH request and change the RDF values in the list, which is available at the location given in the "acl" header [14].

#### 2.2.3 Solid Resources for Android

There has been a lot of development of Solid applications in the JavaScript language. On the Solid Project website, there is an extensive list of JavaScript libraries for tasks such as authentication, login, and session management, as well as JavaScript libraries for querying and manipulating RDF-like resources on Solid servers. There are also web applications for purposes that range from note taking, movie tracking and recommendation sharing, and chatting, to managing files on a Solid pod. Surprising, however, is the lack of tools or libraries specifically for Android mobile development, and there are also not many Solid-compatible mobile applications [16]. This is concerning since, as previously discussed, a mobile application has access not only to all the data that a desktop application does, but also to a person's personal telemetry data. In addition, 70 percent of the global smartphone market share runs Android [17]. Unlike JavaScript developers, however, Android developers currently have no such support, so there is a need for a library specifically for Android developers. Thus, this project's end goal is an Android library that can reduce the time and effort of integrating with the Solid framework, which prioritizes user data autonomy.
## 3 Implementation and Architecture

To reduce the effort needed to develop mobile applications using the Solid Protocol, there should exist an Android Solid library that can perform the basic CRUD operations (create, read, update, and delete) when interfacing with the resources within a Solid pod. There were five steps to creating this library: creating the Solid pod, configuring authentication with the Solid pod, implementing the CRUD operations, demonstrating the library's capabilities, and refactoring the necessary code into a single library.

<table> <thead> <tr> <th>Provider</th> <th>Responsible for Domain Name and Terms</th> <th>Responsible for Hosting</th> <th>Hosting Location</th> <th>Solid Implementation</th> </tr> </thead> <tbody> <tr> <td>Inrupt Pod Spaces</td> <td>Inrupt, Inc.</td> <td>Amazon</td> <td>Germany</td> <td>ESS</td> </tr> <tr> <td>inrupt.net</td> <td>Inrupt, Inc.</td> <td>Amazon</td> <td>USA</td> <td>NSS</td> </tr> <tr> <td>solidcommunity.net</td> <td>Solid Project</td> <td>Digital Ocean</td> <td>UK</td> <td>NSS</td> </tr> <tr> <td>solidweb.org</td> <td>Solid Grassroots</td> <td>Hosteurope</td> <td>Germany</td> <td>NSS</td> </tr> <tr> <td>trinpod.us</td> <td>Graphmetrix, Inc.</td> <td>Amazon</td> <td>USA</td> <td>TrinPod</td> </tr> <tr> <td>use.id</td> <td>Digita</td> <td>DigitalOcean</td> <td>EU</td> <td>CSS</td> </tr> <tr> <td>solidweb.me</td> <td>Meisdata</td> <td>Hosteurope</td> <td>EU</td> <td>CSS</td> </tr> <tr> <td>Data Pod</td> <td>iGrant.io, Sweden</td> <td>RedPill Linpro, AWS, GCP</td> <td>EU</td> <td>NSS</td> </tr> </tbody> </table>

Figure 3.1: List of Identity Providers

The first task was to create the Solid pod to interface with. The Solid website's instructions for creating a pod [18] list several choices varying in hosting location, as seen in Figure 3.1. The identity provider Inrupt Pod Spaces, which uses Inrupt's new Enterprise Solid Server, was selected.
There was also an option to self-host a pod, but this route was not chosen since the majority of users would not use a self-hosted pod. The second task was to configure authentication with a Solid OpenID provider. To even begin to interface with Solid pods, authentication must be completed with the Solid Identity Provider. After authentication, an application receives access tokens with which it can access the resources within the Solid pods. To perform this key OIDC authentication on a mobile application, a mobile library is needed that is general enough to accommodate all kinds of Identity Providers, not just a single one such as OneLogin or Okta. After some research on the options available for performing OIDC authentication on Android, the most popular and well-developed library, AppAuth, an "Android client SDK for communicating with OAuth 2.0 and OIDC providers" [19], was chosen and configured to use the Solid Identity Provider Inrupt. To set Inrupt as the Identity Provider, the mobile application code must be changed. The configuration file shown in Figure 3.2 must contain the discovery URI. The redirect URI can be left at its default value, which was the AppAuth application. The client ID field was not necessary, since this application would not be registered with the Inrupt Identity Provider. The discovery URI is the link to the Identity Provider's OpenID configuration, a JSON file listing various endpoints, such as the authorization endpoint and registration endpoint, as well as the scopes available for the specific identity provider. Every Identity Provider, such as Okta or OneLogin, has such a configuration file. Then, the redirect scheme must also be defined for the Solid mobile application, as shown in Figure 3.3.
The third task was to implement the CRUD functions within the Android Solid Library.

Figure 3.2: OIDC Configuration File

```json
{
  "client_id": "",
  "redirect_uri": "net.openid.appauthdemo:/oauth2redirect",
  "end_session_redirect_uri": "net.openid.appauthdemo:/oauth2redirect",
  "authorization_scope": "openid webid offline_access",
  "authorization_endpoint_uri": "",
  "token_endpoint_uri": "",
  "registration_endpoint_uri": "",
  "user_info_endpoint_uri": "",
  "https_required": true
}
```

Figure 3.3: Android Manifest File

```xml
<activity android:name="net.openid.appauth.RedirectUriReceiverActivity"
    android:exported="true">
    <intent-filter>
        <action android:name="android.intent.action.VIEW"/>
        <category android:name="android.intent.category.DEFAULT"/>
        <category android:name="android.intent.category.BROWSABLE"/>
        <data android:scheme="https"
            android:host="appauth.demo-app.io"
            android:path="/oauth2redirect"/>
    </intent-filter>
</activity>
```

The Android Solid Library should enable users to create, read, and delete files in the Solid pod, as well as edit the users who can access files. To access the resources within a Solid pod, specific HTTP requests must be crafted using Java's HttpURLConnection class. All of these HTTP requests require the Authorization header with the Bearer access token. To create a resource, the HttpURLConnection method is "POST" and targets the future storage location of the file. The Slug header can be set to the desired file name in the Solid pod, and the content type header identifies the payload, which may be a text note or a picture; in Solid Photos, it should be a JPG image. The user can take a picture with their camera and upload it to their Solid pod at the push of a button, which listens for clicks and makes these HTTP requests. Upon a successful request, the response code should be 201, which means that "the request succeeded, and a new resource was created as a result" [20].
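The create operation just described can be sketched as follows. The request is only configured here, not sent; the storage URL and token are placeholders rather than real endpoints, and the helper name is my own rather than part of any Solid library:

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.net.HttpURLConnection;
import java.net.URL;

// Sketch of the create request: a POST to the pod's storage location with a
// Bearer token, a Slug header naming the new file, and a content type.
// No network traffic occurs until a body is written or a response is read.
public class SolidCreate {
    static HttpURLConnection buildCreateRequest(String storageUrl, String accessToken,
                                                String fileName, String contentType) {
        try {
            HttpURLConnection conn =
                    (HttpURLConnection) new URL(storageUrl).openConnection();
            conn.setRequestMethod("POST");
            conn.setRequestProperty("Authorization", "Bearer " + accessToken);
            conn.setRequestProperty("Slug", fileName);  // desired name inside the pod
            conn.setRequestProperty("Content-Type", contentType);
            conn.setDoOutput(true);                     // the request carries a body
            return conn;
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) {
        // Placeholder URL and token; a real call would use the pod's storage
        // location and an access token obtained via AppAuth.
        HttpURLConnection conn = buildCreateRequest(
                "https://storage.example.org/pod/", "ACCESS_TOKEN",
                "note.txt", "text/plain");
        System.out.println(conn.getRequestMethod()); // POST
        // Writing the request body and checking for response code 201 would
        // complete the create operation.
    }
}
```

On Android, such a call would run off the main thread (for example in a background executor), since network operations are not allowed on the UI thread.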
To read a resource, the HttpURLConnection method is "GET" and targets the storage location of the file. In Solid Photos, an image is displayed by decoding the response stream of the request into a bitmap, which is then rendered on the phone. The request is successful if HTTP code 200 is returned. To delete a resource, the HttpURLConnection method is "DELETE" and targets the storage location of the file. The response code from the request should be 204, which means that the request was successful but "there is no content to send for this request" [20]. The edit function for access to a file is currently not implemented, partly because there is at present no native way to parse RDF in Android. Different ways to natively parse RDF on Android were explored, but none worked. The first option was Rio, an Eclipse library that can perform RDF parsing and queries; it did not work because its DatatypeFactory could not be properly instantiated. The second option was the Apache Jena library; however, it was too large and overwhelmed the limited memory available to mobile applications. Lastly, the jena-android library, written eight years ago and apparently no longer maintained, was tried, but it was likewise too large to fit in the limited memory. In the end, it was concluded that there was no current way to natively parse RDF in Android, and other solutions needed to be explored. The solution was to send the data received from the WebID GET request to a separate web server on the UARK network, which would process the RDF and send back storage links and a list of all the files in a pod. Of course, using a web server is not ideal; the mobile application should be able to parse all the RDF natively to ensure that there is no data leakage between the mobile application and the web server.
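Absent a full RDF parser, one fragile on-device fallback is a regular expression over the Turtle text of the WebID profile to pull out the storage link. This is a sketch only: the input below is modeled on the layout of Figure 2.3, it handles nothing beyond that simple shape, and it is no substitute for proper RDF parsing:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Fragile fallback for extracting the storage URL from a WebID profile
// without a real RDF parser: a regex over the raw Turtle text. Only the
// simple layout shown in Figure 2.3 is handled.
public class StorageUrlExtractor {
    private static final Pattern STORAGE = Pattern.compile(
            "<http://www\\.w3\\.org/ns/pim/space#storage>\\s*<([^>]+)>");

    static String extractStorageUrl(String turtle) {
        Matcher m = STORAGE.matcher(turtle);
        return m.find() ? m.group(1) : null;
    }

    public static void main(String[] args) {
        // Invented profile snippet following the shape of Figure 2.3.
        String profile = "<https://id.example/me> a foaf:Agent;\n"
                + "  <http://www.w3.org/ns/pim/space#storage>\n"
                + "    <https://storage.example.com/abc123>;\n";
        System.out.println(extractStorageUrl(profile));
        // https://storage.example.com/abc123
    }
}
```

Regex matching breaks as soon as the server serializes the same triples differently (prefixes, ordering, whitespace), which is exactly why proper RDF parsing remains the right long-term fix.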
The fourth task was to create a Solid-compliant photo gallery mobile application. The Solid-compliant photo gallery, or Solid Photos, should let a user view photos in their Solid pod (read operation), delete photos (delete operation), upload photos from storage or take a photo with the camera (create operation), and edit who can access a photo (update operation). In addition, some functions needed to be implemented for the Android application itself, such as a function to get the list of current files within the Solid pod and a method to get the storage URL of a user based on their unique WebID. The last step was to separate the code for the CRUD operations into its own library. This step is still a work in progress, since not all of the CRUD operations are implemented. In conclusion, there are six different functions in the Solid Photos application that could later be moved to a separate Android Solid library.

- Create resource – uploads a photo to a Solid pod through Solid Photos
- Read resource – views a photo on a Solid pod through Solid Photos
- Update resource – edits who can access photos in a Solid pod through Solid Photos
- Delete resource – removes a photo from a Solid pod through Solid Photos
- Get storage URL – returns the storage URL of a pod user
- Get file list – returns the list of files of a pod user

## 4 Results

This section demonstrates the capabilities of the current Solid Photos application, a Solid-compatible photo gallery application. Then, the time needed for the POST and DELETE requests is measured and a statistical analysis is performed. The application can view and delete photos in a Solid pod. However, there are issues with uploading a photo to the Solid pod, so the create operation is demonstrated by uploading text instead.
The edit function was never completed due to the roadblock of there being no way to parse RDF and write SPARQL queries in Android. First, through the AppAuth login feature configured with the Identity Provider Inrupt, logging in is possible. Figure 4.1 shows the login screen and Figure 4.2 shows a successful login attempt. Once logged in, users see the starting screen. The "Update List" button displays a list of all the files currently in the Solid pod, as shown in Figure 4.3. Each "View" button is used to view a resource, as shown in Figure 4.4. This application was meant to be a photo gallery application for viewing pictures, but of course a multitude of file types can be stored in the Solid pod. To demonstrate the viewing function, the View button for the "bunny.jfif" file was selected. The response of the request is a bitmap that is shown in the ImageView, as shown in Figure 4.5. Each Delete button deletes the file on the same line; Figure 4.6 shows this for the file named "e5da94f5-c702-4e76-8c66-36d7b14e160c". Next, the upload activity is demonstrated. It would have been ideal to upload a photo taken by the camera, but there were issues with producing an upload the Solid pod would accept: the uploaded image arrived corrupted. Thus, instead of uploading images, the app uploads text files to demonstrate the create function, as shown in Figure 4.7. After the file is created in Figure 4.8, the updated file list is shown in Figure 4.9. Next, several metrics were taken of the POST and DELETE requests. To test the POST requests, text files of varying sizes were uploaded to the Solid pod. Generally, as file size increased, the average time to complete the requests increased, and the variance of the times increased as well.
This makes sense: as file size increases, more time is needed to send the file to the Solid pod. The DELETE requests were tested right after the POST requests by deleting all the files created during the POST tests. The delete times stay about the same across file sizes because nothing needs to be sent back to the mobile application except the code 204 indicating that the delete request was successful. The first request's time was always discarded, because the first request takes much longer due to connection setup and handshaking; later requests are shorter and more consistent in time. After each request, the HTTP connection was disconnected and the requesting thread was put to sleep, to ensure that connections were not being reused and skewing the resulting times. These statistics were taken from a dataset of 100 trial runs each of POST requests and DELETE requests.

Figure 4.1: Inrupt Login Screen

Figure 4.2: Inrupt Login Success

Figure 4.3: Update File List

Figure 4.4: View File Activity

Figure 4.5: Image View Activity

Are you sure you want to delete the resource?
/96cf53c1-c48d-414e-84d5-b0001339c2d2/e5da94f5-c702-4e76-8c66-36d7b14e160c

Figure 4.6: Delete Activity

Figure 4.7: Create Activity

Figure 4.8: Create New File

Figure 4.9: Updated Files after Create Activity

### Table 4.1: Statistics for Post Requests

<table> <thead> <tr> <th>File Sizes</th> <th>Avg Times</th> <th>Standard Dev</th> </tr> </thead> <tbody> <tr> <td>4KB</td> <td>293.84 ms</td> <td>36.72</td> </tr> <tr> <td>8KB</td> <td>306.9 ms</td> <td>35.13</td> </tr> <tr> <td>16KB</td> <td>419.91 ms</td> <td>71.72</td> </tr> <tr> <td>32KB</td> <td>575.62 ms</td> <td>41.27</td> </tr> <tr> <td>64KB</td> <td>791.64 ms</td> <td>92.51</td> </tr> <tr> <td>128KB</td> <td>1173.49 ms</td> <td>128.78</td> </tr> <tr> <td>256KB</td> <td>1747.91 ms</td> <td>151.21</td> </tr> <tr> <td>512KB</td> <td>2503.85 ms</td> <td>183.15</td> </tr> <tr> <td>1MB</td> <td>3417 ms</td> <td>258.53</td> </tr> </tbody> </table>

### Table 4.2: Statistics for Delete Requests

<table> <thead> <tr> <th>File Sizes</th> <th>Avg Times</th> <th>Standard Dev</th> </tr> </thead> <tbody> <tr> <td>4KB</td> <td>218.18 ms</td> <td>45.66</td> </tr> <tr> <td>8KB</td> <td>204.49 ms</td> <td>15.84</td> </tr> <tr> <td>16KB</td> <td>217.21 ms</td> <td>34.93</td> </tr> <tr> <td>32KB</td> <td>214.99 ms</td> <td>19.06</td> </tr> <tr> <td>64KB</td> <td>221.22 ms</td> <td>25.53</td> </tr> <tr> <td>128KB</td> <td>212.55 ms</td> <td>18.36</td> </tr> <tr> <td>256KB</td> <td>216.84 ms</td> <td>24.21</td> </tr> <tr> <td>512KB</td> <td>212.51 ms</td> <td>37.06</td> </tr> <tr> <td>1MB</td> <td>222.18 ms</td> <td>30.92</td> </tr> </tbody> </table>

## 5 Discussion

While encouraging the adoption of the Solid ideals for the modern web is a step in the right direction of placing ownership of data back into users' hands, issues remain to be resolved. Firstly, when a user gives a third party access to their data, nothing prevents the company from making a copy of the data.
If the user refuses to give away their data, then the company cannot process it. This is a legitimate problem, but the principles of Solid will at least prevent all of the user's data from being stored outright with the third-party company. Fully homomorphic encryption could still be a solution to this problem, ensuring total obscurity of the data between the user and the third-party company. Secondly, while users will be more aware of the data they give away to companies, there will be a massive mental strain on users if they have to consent to every bit of information that applications request. Further experimentation is needed to ensure that users are fully aware of what data is being requested without being overwhelmed. Thirdly, Solid is not a mainstream framework yet. An influential technology company like Google or Amazon will have to make the conscious decision to transition toward the Solid framework, which prioritizes user data autonomy; other companies would then have to respond with their own adoption of more user-data-autonomy-friendly policies or present their own solutions. Alternatively, a law like the General Data Protection Regulation (GDPR) could force companies to comply. If these questions can be solved, however, honoring user data autonomy would be much easier for mobile applications, which are still privy to a large amount of sensitive personal data. Recently, a Java Solid client library was written by the Inrupt community [21]. This shows that there is interest in encouraging development with the Solid framework in multiple languages, and especially in a language useful to Android; however, there still exist no Android-specific libraries.

## 6 Conclusion

### 6.1 Summary

User data privacy and autonomy are not prioritized enough in today's technological systems.
Data brokers are able to gather massive amounts of personal information to build digital profiles, which can be used to make decisions that affect people in their day-to-day lives. The Solid framework can be adopted within all types of applications to ensure that users have autonomy over their own data. While there currently exists almost no technological support for Android, the objective of this thesis was to start developing an Android Solid library that will encourage the adoption of Solid and its ideals of user data autonomy.

### 6.2 Future Work

Further software development is needed to natively parse RDF on Android. The lack of resources for parsing RDF on Android was a major obstacle to completing the Android Solid library, which is surprising considering that RDF is the industry standard for representing linked data on the internet. Realistically, the mobile application should not have to ask a separate server to parse the RDF of HTTP responses, as the current solution of this thesis does. Editing user access to the different resources in a Solid pod, which is the most important function of the Solid pod, could not be implemented in time; this is one of the highest-priority tasks for future work on this mobile application and Android library. Additionally, the Java Android library could be converted to Kotlin Multiplatform Mobile, which would allow development for both Android and iOS.

Bibliography

[12] Lab41, "PySEAL: a fork of Microsoft Research's homomorphic encryption implementation, the Simple Encrypted Arithmetic Library (SEAL), wrapping the SEAL build in a Docker container and providing Python APIs to the encryption library." [Online]. Available: https://github.com/Lab41/PySEAL
Adaptive Page Replacement Based on Memory Reference Behavior

Gideon Glass and Pei Cao
Computer Sciences Department, University of Wisconsin-Madison
{gid,cao}@cs.wisc.edu
Technical Report #1338, February 1997

Abstract

As disk performance continues to lag behind that of memory systems and processors, virtual memory management becomes increasingly important for overall system performance. In this paper we study the page reference behavior of a collection of memory-intensive applications, and propose a new virtual memory page replacement algorithm, SEQ. SEQ detects long sequences of page faults and applies most-recently-used replacement to those sequences. Simulations show that for a large class of applications, SEQ performs close to the optimal replacement algorithm, and significantly better than Least-Recently-Used (LRU). In addition, SEQ performs similarly to LRU for applications that do not exhibit sequential faulting.

1 Introduction

As the performance gap between memory systems and disks widens, the impact of memory management on system performance grows. Although buying more memory would always alleviate the poor performance of current virtual memory (VM) systems, operating system designers should attempt to improve VM design and policies so that users receive the best attainable performance, regardless of system configuration and budget. In this study we collected sixteen memory-intensive applications and studied their page reference behavior. Seven applications are from the SPEC95 suite; the rest are "big-memory" applications including integer-intensive programs (e.g., databases) and scientific computations. We found that the applications have very different page reference patterns: some are truly memory intensive, referencing many pages in short time intervals, while others have clear reference patterns that can be exploited for better replacement decisions.
We simulated the Least-Recently-Used (LRU) page replacement algorithm and the optimal offline algorithm (Belady's OPT algorithm [2]) for these applications under varying main memory sizes. For the applications that have no visible, large-scale access patterns, both LRU and OPT show gradual, continuous reduction in page fault rate as memory size increases. LRU appears to be a good replacement policy for such programs. For applications that have clear access patterns, however, LRU often performs poorly: it frequently exhibits plateau behavior, where increasing the memory size does not reduce the fault rate until the whole program fits into memory. For these programs OPT obtains at least linear reduction in fault rate as memory size increases.

Based on LRU's observed poor behavior, we propose a new replacement algorithm, SEQ. SEQ normally performs LRU replacement; in addition, it monitors page faults as they occur, detecting long sequences of faults to contiguous virtual addresses. When such sequences are found, SEQ performs a pseudo most-recently-used (MRU) replacement on the sequences, attempting to imitate what OPT would do. SEQ often corrects the poor performance (plateau behavior) of LRU for applications that have sequential behavior, yet it performs the same as LRU for other types of applications.

We also conducted a preliminary study of two global page replacement algorithms: global LRU replacement, and SEQ extended to be a global replacement algorithm. We found that SEQ performs similarly to or better than global LRU on mixes of various application types. Our results suggest that SEQ may be a good algorithm suitable for implementation in a real OS kernel VM system.

2 Applications and Traces

The applications we studied are described in Table 1. Shown for each program is the number of instructions executed by the traced program and the amount of total memory used by the program. (Other columns in the table will be described further below.)
2.1 Trace Methodology

We collected memory reference traces using Shade [8], an instruction-level trace generator for the SPARC architecture. All programs ran on machines running the Solaris 2.4 operating system. Because of the length of our traces, recording all memory references individually would result in unmanageably large trace files. Instead, we record "IN" and "OUT" records. We divide program instruction time into fixed-length intervals (usually 1,000,000 instructions). At the end of every interval, for every page that was referenced in the current interval but was not referenced in the previous interval, an IN record is generated and timestamped with the actual time (in terms of instructions executed) of the first reference to that page. Similarly, for every page that was accessed in the previous interval but was not accessed in the current interval, an OUT record is generated with the timestamp of the instruction making the last reference to the page. IN and OUT records in a trace are written out sorted by their timestamps. We used a uniform page size of 4KB throughout this study.

Program   Description                              Length      Memory     Executable  Min. simulatable
                                                   (M instr.)  used (KB)  size (KB)   memory size (KB)
applu     Solve 5 coupled parabolic/elliptic PDEs  1068        14524      136         2432
blizzard  Binary rewriting tool for software DSM   2122        15632      1153        5332
coral*    Deductive database evaluating query      4327        20284      940         7084
es*       Microstructure electrostatics            71003       104488     56          696
fgm*      Finite growth model                      35210       121508     112         10052
gcc       Optimizing C compiler                    1371        3936       1599        1900
gnuplot   PostScript graph generation              4940        62516      602         1552
jpeg      Image conversion into JPEG format        42651       8260       152         1112
m88ksim*  Microprocessor cycle-level simulator     10020       19352      165         1964
murphi    Protocol verifier                        1019        9380       238         2132
perl*     Interpreted scripting language           18980       39344      569         9568
swim      Shallow water simulation                 438         15016      56          6932
trygtsl   Tridiagonal matrix calculation           377         69688      26          2444
turb3d    Turbulence simulation                    17989       26052      71          7720
vortex    Main memory database                     2507        9676       600         3024
wave5     Plasma simulation                        3774        28700      511         3652

Table 1: Benchmark programs measured, with execution duration and memory address space size. * indicates runs which were terminated before they completed. Also shown are minimum simulatable memory sizes (discussed in Section 2.1) and the size of the program binary.

The IN and OUT records associated with a page mark the beginning and end of a period when the page is referenced. The page is accessed at least once during each interval in this period; exactly how many times and exactly when each reference occurs is unknown. However, a page is definitely not accessed in the time between an OUT record and the next IN record for that page. This trace format not only is compact but also allows accurate simulation of several replacement algorithms for sufficiently large memory sizes. At any point in a trace, define pages that are between an IN record and an OUT record as "ACTIVE", and pages that are between an OUT record and an IN record as "IDLE". Then the OPT algorithm, which replaces the page that is referenced furthest in the future, can be simulated by replacing the IDLE page whose next IN record is both furthest in the future and at least two intervals ahead of the current interval. Such a page is indeed the furthest referenced page because any ACTIVE page will be accessed again either in the current interval or in the next interval. By similar reasoning, LRU can be simulated by replacing the IDLE page whose previous OUT record is the earliest among all IDLE pages, and is either two intervals before the current interval or before the IN records of all ACTIVE pages.
These constraints ensure that the page is indeed the least-recently-used page (since any ACTIVE page must have been accessed in the current interval or in the last interval). A limitation of our method is that it can only simulate memory sizes above a certain threshold. If the memory size is too small, the simulation will not be able to find an IDLE page satisfying the above criteria. The minimum simulatable memory sizes for each application are listed in Table 1. (For SEQ we used the same minimum as LRU, since SEQ defaults to LRU replacement.)

2.2 Application Page Reference Behavior

We can plot space-time graphs of references from the traces described above. For each execution interval (a point on the x axis) we plot a point for each page referenced in that interval. The y axis values are relative page locations within the program's address space (since the application's address space is usually sparse and contains many unused regions, we leave out the address space holes and number the used pages from low addresses to high addresses on the y axis). On the following pages are space-time plots for each of our applications.

Observing the space-time graphs, we found that the applications fall roughly into three categories. The first, which includes coral, murphi, m88ksim and vortex, is truly memory intensive: large numbers of pages are accessed during each execution interval, and there are no clearly visible patterns within the vast dark areas. The second category, which includes blizzard, gcc, and perl, is also memory intensive, but has patterns at a small scale (for example, in gcc, the traversal of pages in the 0.5MB-2.25MB range follows a certain pattern). (These kinds of small-scale patterns might be exploited for techniques such as prefetching, but we have not investigated prefetching in this paper.) The third category, consisting of the rest of the applications, shows clearly exploitable, large-scale reference patterns.
Ranges of address space are traversed in the same pattern repeatedly. The applications seem to be array-based, though some of them are written in C (fgm and gnuplot). Some programs (jpeg, applu, and trygtsl) traverse ranges of memory in one direction and then change direction, but most programs simply go in one direction. The number of sequentially-traversed regions also varies, with swim covering about sixteen and other programs (es, gnuplot) covering only one large region.

These classes of behavior remind us of the following comment by Rob Pike: "The following data structures are a complete list for almost all practical programs: array, linked list, hash table, binary tree." [23] The statement clearly has some truth to it: most applications exhibiting regular reference patterns are array-based; vortex, m88ksim, murphi, coral, and perl are apparently either making heavy use of hash tables or traversing tree structures; gcc and perl (to some extent) seem to use linked lists heavily. From the virtual memory system's point of view, array-based applications would be the easiest to handle, while hash tables are the hardest.

One interesting observation from the space-time graphs is that, for the programs we investigated, any given program does not change its memory behavior radically; there are not many distinct phases of behavior. (Some of the programs repeat various patterns, turb3d for example, but there is not a clear start-to-finish progression of different phases.) Program behavior generally varies much more between different programs than it does between any two phases of a single program.

2.3 Performance of LRU and OPT

Figures 1 and 2 show page faults per one million instructions executed for each application as its memory spans the range from the minimum simulatable size to the total number of pages the application uses. The three curves in each graph are LRU, OPT, and the new algorithm SEQ that we will describe in the next section.
We do not include startup faults in the figures, because most of these faults are due to initialization of processes' address space, and are usually serviced by zero-filling a page, not by invoking a disk I/O. (The number of pages that must be demand-paged from disk can be estimated by dividing the "program size" column in Table 1 by the 4KB page size.)

The results show that for the first and second categories of applications, which are memory intensive and do not have strong patterns, LRU performs similarly to OPT, though LRU suffers about twice as many page faults on average. For these application classes, the fault rate under LRU drops continuously as more memory becomes available; the rate of improvement is similar to that under OPT. The improvement appears to be super-linear for memory sizes less than half of the total memory needed by the program (i.e., doubling the amount of memory more than halves the number of page faults), and the improvement slows down after that point.

The situation is completely different for the applications in the third category (programs with highly regular sequential access patterns). LRU performs much worse than OPT, generating up to five to ten times more page faults. LRU frequently gives no improvement until the memory size reaches a certain threshold, resulting in "staircase" graphs. This gives the appearance that the applications have certain working sets that, once in memory, will reduce the fault rate significantly. In fact, OPT is always able to reduce the fault rate continuously; LRU simply fails to reduce the fault rate until it reaches certain memory sizes. The problem is that these applications (gnuplot, for example) are looping over large address space ranges; LRU replaces pages starting at the beginning of the address range (since those are oldest), replacing pages a constant distance behind the location where the program is accessing memory.
When the program begins another iteration at the bottom of the range, LRU pages out the top. All pages in the range must be paged in on every iteration, resulting in the worst possible performance. This "LRU flooding" phenomenon is the primary motivation for our SEQ algorithm, described in Section 4.

3 Inter-fault Times

In addition to observing fault rates for varying memory sizes, we also observed mean inter-fault times for varying memory sizes. Although the mean inter-fault time is simply the inverse of the fault rate, we found it instructive to examine both types of graphs. The mean inter-fault time graphs help illuminate the right end of the fault rate curves, where the curves approach zero. Figures 3 and 4 graph mean inter-fault times for varying memory sizes for the three replacement policies OPT, LRU, and SEQ. The y-axis is scaled from zero to ten million instructions between faults. (On modern computer systems, a program taking page faults less frequently than one per ten million instructions will suffer very little slowdown from paging.)

Denning illustrates Working-Set page replacement and its rationale by means of mean inter-fault time plots for a hypothetical program [10]. His plots contain a "knee" (a global maximum of f(x)/x, i.e., mean inter-fault time divided by memory size). Our plots show little evidence of knees; the programs whose curves do contain knees perform poorly under LRU (e.g., fgm and turb3d). Since the OPT curves all slope gracefully upwards, as one would expect from the OPT fault-rate curves, we conclude that knee behavior in inter-fault-time curves is more likely an artifact of LRU replacement than a sign of inherent program memory demand.

4 SEQ Replacement Algorithm

The intuition behind the SEQ replacement algorithm is to detect long sequences of page faults and apply MRU replacement to such sequences. The goal is to avoid LRU flooding, which occurs when a program accesses a large address space range sequentially.
If a program accesses an address range only once, LRU retains pages that will never be accessed again; if the program accesses the address range multiple times and the range is larger than physical memory, LRU would page out the pages in the order in which they are accessed and thus perform poorly, as described above. If no sequences are detected, SEQ performs LRU replacement.

Figure 1: Performance of OPT, SEQ and LRU. For es and gnuplot, the SEQ curve almost overlaps the OPT curve. For coral and gcc, the SEQ curve overlaps the LRU curve.

Figure 2: Performance of OPT, SEQ and LRU. For murphi, the SEQ curve overlaps the LRU curve. For vortex, the SEQ curve mostly overlaps the LRU curve.

Figure 3: Mean inter-fault times for OPT, SEQ and LRU. For coral and gcc, the SEQ and LRU curves overlap. The apparent "knee" in the fgm curve only appears for LRU replacement, under which fgm performs poorly.

Figure 4: Mean inter-fault times for OPT, SEQ and LRU. For swim, the SEQ curve appears to terminate early because the next plot point lies above 18 on the y-axis. Note the knees in the LRU curve for turb3d; for SEQ and OPT the knees are not present.

4.1 Design

There are four main components in SEQ's design:

1. What is a "sequence"? A sequence is a series of page faults to consecutive virtual addresses, growing in one direction (increasing addresses or decreasing addresses) with no other faults to pages in the middle of the series. (We refer to the most recently added page, the page at the end of the sequence in the direction of growth, as the head of the sequence.)

2. When memory is low and a page must be paged out, from which sequence is a page replaced? SEQ considers only sequences of length greater than L (currently 20 pages); it examines the time of the Nth (currently N = 5) most recent fault in each sequence, and chooses the sequence whose fault is most recent.

3. Which page from the chosen sequence is replaced?
SEQ chooses the first in-memory page that is M (currently 20) or more pages from the head of the sequence.

4. What happens to a sequence if a page fault occurs in the middle of the address range of the sequence? SEQ splits the sequence into two sequences, one ranging from the beginning of the sequence to the page immediately preceding the faulted page, and the other consisting of the faulted page alone.

The choice of values for L, N and M is discussed in Section 4.2.

SEQ detects replaceable sequences by observing page faults (not page references) and associates them based on adjacent virtual addresses. SEQ maintains a list of sequences, recording for each sequence the tuple <low.end, high.end, dir>. The tuple indicates a sequence ranging from virtual address low.end to virtual address high.end, faulting (as time increases) in the direction dir (either up or down). When a page fault on page pf occurs, SEQ examines sequences adjacent to pf. If the new fault extends a sequence (i.e., pf = high.end + 1 and dir = up, or pf = low.end - 1 and dir = down), the sequence's low.end or high.end is changed to include the current fault. If pf falls in the middle of a sequence (i.e., low.end <= pf <= high.end), then the sequence is split into two: one being <low.end, pf - 1, dir> if dir = up, or <pf + 1, high.end, dir> if dir = down, and the other consisting of the new fault only (i.e., <pf, pf, nil>, nil meaning the direction cannot be determined yet). If pf neither extends nor overlaps any existing sequence, a new sequence <pf, pf, nil> is created. If pf can extend two existing sequences, SEQ deletes the older of the two (the one whose last fault is earlier) and extends the newer one.
In addition, if extending a sequence would lead to overlap with another sequence, the sequence that would be overlapped is deleted. SEQ limits the number of sequences that it tracks (currently the limit is 200). When adding a new sequence would exceed the limit, SEQ first deletes the oldest sequence (by time of the most recent fault to that sequence) of length less than L. (If all sequences are longer than L, SEQ deletes the oldest sequence with length <= 2 x L, and so on.)

When a replacement page must be chosen, SEQ examines all sequences of length >= L, and tries to pick the sequence that faulted most recently. The heuristic we use is to sort these sequences by the faulting time of their Nth most recent fault, and choose the one with the most recent fault time. Currently N = 5. If no sequence with length >= L exists, the default LRU replacement is used. Once a sequence is picked, SEQ is constrained not to replace pages closer than M pages to the sequence head. Starting from the Mth page away from the head, SEQ skips any on-disk pages, choosing the first in-memory page it finds. If it cannot find an in-memory page in this sequence, SEQ examines the next sequence as determined above. For efficiency, SEQ keeps track of the range of on-disk pages in each sequence, so that the search for a replacement page can skip many on-disk pages in one step.

To illustrate how SEQ works in practice, we consider a simple example. The example corresponds to the simplest case in which SEQ is effective; the behavior of our benchmark es is similar to the behavior in the example, and graphs of es's faults, and of SEQ's chosen replacement pages, will follow. Suppose a program makes several sequential passes over a single, large memory region (larger than the memory size), going from the low end of the region to the high end of the region as time passes.
When the memory region is first accessed, each page will be faulted into the address space in turn, and a single, large sequence will be created. Midway through the first pass, memory will have filled up (the lower portion of the address range occupies memory). Because there is a sequence from which to replace pages, SEQ will page out the newly-faulted pages behind the head of the sequence, which continues to grow upwards as the program progresses. The result after the program's first pass over the address range is that the bottom half of the sequence remains in memory and the top portion has been paged out.

On the next iteration, the region's bottom half will be in memory and hence no faults occur for those pages. However, as soon as the program reaches the point in space where memory filled up on the first iteration (and where SEQ started replacing pages), faulting will again commence. The very first fault will have the following effect: since the fault is somewhere in the middle of the single, long sequence that existed up to this time, that fault will split the sequence. The bottom half of the old sequence will remain, and a new sequence beginning at the first faulted page will be created. As the program continues upwards and more pages fault in, the newer sequence will grow, extending upwards with each new faulted page. In a short time the new sequence will have grown to length L and, because its Nth fault is more recent than the Nth fault of the original (bottom-half) sequence, SEQ will start to replace pages from the new (upper) sequence. Again the bottom portion of the address range remains in memory and the upper part is paged out. On successive iterations, no faults will occur until the program references memory midway through the address range; as faults do occur, a sequence will be built, and SEQ will replace pages from the upper memory region once again.
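The sequence bookkeeping described above can be sketched in a few dozen lines. This is a minimal illustration, not the authors' implementation: the class and function names are ours, and details such as tie-breaking between extendable sequences and the on-disk-range optimization are omitted.

```python
# Sketch of SEQ's sequence tracking (names and simplifications are ours).
# Sequences are extended or split on each fault; a victim page is taken
# M or more pages behind the head of the most recently faulting sequence.

L, M, N = 20, 20, 5  # min sequence length, head guard, fault-recency rank

class Sequence:
    def __init__(self, page, now):
        self.low = self.high = page
        self.dir = None          # 'up', 'down', or None (not yet known)
        self.faults = [now]      # timestamps of the last N faults

    def length(self):
        return self.high - self.low + 1

def on_fault(sequences, pf, now):
    """Update the sequence list for a fault on page pf at time now."""
    for seq in list(sequences):
        if pf == seq.high + 1 and seq.dir in (None, 'up'):
            seq.high, seq.dir = pf, 'up'       # extend upwards
        elif pf == seq.low - 1 and seq.dir in (None, 'down'):
            seq.low, seq.dir = pf, 'down'      # extend downwards
        elif seq.low <= pf <= seq.high:
            # fault in the middle: truncate the old sequence and start
            # a fresh one-page sequence at the faulted page
            if seq.dir == 'up':
                seq.high = pf - 1
            else:
                seq.low = pf + 1
            sequences.append(Sequence(pf, now))
            return
        else:
            continue
        seq.faults = (seq.faults + [now])[-N:]
        return
    sequences.append(Sequence(pf, now))        # no adjacent sequence

def pick_victim(sequences, in_memory):
    """Choose a page to evict, or None to fall back to LRU."""
    eligible = [s for s in sequences
                if s.length() > L and len(s.faults) >= N]
    # prefer the sequence whose Nth-most-recent fault is newest
    for seq in sorted(eligible, key=lambda s: s.faults[-N], reverse=True):
        if seq.dir == 'up':   # head is seq.high; search away from it
            pages = range(seq.high - M, seq.low - 1, -1)
        else:                 # head is seq.low
            pages = range(seq.low + M, seq.high + 1)
        for page in pages:
            if page in in_memory:
                return page
    return None
```

For a single upward scan over pages 0..99, this sketch builds one sequence <0, 99, up> and, with everything resident, evicts page 79 (M = 20 pages behind the head), matching the "replace behind the head" behavior described above.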
Figures 5 and 6 show, respectively, the memory locations of page faults taken by es and the pages chosen by SEQ as replacements. The graphs are for runs simulating 50MB of memory, about half of es's total demand. Recall from Section 2 that es's behavior consists essentially of an iteration over a single large memory region. Observe how the SEQ replacements closely mirror the faults taken by es.

As a more complex example, Figures 7 and 8 show the faults and replacements for SEQ operating on applu (10MB simulated memory size). From applu's space-time graph we observe that it iterates over four large areas, first accessing addresses in increasing order and then accessing them in decreasing order. The pattern of sequences that are created can be observed in Figure 8. The dashed lines in the figures reflect the fact that applu does not iterate consistently through its address space. It often touches approximately 30-50 consecutive pages and then skips a few pages, leading to a fairly large number of medium-size sequences to feed SEQ. When no sequences of suitable length (longer than L = 20 here) are found, LRU replacement is done; LRU accounted for about 70% of page-outs for this run of applu. The combined result of LRU and SEQ replacement on applu is that the upper and lower ends of the four large memory regions remain in memory and the middle regions are the source of replacement pages.

4.2 Simulation Results

Since our traces contain only IN and OUT records, we cannot simulate SEQ accurately under all circumstances. Instead, we conduct a slightly conservative simulation. That is, if a chosen-for-replacement page is IDLE (i.e., it is not accessed until its next IN record), the page is simply replaced; if the page is ACTIVE (i.e., it is between an IN record and an OUT record, which means it is accessed actively during this interval), we replace the page and then immediately simulate a fault on the page to bring it back into memory.
This results in a simulation that slightly underestimates the actual performance of SEQ, because in reality the page fault would occur sometime later in the current or the next interval.

Simulation results are shown in Figures 1 and 2. Clearly, SEQ performs significantly better than LRU, and quite close to optimal, for the applications with clear access patterns (for example, gnuplot and turb3d). For other applications, SEQ's performance is quite similar to LRU's. For only one program (perl) does SEQ perform worse than LRU. (We are investigating why SEQ performs poorly for perl.)

We have varied the three SEQ parameters (L, M, and N) and observed the resulting performance changes. Intuitively, the larger the value of L, the more conservative the algorithm will be, because it is less likely that a run of faults will be long enough to be considered a sequence. Reducing L has the opposite effect. Similarly, the parameter M guards against the case in which pages in a sequence are re-accessed within a short time period. If the pages in the sequence are accessed only once, then M should be set to 1; however, if there is reuse of pages near the head of the sequence, then M should be larger to avoid replacing in-use pages. We experimented with three different settings of L and M: (L = 20, M = 20), which are the defaults, (L = 50, M = 20), and (L = 50, M = 50), and found that SEQ's performance is unaffected for most of the applications. The three applications that show visible differences are applu, perl, and swim. Figures 9 and 10 show their fault curves under the three parameter settings. (These figures show all programs for which any noticeable change occurred in SEQ performance when any parameters changed. SEQ performance was unchanged for coral, gcc, and murphi, where it matched LRU performance, and for es and gnuplot, which both retained essentially the same near-OPT performance for all parameter combinations.)
For applu, since it has many short sequences that are disqualified for replacement when L = 50, SEQ at L = 50 essentially performs LRU replacement most of the time. Swim also has many small to medium length sequences, and SEQ at M = 50 appears to interact poorly with swim's behavior at small memory sizes. For the rest of the applications, SEQ's performance is essentially unaffected by the parameter changes.

The parameter N affects the choice of sequences in situations where sequences grow at varying rates: as N increases, so does the likelihood that SEQ will choose the sequence that grows fastest. We did not choose N = 1 because we want to avoid sequences that grow at sporadic rates. Since the space consumed by SEQ is directly proportional to N (it must store the times at which the last N faults occurred), a small N is desirable. We varied N from 5 to 20 and found only negligible differences in SEQ's performance. Thus, we set N = 5.

To reduce SEQ's modest runtime space requirements further, we experimented with setting X = 50, where X is the limit on the number of tracked sequences. Compared to the default X = 200, performance was unchanged for all programs but applu and fgm (where performance changed only slightly). Graphs for (L = 20, M = 20, X = 50) also appear in Figure 9. Applu's performance degraded slightly in that SEQ did not drop below LRU until a larger memory size was used. The change in SEQ's fgm performance was very minor: just a slight rise at the very high end of the memory size range. We conclude tentatively from this that the number of sequences maintained per program by a real implementation can likely be lowered well under 200 if SEQ's runtime resource consumption becomes a problem.

Figure 5: Time and location of page faults taken by es (50MB main memory) under SEQ (using default parameters). Initial faults (almost all being zero-fill) are not shown.

Figure 6: Time and location of page replacement decisions made by SEQ for es. (LRU decisions are not shown; for es, in fact, every time a page-out was necessary, SEQ found a page in a sequence, so no pages were replaced by the fallback LRU policy.) 50MB main memory was simulated under default SEQ parameters.

Figure 7: Time and location of page faults taken by applu (10MB main memory) under SEQ (using default parameters).

Figure 8: Time and location of page replacement decisions made by SEQ for applu. 10MB main memory was simulated under default SEQ parameters. For this memory size, SEQ made about 15,000 page-outs from sequence pages; default LRU replacement (when in-sequence pages could not be found) accounted for about 35,000 page-outs. Note the dashed curves, which are due to the fact that applu does not access its memory completely sequentially.

Figure 9: Performance of SEQ under varying parameters. For applu, the curve for SEQ:L=50:M=50 completely overlaps the LRU curve, and SEQ:L=50:M=20 overlaps LRU most of the time. For fgm, SEQ:L=50:M=50 performs worse than other SEQ settings. For swim, SEQ:L=50:M=50 performs more poorly than other SEQ settings. For other programs, differences in SEQ performance are negligible.

To summarize, we found that the performance of the SEQ algorithm is fairly insensitive to the parameter values, and our current settings appear appropriate, though we plan continued testing in this regard. In our current implementation, SEQ takes roughly 10K bytes to keep track of 200 sequences (each taking roughly 48 bytes). Depending on the application, SEQ also takes slightly more CPU time than LRU for each replacement. We are still working on reducing SEQ's overhead.

5 SEQ as a Global Replacement Algorithm

So far our discussion has focused on the performance of various replacement policies for single applications. In real systems, multiple processes run at the same time and compete for memory.
There are two general approaches to page replacement in multi-process environments [12]. One approach involves a memory allocation policy that allocates memory to different processes, and a page replacement policy that chooses replacements among each process’ pages when processes exceed their memory allotments. Another approach uses a “global” replacement algorithm, where a replacement page is chosen regardless of which process it belongs to. For example, global LRU replaces the page whose last reference was earliest among all memory pages. Currently, most time-sharing operating systems use some approximation of global LRU replacement. SEQ can be extended fairly easily to function as a global replacement algorithm. The only modification necessary is that the sequences must be grouped explicitly on a per-process basis, i.e. only page faults with the same process ID are associated for sequence detection. An obvious question is whether global SEQ would perform well in a time-sharing multi-process environment. To provide a preliminary answer to this question, we constructed a very simplified simulator of a multi-process system that captures the dynamic interleaving of process execution. We use a simple round-robin time slicing policy (simulating execution of each program for a certain length of time) and a time delay to model the service time for a page fault to disk. We then compared the performance of global LRU and global SEQ under concurrent executions of the applications. Our simulator reads multiple application traces, taking a record each time from the trace corresponding to the program that is currently executing. We schedule processes according to round-robin time-sliced scheduling with context switch at page faults. That is, each trace (process) is run for a quantum, and when the quantum expires, the scheduler puts the trace on the wait queue and picks a different trace (process). 
When a page fault happens, the current process is suspended for the duration of the service time of the page fault, and the scheduler picks another process to run. The two parameters, quantum time and page fault service time, are determined by a simple estimate of CPU speed: in our experiments the quantum is 1 million instructions (corresponding to 10ms on a machine capable of executing our programs at a uniform rate of 100 MIPS), and the page fault service time is a uniform 400,000 instructions (4ms on the same 100 MIPS machine). This is obviously a simplistic model, but it suffices for the purpose of creating a reasonable interleaving of multiple program traces. We picked four combinations of two applications each, and one combination of three applications. The combinations are chosen to cover a variety of mixes of application behavior and relative memory needs. They are: es+fgm, gcc+vortex, swim+trygts1, vortex+gnuplot, and coral+wave5+trygts1. For each combination, we measure the fault rate for the concurrent execution of the applications, under both global LRU and global SEQ, for a range of memory sizes. Again, since most of the initial faults are zero-filled pages rather than disk-read pages, we do not include them in the figures. The results are shown in Figure 11. They show that in a simple multi-process environment, global SEQ tends to outperform global LRU when sequential applications are run, and it performs similarly to global LRU when no sequential application is run. For example, global SEQ's improvements over LRU in the cases of vortex+gnuplot and coral+wave5+trygts1 are similar to those for gnuplot and wave5 alone, and global SEQ performs similarly to global LRU for gcc+vortex. Thus, our preliminary simulation results show that SEQ is also a promising algorithm for global replacement.
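The interleaving model just described (a 1-million-instruction quantum, 400,000 instructions of fault service) can be sketched as a tiny scheduler step. This is a toy reconstruction of the paper's simulator; the structure and function names are assumptions:

```c
#include <assert.h>

#define QUANTUM    1000000L  /* instructions per time slice (10ms at 100 MIPS) */
#define FAULT_COST  400000L  /* instructions of page-fault service time (4ms)  */

struct proc {
    long done;           /* instructions executed so far */
    long blocked_until;  /* global time before which this process is blocked */
};

/* Run process `cur` for one quantum starting at global time `now`; if the
 * slice ends in a page fault (`faults` nonzero), block the process for the
 * fault service time.  Then rotate round-robin to the next runnable
 * process and report the advanced clock through `next_now`. */
int run_slice(struct proc *p, int n, int cur, long now, int faults,
              long *next_now) {
    p[cur].done += QUANTUM;
    if (faults)
        p[cur].blocked_until = now + QUANTUM + FAULT_COST;
    *next_now = now + QUANTUM;
    for (int i = 1; i <= n; i++) {          /* round-robin scan */
        int cand = (cur + i) % n;
        if (p[cand].blocked_until <= *next_now)
            return cand;
    }
    return cur; /* all processes blocked: idle handling elided */
}
```

Driving this step function once per slice over the recorded traces yields an interleaving in which a faulting process yields the CPU for the fault's service time, as in the experiments above.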
6 Related Work

Operating systems researchers have investigated the memory management problem for over thirty years, originally to determine whether automatic management of memory (i.e., virtual memory) could perform as well as programmer-controlled physical memory allocation. Belady's paper in 1966 [2] introduced the optimal offline replacement algorithm (the OPT algorithm). A good survey of early research results on paging policies can be found in [12]. There have also been many studies on program behavior modeling and optimal online algorithms for each model. The models include independent reference [1], LRU stack [25], working set [9], access graphs [4], and the Markov model [16]. For each of these models, optimal online algorithms have been found [12, 14, 16]. The SEQ algorithm is similar to the access-graph algorithms [4] in that it tries to take advantage of patterns found in reference streams. However, most theoretical studies on access-graph algorithms assume that the graph is known ahead of time, rather than being constructed at run-time. A recent study [11] investigated constructing the graph at run-time; however, that study only looked at references to program text, not data. Also, the algorithm proposed in [11] is more complex and more expensive than SEQ. Although most early experimental studies focused on efficient approximation of LRU page replacement [3, 2, 7, 19], one scheme, the Atlas Loop Detector, investigated loop detection and MRU replacement on scientific programs [17]. SEQ differs from the loop detector in that it tries hard to work well on applications where LRU is appropriate; the Atlas scheme apparently performed poorly for non-scientific programs [9]. Recent research projects on application-controlled kernels show the potential of application-specific replacement policies [27, 13, 20, 18]. These studies focus on mechanisms by which applications inform the kernel about which pages would be good candidates for replacement.
Our SEQ algorithm is basically the antithesis of such schemes, and it will be interesting to see over time which philosophy prevails. Our study shows that run-time automatic sequence detection by the kernel may be a promising way to increase performance, at essentially no cost to the programmer. Recently there have been a number of studies of applications' memory reference behavior in the context of cache management. One study regarding processor pin bandwidth requirements [5] confirmed that there is a significant difference in cache miss ratios under LRU and under OPT replacement policies. Another study [22] included space-time graphs for some SPEC95 benchmarks. Though their graphs are for a much shorter duration of execution (on the order of one second), they are similar to our graphs for the SPEC95 benchmarks. Finally, one study of large-scale multiprocessor architectures investigated the “working-set” and cache size issues for parallel scientific applications [24]. That study measured the “working-sets” of a number of parallel applications by simulating the number of cache misses versus cache size under LRU replacement. The cache-misses-versus-cache-size curves in [24] are quite similar to our LRU page fault curves for scientific applications. These studies suggest that reference behavior at the page level might be similar to that at the cache-line level; we plan to investigate this correlation. Sequence detection can be used for prefetching purposes as well. Indeed, there are sequence detectors for prefetching in hardware cache management [26, 15, 21]. However, prefetching does not reduce bandwidth consumption; it merely reduces latency by overlapping I/O with computation. Good replacement policies, on the other hand, reduce both bandwidth consumption and latency.
In this paper we focused on replacement algorithms only; how to balance prefetching and cache management (page replacement) is a complicated issue that needs further study [8].

7 Conclusions and Future Work

Our study of application reference behavior and space-time graphs shows that applications' memory reference behavior varies significantly. There are at least three categories: no visible access pattern, minor observable patterns, and regular patterns. We found that LRU performs similarly to OPT, though incurring roughly twice as many page faults, for the memory-intensive and pattern-less applications. However, LRU performs poorly for regular-pattern applications. We proposed a new replacement algorithm, SEQ. SEQ detects linear access patterns (sequential behavior) and performs semi-MRU replacement on sequences associated with such patterns. SEQ performs similarly to LRU for memory-intensive applications, and corrects the LRU flooding problem for many regular-pattern applications. Indeed, SEQ's performance approaches that of OPT for a number of regular-pattern applications. We also found that for multi-process systems, SEQ appears to be a good algorithm for global replacement. Comparison of global LRU and global SEQ shows that global SEQ can effectively improve multi-application performance just as it improves single-application performance. There are a number of limitations in our work. We need to experiment with SEQ on a wider variety of applications. A kernel implementation of SEQ is underway to test its performance in real systems. Finally, we plan to incorporate prefetching in SEQ.

Acknowledgements

We would like to thank our referees for their detailed comments and for pointing out a simulation error (which we have fixed) in an earlier version of the paper. Mark Hill, Mary Vernon and Doug Burger provided helpful feedback on an early draft of this paper. The research is partially supported by a generous grant from Intel Corporation.
References

Figure 11: Performance of Global LRU and Global SEQ for concurrent execution of applications.
Towards an Industrial Use of Sound Static Analysis for the Verification of Concurrent Embedded Avionics Software

Antoine Miné
École Normale Supérieure
45, rue d'Ulm
F-75230 Paris Cedex 05, France
mine@di.ens.fr

David Delmas
Airbus Operations S.A.S.
316, route de Bayonne
31060 Toulouse Cedex 9, France
david.delmas@airbus.com

ABSTRACT

Formal methods, and in particular sound static analyses, have been recognized by Certification Authorities as reliable methods to certify embedded avionics software. For sequential C software, industrial static analyzers, such as Astrée, already exist and are deployed. This is not the case for concurrent C software. This article discusses the requirements for sound static analysis of concurrent embedded software at Airbus and presents AstréeA, an extension of Astrée with the potential to address these requirements: it is scalable and soundly reports all run-time errors with few false positives. We illustrate this potential on a variety of case studies targeting different avionics software components, including large ARINC 653 and POSIX threads applications, and a small part of an operating system. While the experiments on some case studies were conducted in an academic setting, others were conducted in an industrial setting by engineers, hinting at the maturity of our approach.
Categories and Subject Descriptors

C.3 [Special-Purpose and Application-Based Systems]: Real-Time and Embedded Systems; D.1.3 [Programming Techniques]: Concurrent Programming; D.2.4 [Software Engineering]: Software/Program Verification—Formal methods, Validation, Assertion checkers; F.3.1 [Logics and Meanings of Programs]: Specifying and Verifying and Reasoning About Programs—Assertions, Invariants, Mechanical verification; F.3.2 [Logics and Meanings of Programs]: Semantics of Programming Languages—Program analysis

Keywords

Static analysis, abstract interpretation, embedded software, concurrent software

General Terms

Experimentation, Reliability, Verification

*This work is supported by the INRIA project "Abstraction" common to CNRS and ENS in France, and by the project ANR-11-INSE-014 from the French ANR.

1. INTRODUCTION

The safety of embedded critical software, such as that found in the avionics, automotive, space, medical, and power industries, is crucial, as the slightest software error can have dramatic consequences. The verification and validation process for such software is well specified in domain-specific international standards (e.g., [1] for avionics systems). While testing remains a key method, its shortcomings are well known, and there is a strong movement towards formal methods to complement or replace it. Such methods provide strong, mathematical guarantees about system behaviors. In particular, semantic-based static analysis can discover at compile-time properties of the dynamic behaviors of programs by analysis of the source code; it is automated, sound (full control and data coverage), and can be made precise and efficient by employing powerful abstractions (as advocated by abstract interpretation [8]), making it an attractive method in an industrial context, where the cost of deploying new methods must be taken into account. Nowadays, commercial static analysis tools are deployed in embedded industries.
One example is the Astrée static analyzer [4], which detects all the run-time errors in embedded C code. Astrée is, however, limited to sequential code and is not sound for concurrent code, which constitutes an increasing share of critical embedded software. Concurrent software is also a prime target for formal verification because testing methods scale poorly with the combinatorial explosion of concurrent executions. This article discusses AstréeA, a recent extension of Astrée to analyze concurrent code soundly, efficiently, and precisely, and its application to verifying avionics code from Airbus. Sec. 2 discusses the place of formal methods in avionics certification and its implementation at Airbus, Sec. 3 presents the challenges of certifying concurrent avionics software, Sec. 4 presents the technology behind AstréeA, Sec. 5 shows on some case studies how AstréeA can address these challenges, Sec. 6 discusses related work, and Sec. 7 concludes. The foundations and use of Astrée were covered, from both academic and industrial perspectives, in a number of publications [5, 4]. The theoretical foundations underlying AstréeA were covered in [17, 18]. This article discusses the effective use of AstréeA. It presents novel case studies, introducing for the first time studies performed by industrial end-users, and builds a case for the widespread adoption of sound static analysis to verify concurrent embedded software. It brings an industrial perspective to AstréeA.

2. SEMANTIC VERIFICATION AT AIRBUS

2.1 Industrial context

Avionics software running on on-board computers is a critical component of the systems of civil aircraft. It is thus subject to certification by Certification Authorities, and developed according to stringent rules imposed by the applicable DO-178/ED-12 international standards. Among the many processes described in DO-178, verification processes are responsible for more than half of the overall costs of avionics software development.
Considering the steady increase in size and complexity of this kind of software, classical V&V processes, based on massive testing campaigns and complementary intellectual reviews and analyses, no longer scale up within reasonable costs. Some formal verification techniques have been shown to scale up to real-size industrial software. For a decade, Airbus has therefore been introducing such techniques into its own verification processes [20], in order to replace or complement legacy methods. Significant effort is currently being invested into updating Airbus avionics software development and verification processes to take maximum advantage of formal methods, i.e., improve industrial efficiency while maintaining the safety and availability of avionics systems.

2.2 The requirement for soundness

Revision B of DO-178 states that software verification process objectives are met through a combination of reviews, analyses, and testing. It mentions formal methods as an alternative method, but does not provide any guidance, due to inadequate maturity at the time the document was written and limited applicability to airborne software. Therefore, DO-178B compliant avionics software verification processes cannot rely on formal techniques. This issue was addressed in revision C [1] of DO-178, applicable to new software developments as of 2014. It introduces a technical supplement, DO-333, providing guidance on the use of formal techniques to meet DO-178C objectives. The supplement introduces standard categories of formal analysis techniques: deductive methods, model-checking, and abstract interpretation. Abstract interpretation, in particular, is presented as a method to construct semantic-based analysis algorithms for the automatic, static, and sound determination of dynamic properties of infinite-state programs.
It emphasizes soundness as the key criterion for an analysis to be considered compliant: the applicant is required to provide justifications that the method never asserts that a property is true when it may not be true. DO-333 acknowledges that objectives of reviews and analyses of source code can be achieved using formal methods, provided a formal semantics is well defined at the source code level. Such objectives include compliance with the software architecture (correct data and control flows), and accuracy and consistency (stack and memory usage, floating-point arithmetic, resource contention and limitations, worst-case execution time, exception handling, use of uninitialized variables, and data corruption due to task or interrupt conflicts). It also acknowledges that some verification objectives traditionally addressed by testing executable object code can be achieved using formal techniques. Such objectives include robustness to complex incoming data: freedom from arithmetic overflows, data corruption, inadequate numerical resolution, incorrect responses to missing or corrupted input data, incorrect handling of exceptions, arithmetic faults, and violations of array limits. Such analyses should be performed either on executable object code, or at the source code level, provided that property preservation can be demonstrated between source and executable code. We refer the reader to [6] for more information on DO-178C.

2.3 Static analysis at Airbus

Several formal techniques are currently being used operationally as part of the verification processes of avionics software. In order to verify functional properties, program proof techniques have been successfully introduced to replace unit testing on some software subsets. They are used for certification credit [12] on small sequential C codes. In contrast, the verification of non-functional properties requires automatic analyses that scale up to very large programs.
As a consequence, static analysis based on abstract interpretation [8] is currently used industrially for certification credit on many avionics software products developed at Airbus, to compute safe upper bounds on stack consumption or worst-case execution time [19], or to verify data and control flows [10]. Among the major non-functional properties of interest is freedom from run-time errors, i.e., integer or floating-point overflow, division by zero, array access out of bounds, invalid pointer dereference, etc. Many commercial bug finders allow for the automatic detection of possible run-time errors. While useful in a lightweight debugging approach to help detect systematic errors, such tools cannot be used for verification purposes. Indeed, they implement unsound analysis methods, based on (implicit and incorrect) simplifying assumptions, and thus do not attempt to provide any assurance that code free from warnings will actually run without errors. In contrast, semantic-based static analyzers allow for automatic proofs of absence of run-time errors in sequential and synchronous software written in the C language. For instance, Astrée is being used industrially at Airbus on safety-critical synchronous control/command programs [11] prior to certification. These programs are large (up to 650,000 lines of C), and perform intensive floating-point computations. They are certified with the highest DO-178 Development Assurance Level (DAL A). Airbus plans to claim certification credit from the use of Astrée in the near future, in order to alleviate intellectual reviews and analyses of source code. To this aim, Astrée will have to undergo a dedicated qualification process, as defined by the DO-178 standard. In addition, extensive experiments [3] are being conducted with the CompCert formally verified compiler to allow optimizing compilation of DAL A software, while guaranteeing semantic preservation between source and executable code.
This guarantee will enlarge the scope of sound source code analyzers to also achieve verification objectives that traditionally require a machine-code-level analysis. In particular, it will enable Airbus to use Astrée to alleviate robustness testing, but also to remove part of the local robustness code (proved to be unnecessary), with obvious benefits for worst-case execution time and structural coverage analysis.

3. CONCURRENT AVIONICS SOFTWARE

As of today, the scope of sound formal verification has been mostly limited to sequential and synchronous software. This encompasses the most safety-critical systems of Airbus aircraft, e.g., flight control systems. Such systems are designed for dedicated specialized computers running avionics software on simple, deterministic architectures, in order to ease verification. For instance, Airbus fly-by-wire control-command software consists of bare-metal synchronous programs. In contrast, for less safety-critical aircraft functions, the required level of automation, HMI comfort, and system interoperability and configurability tends to increase from one aircraft generation to the next, resulting in an exponential increase in the complexity of the underlying embedded computers, networks, and software. Moreover, considering the steady pressure on cost and weight reduction, such avionics functions tend to be integrated into generic computers, as shown by the current trend towards Integrated Modular Avionics [23]. As a consequence, the development of such complex functions leads to more sophisticated designs, involving synchronizations between concurrent tasks, shared resource management, etc. They are implemented in asynchronous software, composed of a set of concurrent threads interacting in a shared memory. Traditional verification techniques, based on reviews, analyses, and tests, are especially inefficient on asynchronous programs, because of the huge number of thread interleavings to be considered.
Nonetheless, despite the limited impact of potential failures of the related systems on aircraft safety, software correctness is of paramount importance for efficient aircraft operation and maintenance, and thus for aircraft availability and profitability. Formal techniques, e.g., static analysis, would thus be especially useful for scalable verification of asynchronous software. AstréeA is the first example of such a sound static analyzer. It is an extension of Astrée aiming at proving the absence of run-time errors in asynchronous multithreaded C software. The current underlying model matches that of asynchronous applications developed at Airbus. Such applications run on a single mono-core processor, on top of commodity real-time operating systems implementing a preemptive, priority-based, real-time scheduling policy, e.g., the ARINC 653 [2] standard or POSIX real-time threads [14]. Application software is analyzed for a given specification of the operating system, and the analysis is thus sound for all operating systems meeting this specification. In practice, such specifications are formalized by means of a library of stubs for the API functions of the operating systems, which are written in C with dedicated AstréeA primitives. Some characteristics of these applications may ease static analysis: for instance, all threads are created in an initialization phase (no dynamic thread creation), and dynamic memory allocation and recursion are forbidden by coding standards. However, some other characteristics are challenging. These are large data-intensive programs, from a few hundred thousand to a few million lines of C source code, composed of many nested loops processing string buffers as well as rich data structures based on pointers, e.g., statically allocated linked lists. Threads communicate through shared memory and standard synchronization primitives offered by the operating system (such as POSIX mutexes).
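As an illustration of such a stub, the following sketch models an ARINC-653-style semaphore wait purely by its return contract: the stub constrains only what the analysis may assume about the call, not how the kernel implements it. All names and status codes here are hypothetical; real AstréeA stubs use the analyzer's own nondeterminism directives, which we approximate with rand() to keep the sketch runnable:

```c
#include <assert.h>
#include <stdlib.h>

/* Stand-in for an analyzer nondeterministic-choice primitive: any value
 * in [lo, hi] may be returned.  In an actual stub library this would be
 * an AstréeA directive, not a rand()-based function. */
static int nondet_range(int lo, int hi) {
    return lo + rand() % (hi - lo + 1);
}

#define STATUS_OK      0
#define STATUS_TIMEOUT 1

/* Hypothetical stub for a WAIT_SEMAPHORE-like service: the OS contract
 * modeled here is only that the call returns either OK or TIMEOUT, for
 * any semaphore and any timeout.  The arguments are left unconstrained. */
int wait_semaphore_stub(int sem_id, long timeout) {
    (void)sem_id;
    (void)timeout;
    return nondet_range(STATUS_OK, STATUS_TIMEOUT);
}
```

Analyzing the application against such a stub makes the result sound for every operating system whose real semaphore service satisfies this contract.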
Due to stringent (though usually not hard real-time) timing constraints, such software may rely on real-time scheduling to implement (implicit) critical sections, so as to save on synchronization primitives with significant worst-case execution time. A more recent research direction (Sec. 5.3) is the verification of actual operating systems (or fragments thereof).

4. THE ASTRÉEA STATIC ANALYZER

4.1 Abstract Interpretation

In the general sense, the safety verification problem consists in computing the set of program states reachable during all possible executions and proving that it does not include any unsafe state. Here, a program state denotes the current contents of the memory (a map from variables to values) and the program position (PC, call stack). Program executions and unsafe states are defined based on the language standards, which must be translated from an informal English description into an unambiguous mathematical one. Industrial programs typically feature a large state space, which cannot be enumerated in practice. The core idea of abstract interpretation [8] is that it is often sufficient to reason at a more abstract, simpler level. Instead of considering sets of memory states (so-called concrete elements), we can, for instance, consider an interval abstraction that only remembers the upper and lower bounds of each variable, and forgets the exact values reachable within these bounds. An abstract element then represents the set of memory states that satisfy these bounds, i.e., a property on states, but with a more compact representation (two numbers per variable instead of an unbounded set of memory maps). We then compute the reachable states entirely in this abstract domain. Naturally, some concrete properties cannot be represented in the abstract and must be approximated. The soundness principle formally states that any property computed in the abstract must also be true of all the concrete executions.
For safety verification, this means that the computed abstract states must over-approximate the concrete ones. The over-approximation may, in some cases, add spurious unsafe states that do not exist in the concrete semantics, i.e., we get a false alarm, which we want to avoid. In practice, the reachable states of a program are defined by composing, in a generic way, atomic semantic operations from a small alphabet corresponding to language constructs (assignments, tests, etc.). It is thus sufficient to provide a sound abstract version of each atomic operation and combine them to compute the abstract semantics of any program. Abstract operators often induce a loss of precision that accumulates over the abstract computation, so that we may not be able to infer the most precise property expressible in the abstract domain (e.g., the interval domain may not find the tightest bounds). To reduce the false alarm rate, it is necessary to resort to more powerful (but more costly) abstractions. Many general-purpose abstractions are already available (for instance, we may replace intervals with polyhedra or infer linear relationships), and novel ones can be designed to specifically remove classes of false alarms.

4.2 Design Principles for Astrée and AstréeA

The AstréeA analyzer is based on the same design principles as the Astrée analyzer which it extends: both are specialized modular analyzers performing a syntax-directed interpretation of programs in a collection of abstract domains. We briefly recall these common principles here (an in-depth presentation can be found in [4]), while Sec. 4.3 is devoted to the extension to concurrency implemented in AstréeA.

¹ More precisely, ambiguity in the standard can be rigorously formalized using a non-deterministic semantics, which defines the set of all possible executions. This allows verifying, in one analysis, the correctness with respect to several interpretations of the standard.
Non-determinism is also useful to model interactions with an unknown environment.

4.2.1 Concrete semantics

Astrée analyzes a large subset of C, including integers, floats, pointers, structured data types, loops, and gotos. It notably excludes dynamic memory allocation, recursion, and long jumps. The concrete semantics is based on the C standard and the floating-point arithmetic standard. The analyzer reports all run-time errors, including arithmetic overflows, invalid operations, invalid dereferences, and assertion failures. The C semantics is notable for being under-defined: it leaves a lot of room for implementation choices and maps errors to undefined behaviors, which have a truly random (possibly catastrophic) outcome. To fit programmers' expectations more closely, Astrée allows specializing the analysis by describing the actual implementation, among reasonable choices, for unspecified or undefined behaviors. For instance, by default Astrée reports signed integer overflows and continues the analysis assuming the wrap-around result. The semantics of floats includes a sound model of rounding errors, as well as infinities and not-a-number values. Astrée's pointer and memory semantics is also very lax and low-level, allowing unrestricted pointer arithmetic, casts, union types, and even type-punning.

4.2.2 Syntax-directed interpretation

Astrée functions literally as an interpreter, except that instead of propagating a single environment, it propagates an abstraction of a set of environments. Astrée's iterator traverses the syntax tree of the program, starting from the entry of the main function.
Complex control structures are handled by induction on their sub-components and, ultimately, the iterator calls the abstract domain only on atomic statements, such as assignments and tests:

- For tests if \((c)\) \(s_1\) else \(s_2\), both branches \(s_1\) and \(s_2\) are interpreted, after filtering the current abstract environment by, respectively, the condition \(c\) and its negation \(\neg c\); the outputs of the branches are then merged, using an abstract version of the union of state sets (e.g., the interval hull in the interval domain).

- Loops present the main difficulty for automated verification: they generate a large, possibly infinite number of executions of unbounded length that cannot all be explored explicitly, and must thus be approximated. In abstract interpretation, a loop while \((c)\) \(s\) is handled by iteration: starting from the abstract environment before the loop, we accumulate the iterated effect of the loop body \(s\) filtered by the loop condition \(c\) until reaching a fixpoint. A special operator, the widening \(\triangledown\), is used to accelerate the iteration and reach a fixpoint in a finite, generally small, number of steps. For instance, the standard interval widening enlarges unstable bounds to the type's extreme values in one step, beyond which they cannot grow further. Upon stabilization, the abstract environment represents an inductive loop invariant. The analysis continues after the loop using the invariant filtered by the exit condition \(\neg c\).

- Functions are analyzed by interpreting the body of the function at every call site. The result is a fully flow-sensitive and context-sensitive analysis, which is hence very precise. Such precision can come at a cost, as full sensitivity does not scale up for programs with complex control flow or recursive functions; however, for embedded software, which employs neither, it achieves a sweet spot between precision and scalability.
4.2.3 Collections of domains

Astrée does not use a single monolithic abstract representation of memory states, but rather a collection of interacting abstract domains, which provides improved scalability and flexibility to the analysis design. Astrée uses a stack of domains, corresponding to different levels of complexity in C expressions. Each domain handles specific C constructions, abstracts a specific aspect of the memory, and uses this information to dynamically translate expressions into simpler constructions for lower domains to handle. A memory domain decomposes each variable into a collection of scalar cells (integers, pointers, floats), and resolves dereferences as well as structure, array, and union accesses. A pointer domain maintains the base variables targeted by each pointer cell, and translates pointer arithmetic into integer arithmetic on the offset. The numeric domains are left with abstracting numeric cells and need only handle expressions containing integer and float values. The numeric abstraction is actually composed of a large set of domains, as software performs a huge variety of computations that lead to different flavors of numeric properties, which no single domain can maintain efficiently or conveniently. Information about the abstract state is distributed among these domains, which collaborate to achieve the analysis of every single C statement. The domains communicate by exchanging information expressed in a set of common properties (such as variable bounds), on a per-need basis to ensure efficiency.

4.2.4 Specialization

There is no universal abstraction able to analyze all programs adequately. Astrée is based on a specialization principle: it contains abstractions specifically adapted to analyze very well a given kind of program, embedded avionics control-command software, while it remains sound, but possibly imprecise or not scalable, on other software.
It was designed by starting from a generic scalable interval analyzer and analyzing selected code in the family of interest. Initial analyses suffered from many false alarms. They were removed by a manual process consisting in inspecting the origin of each false alarm, determining which property was missed by the analyzer, and either designing a new abstract domain, if the property was not expressible in Astrée, or strengthening the abstract operators and domain communication, if it was. This resulted in an analyzer with no alarm on the code of interest, which remains efficient (by parsimony, more complex and costly abstractions are added only when necessary). Abstract domains are not specific to a single program but general enough to handle a programming pattern. We observed experimentally [5] that a specialized analyzer was able to handle a family of similar programs, requiring only slight adjustments of analysis parameters (such as widening aggressiveness), which can be performed by industrial end-users [11]. A complete report of this experiment can be found in [5]. We give here only two examples of additional abstractions. Firstly, the precise analysis of loops required the addition of relational domains. As general polyhedra are not scalable, we used instead the octagon domain [16], which is expressive enough. A reasonable trade-off between precision and scalability was further achieved by applying the domain parsimoniously, to selected variable packs. Secondly, control-command software features digital filter computations, which require quadratic invariant relations to be analyzed precisely. A specific ellipsoid domain [13] was thus designed for this task. The octagon domain is of general use and was employed in AstréeA, while ellipsoids are specific to control-command software and were not reused in AstréeA.

4.3 Extension to concurrent programs

AstréeA extends Astrée to handle concurrent embedded C programs.
It reuses its iterator and abstractions, and adds new ones, as discussed here.

4.3.1 Concrete semantics

AstréeA supports a generic concurrency model. Program executions have two phases: an initialization phase, able to execute sequential code and create threads, and a second phase, where threads execute concurrently but thread creation is no longer allowed. This matches precisely the ARINC 653 semantics [2] as well as current practice in avionics software. AstréeA analyzes both phases for run-time errors; in addition, it collects the set of threads created during the sequential phase and checks that no thread is created during the concurrent phase. The set of threads is thus not fixed beforehand but discovered by the analyzer. In the second phase, the semantics of the program is an interleaving of atomic execution steps (such as assignments) from each thread. AstréeA assumes a mono-core real-time execution model: only the unblocked thread of highest priority can run. This matches current avionics practice and permits a more precise analysis than considering true parallel execution (such an extension will be considered in the future). The execution model is fully preemptive: a high-priority thread can enter a wait state, e.g. waiting for a resource to become available, let a lower-priority thread run for a while, and preempt it at any point of its execution when the resource becomes available. As a consequence, a large number of interleavings of concrete executions are possible. Threads execute in a shared memory: it is the analysis' responsibility to detect which global variables are actually shared and their possible values. AstréeA supports a small set of flexible but low-level primitives on top of which C stub code simulating high-level concurrency libraries can be written; for instance, it supports non-nesting mutual exclusion locks, on top of which more complex locks are constructed (Sec. 5). AstréeA reports data races for accesses not protected by locks.
4.3.2 Thread-modular interpretation

Enumerating all possible thread interleavings is not feasible. AstréeA thus employs a thread-modular analysis approach, inspired by rely-guarantee reasoning [15]. It is sound with respect to all interleavings, scalable, and allows reusing the abstractions developed for Astrée. The analysis is composed of a sequence of iterations. In the first one, each thread is analyzed as an independent sequential program, ignoring the effect of the other threads (which is unsound), but collecting the effect it has on the global variables. Starting from the second iteration, each thread is reanalyzed, now taking into account the interferences computed in the previous step, which discovers new behaviors, and possibly new interferences. The analyses are iterated until the interferences stabilize, at which point we have explored a superset of all possible program behaviors and reported all the possible errors. The increasing sequence of interferences is stabilized efficiently by using a widening. When several threads execute the same code, it is only necessary to analyze them once per iteration; this enables the efficient analysis of programs creating an arbitrary (possibly unbounded) number of instances of some threads. The theoretical foundation of this method is presented in [17, 18].

4.3.3 Interference abstraction

Thread-modular analyses introduce the notion of interference. As for program states, interference sets are not computed exactly but rather over-approximated at some abstract level. Interferences require abstract domains of a different nature because they represent state transitions. Initially, AstréeA used a simple interference abstraction, which only remembers, for each variable and thread, an interval over-approximating the values stored by that thread into that variable. This technique is very scalable and sufficient in many cases. It was then refined by adding a measure of flow-sensitivity and relationality.
Firstly, AstréeA takes mutual exclusion into account by partitioning interferences according to the locks held and thread priorities, thus removing spurious interferences. This can be further improved by using relational domains (such as octagons [16]) to track lock invariants, i.e., relations between variables that are maintained by locks: they may be temporarily invalidated inside a critical section, but are restored before releasing the lock; hence, the violation remains invisible as long as all accesses are correctly protected (which is checked by AstréeA). As a second example, AstréeA includes a domain tracking which variables are only incremented. This is an example of specialization: the domain was added specifically to analyze code with clocks that are sampled and integrated, as shown in Fig. 3. The correctness of the program depends on the fact that, between two successive reads of a clock by a thread, the clock can only be incremented by other threads and never decremented. Additionally, these abstractions can be proved to be sound with respect to widespread weakly consistent memory models (such as Total Store Ordering, used notably on multi-core Intel x86).

4.3.4 Additional abstractions

In addition to being concurrent, the code we consider in our study is more general than the control-command code considered by Astrée. Handling it precisely and efficiently required some specialization of the abstractions. Firstly, we need to analyze software making extensive use of large data structures, such as nested arrays and structures. We have thus enriched the abstraction with a notion of dynamic array folding: a contiguous sequence of array elements can be represented by a single abstract element. Arrays start unfolded, and are folded dynamically when the analysis encounters an imprecise pointer targeting many array elements. This improves time and memory efficiency without jeopardizing precision.
We also developed a new numeric domain for offsets, able to represent succinctly complex access patterns (e.g., \( \{ 4 \times i + 100 \times j \mid i \in [0, 10], j \in [0, 10] \} \), an access to a part of a matrix flattened, as is common in C, into a one-dimensional array). Similar dynamic abstractions are employed to merge several concrete variables into a single abstract one, and we also defined a pointer widening to accelerate loops accessing linked lists. Secondly, the case studies contain more complex control than usually found in control-command software, including deeply nested functions and loops. A drawback of interpretation by induction on the syntax is that, to analyze nested loops, the inner loop must be completely reanalyzed for each iteration of the outer loop. We solved this problem by caching loop invariants, and reusing them to bootstrap subsequent analyses of the same loops, reducing the number of iterations needed to find a new invariant. The cache also accelerates the iterations required to stabilize interferences.

5. CASE STUDIES AND EXPERIMENTS

We have applied AstréeA to the analysis of various industrial concurrent cockpit avionics software. The case studies are described in the following sections, and the results are summarized in Fig. 1.

Figure 1: Summary of case studies with the original size (in lines), the lines of code added for the analysis, the selectivity, the time and memory consumption, and the analysis context (academic or industrial setting).

In addition to the size of the use cases, we indicate the number of lines we had to add or modify to perform the analysis (which gives an idea of how much work is required to prepare a new analysis), and the selectivity (a measure of precision defined as the percentage of alarm-free lines). Some information is omitted due to non-disclosure agreements. Some analyses were performed by researchers in an academic setting, and others by engineers in an industrial setting.
A preliminary version of the first case study (Sec. 5.1.1) was presented in [17]; it is presented here updated and in more detail. The other case studies are new.

5.1 Analysis of ARINC 653 Applications

5.1.1 Primary case study (DAL C)

Our first analysis target is a large embedded avionics code, featuring 15 threads and 2.1 M lines of C (after preprocessing and removal of redundant declarations). This DAL C application monitors and aggregates a large number of data coming from ports and displays synthetic summaries in an interactive way in the cockpit. It contains 100 K lines of hand-written C code performing a variety of tasks, including parsing binary messages, formatting strings, and managing and sorting arrays and lists, as well as 2 M lines of automatically generated code, in particular reactive synchronous logic à la SCADE running in threads concurrently with other tasks and featuring boolean, integer, and float computations. We analyze the original application almost completely; to simplify, we omitted the custom error handler, which eventually halts the application and thus does not add errors.

ARINC 653 model. Astrée and AstréeA are whole-program analyzers that take as input programs without undefined symbols. This is not a problem for Astrée, as it focuses on synchronous programs that run on bare metal and are inherently self-contained. On the contrary, the programs analyzed by AstréeA interact with an operating system through function calls. For instance, the applications considered in this section run on ARINC 653, a specification for embedded avionics real-time operating systems [2]. We analyze the application without the actual system implementation, but with a hand-crafted model of its specification.
This has several benefits: the analysis is sound with respect to any system implementation obeying the specification; the analyzed code does not exhibit the hard-to-analyze low-level features encountered in system implementations; and we can enrich the specification with assertions to check that the application obeys API contracts. Internally, AstréeA supports several kinds of objects and a set of primitives to manipulate them, including for instance:

- Threads,⁴ which must be registered during a sequential initialization phase with a directive __ASTREE_create_process(i,p,f). They are assigned by the program an integer identifier i, an integer priority p, and an entry point f. Other primitives, taking i as argument, can change the thread state (e.g., stopping, pausing, yielding, etc.).

- Mutexes, which are also denoted by integers. AstréeA assigns a mutex to every 32-bit integer i, so mutexes do not need to be registered before being used. For instance, the primitive __ASTREE_lock_mutex(i) will simply lock the mutex identified by i.

ARINC 653 objects are, however, more complex. They must be created during initialization and have a rich set of API functions as well as properties (such as a name). The model thus consists of an abstract implementation of each API function, written in C enriched with built-in primitives, so that the combination of the analyzed application and the model is a stand-alone program with no undefined symbols.
Figure 2 gives, as an example, a simplified version of our stub for semaphore locking, and illustrates several interesting points: we validate arguments to report API violations (__ASTREE_error); we distinguish between locking with and without a timeout (__ASTREE_lock and __ASTREE_yield include a non-deterministic wait allowing lower-priority threads to be rescheduled⁵); and we model both the case where the timeout elapses without the semaphore being locked and the case of a successful locking, selecting between them with a non-deterministic choice (__ASTREE_rand) to ensure that the analysis soundly considers both cases. Identifiers for ARINC 653 and AstréeA objects (such as SEMAPHORE_ID) are allocated at creation time, and all their properties are maintained in plain C arrays. The built-in support for non-determinism makes it very easy to soundly model an unknown environment; for instance, we model reading a message from a port connected to another, unanalyzed application as reading non-deterministic values.

⁴ Execution units are actually called processes in ARINC 653, but behave like POSIX threads as they execute in a shared memory. We call them threads here for consistency.

⁵ Note that our system is fully preemptive: the current thread can return from __ASTREE_yield by interrupting a lower-priority thread at any point of its execution.

```c
void WAIT_SEMAPHORE(SEMAPHORE_ID_TYPE SEMAPHORE_ID,
                    SYSTEM_TIME_TYPE TIMEOUT,
                    RETURN_CODE_TYPE *RETURN_CODE) {
  *RETURN_CODE = NO_ERROR;
  if (SEMAPHORE_ID < 0 || SEMAPHORE_ID > NB_SEMAPHORE) {
    __ASTREE_error("invalid semaphore");
  } else if (TIMEOUT > 0) {
    if (TIMEOUT != INFINITE_SYSTEM_TIME_VALUE && __ASTREE_rand()) {
      *RETURN_CODE = NOT_AVAILABLE;   /* the timeout elapsed */
    } else {
      __ASTREE_yield();
      __ASTREE_lock_mutex(SEMAPHORE_ID);
    }
  } else {
    __ASTREE_lock_mutex(SEMAPHORE_ID);
  }
}
```

Figure 2: Simplified stub for ARINC 653 semaphore locking.

```c
#define CLOCK(Dst,Enable,Clock) {                     \
  static unsigned Prev;                               \
  __ASTREE_octagon(Dst,Prev,Clock);                   \
  if (Enable) { Dst += Clock - Prev; Prev = Clock; }  \
  else Prev = Clock;                                  \
}
```

Figure 3: Time accumulator. The precise analysis requires proving that other threads only increment Clock, and tracking relations with the octagon domain (hence the __ASTREE_octagon directive).

We implemented 66 ARINC 653 system calls in a 3.9 K-line model. Note that the correctness of the analysis depends on the correctness of the stubs: they must include all the behaviors of the actual OS implementation. However, the quantity of code to be trusted is small (1 stub line for 500 lines of application), stubs can be reused from one analysis of an ARINC 653 application to the next, and they are easily understandable by engineers, which improves our confidence in their correctness.

Analysis results and refinement. Currently, our analysis exhibits 1095 alarms, i.e., a 99.94% selectivity. The selectivity is higher (99.98%) on the automatically generated parts than on the manual parts (99.2%), which is expected as the latter are more complex and less regular (although much smaller). Early experiments using Astrée's state abstractions and a non-relational, flow-insensitive interference abstraction reported more than 12000 alarms [17]; this number was decreased by specialization in AstréeA. As in our experiments with Astrée, this included the design of new abstractions. In some cases, however, AstréeA already contained adequate domains and it was sufficient to configure the analyzer to use them on specific variables and program parts (by default, costly domains are only used parsimoniously, for scalability reasons). An example is given in Fig. 3: the three variables Clock (a monotonic clock), Prev (its previously sampled value), and Dst (a time accumulator) must be related in the octagon domain. This information is communicated to the analyzer through the insertion of a directive, __ASTREE_octagon. Additional directives are used to gain precision by unrolling loops, handling arrays in a field-sensitive way, or enabling path-sensitivity.
We added 2302 directives, most of which (2183) appear in the manual code; directives in automatically generated code appear in macros that are massively duplicated by macro-expansion. Ultimately, adapted heuristics can be incorporated to control the precision.

5.1.2 Additional case study (DAL C)

Our second application of AstréeA is the analysis of an application featuring 11 threads and implementing functions similar to the first one, but for a different aircraft. As a consequence, several interfaces and functionalities differ significantly. It has, however, a similar overall structure and set of threads, and about 20% of the hand-written source code is common. The automatically generated code is different, though its structure is very similar. Two major (non-consecutive) versions of this program were analyzed. The first one is composed of 1.9 M lines, among which 155 K are hand-crafted. The second one is composed of 2.1 M lines, among which 160 K are hand-crafted. This case study was conducted in an industrial environment. The first version was analyzed by an avionics software engineer experienced in static analysis with Astrée and AstréeA, while the second was analyzed by a software engineer experienced with Astrée, but with no prior exposure to AstréeA. This case study benefited from the specialization work performed for the primary case study. The ARINC 653 model was reused with minor adaptations (about 8%) and 7 new API functions were modelled. Much of the analysis refinement effort consisted in adapting AstréeA directives from the first case study to the similar (but different) application software: 2178 directives were used for the first version, with limited adaptation for the second one (about 5%). Currently, our analyses exhibit 8573 alarms, i.e., a 99.56% selectivity, for the first version, and 10735 alarms, i.e., a 99.52% selectivity, for the second.
The selectivity is again higher for automatically generated code (99.79% for both versions) than for hand-written code (96.88% for the first version and 95.97% for the second). Analyzing these applications is 6 times slower and slightly less precise than our primary case study (Sec. 5.1.1), for a similar code size. Indeed, our first case study benefited from a larger code-specific specialization effort, which improved not only precision but also efficiency, by carefully selecting only the useful abstractions.

5.2 Analysis of POSIX Applications

AstréeA was extended to analyze a family of embedded avionics applications developed at Airbus and running under POSIX systems. The POSIX standard is more general and more complex than ARINC 653. However, Airbus applications rely only on a subset of POSIX similar to ARINC 653. This subset includes POSIX extensions such as Threads, Thread Execution Scheduling, Realtime, Message Passing, Semaphores, Timeouts, and Timers. The applications use a variety of POSIX objects such as processes, threads, mutexes, condition variables, message queues, and semaphores. They also use sophisticated objects not directly related to parallelism, including regular files, named pipes, shared memories, and environment variables. Peculiarities of the semantics of some of the associated primitives make analysis in the large especially challenging. For instance, the shm function returns a valid pointer upon successful completion, or -1 in case of failure. To avoid false alarms, we use partitioning techniques which allow the analyzer to distinguish, by path sensitivity, between the normal and error cases. To facilitate the analysis by non-expert engineers in an industrial context, we chose to restrict the scope from complete POSIX applications to so-called subsets of them.
These subsets exist independently from the needs of static analysis, as part of the (piece-wise) development strategy at Airbus, and are subject to software integration testing: an application is typically decomposed into 5 to 10 subsets.

5.2.1 Analysis of a DAL E application

This case study was conducted by an avionics software engineer experienced in static analysis, with extensive support from Astrée developers. It aimed at analyzing a subset of an avionics application developed at Airbus with low safety-criticality, composed of 300 K lines of hand-crafted C code. This application implements embedded system failure monitoring and correlation to optimize aircraft maintenance on the ground. It is composed of 70 threads, some of which have the same priority, and sometimes the same entry point. It is a complex program performing intensive string processing, and traversing large arrays of structures by means of nested loops and pointer arithmetic. The case study focused on a subset of this application composed of 7 threads and 32 K lines of C. 790 lines of stubs were developed to abstract away the threads (or parts of threads) excluded from the analysis. Such stubs are non-deterministic C programs using Astrée directives. This work requires precise knowledge of the design of the software, and can be inspired by existing simulations developed for integration testing purposes.

**POSIX model.** As for ARINC 653, a library of stubs was developed to model the POSIX primitives used by the program to be analyzed. 1200 lines of C code and Astrée directives were written to model 45 API functions, among which 31 are related to multi-threading. These primitives handle threads, mutexes, condition variables, message queues, and semaphores, but also time, string management, and I/O.

**Analysis results and refinement.** The first static analyses yielded about 1300 alarms, while not covering all reachable code.
Therefore, the model of the unanalyzed threads was refined, e.g., adding missing initializations for correctness, and tuning the sets of values that may be written for precision. Astrée provides useful indicators for data and control coverage, such as the set of unanalyzed control points and invariants reduced to singletons (i.e., variables deemed stuck at their initial values). The analyzed program was annotated with 197 Astrée directives. The subsequent analyses cover all the reachable control points, while yielding 865 alarms.

**Remaining alarms.** Most of the remaining alarms are not related to parallelism but are caused, e.g., by complex string processing and pointer arithmetic. However, a concurrency-related source of imprecision is the use, in some cases, of *ad hoc* boolean flags playing the role of mutexes to implement critical sections. This non-standard choice is motivated by performance or system configuration constraints. For instance, the algorithm of Fig. 4 synchronizes the access to `X` between two threads with arbitrary priorities:

```c
/* thread 1 */
if (b == 0) { access(&X); b = 1; }
/* thread 2 */
if (b == 1) { access(&X); b = 0; }
```

**Figure 4: Mutual exclusion between threads with arbitrary priorities**

The first (resp. second) thread accesses `X` only if the boolean flag `b` is set to 0 (resp. 1), and then sets `b` to 1 (resp. 0). Atomic access is guaranteed by the size of the data and the use of the `volatile` qualifier for the shared variables `b` and `X`. This synchronization mechanism is incorrect when considering weakly consistent memories, but it is correct for a real-time scheduler on a single mono-core processor, provided that complementary verification activities are conducted to ensure that the compiler does not suppress or reorder volatile accesses.

5.2.2 Analysis of DAL C middleware

The next use case was conducted in an industrial environment, by an intern with no prior exposure to static analysis.
This use case considers a subset of a complex avionics software platform. The full platform features 11 privileged multithreaded POSIX processes, running on top of an embedded real-time operating system offering POSIX services to applications. The platform is composed of 400 K lines of C, more than 80% of which are hand-written. It implements a variety of communication protocols, as well as human-machine interface functions. The case study addresses the process in charge of interactive cockpit displays. It is composed of 33 K lines of C and 4 threads. The process reads a set of constant binary configuration files at start-up, which must be taken into account to prove the correctness of the application. Hence, we precisely model a part of the file system service, including the constant file data. The library of stubs of POSIX primitives from the case study of Sec. 5.2.1 was also extended with 25 new system calls used by this process, including named pipes (used to read inputs) and shared memories (used to communicate with other privileged processes of the platform). Significant adaptations of the existing primitives were also necessary, as the underlying operating system implements a different version of POSIX, with different implementation choices. Altogether, 70 API functions were modelled in a POSIX stub library comprising 1000 lines of C with Astrée directives. The model of the environment of the analyzed process (including a model of the non-analyzed processes) is only a few tens of lines long and models asynchronous writes into pipes and shared memories. The program itself was annotated by means of 228 Astrée directives. Current analyses exhibit 932 alarms, i.e., a 94.5% selectivity. Ongoing experiments target other processes of the same avionics platform.

5.3 Analysis of an OS fragment

Our last case study is an ongoing project to analyze a part of an embedded operating system with AstréeA. We focus on the component providing ARINC 653 entry points to applications.
We analyze almost all of the component, including the underlying implementation of preemptive multithreading through a priority-driven scheduler, the various communication objects, timers, and error handling mechanisms, but excluding machine-language context switching.

**Stubbing and modeling.** The ARINC 653 implementation interacts with applications as they perform system calls. To achieve a stand-alone analysis of the implementation that takes into account all its execution contexts, we have written a 1.3 Kloc analysis driver. Firstly, it takes care of calling the ARINC 653 initialization routines (this is normally performed as part of OS bootstrapping, which we do not analyze as it is written in assembly). Secondly, it provides abstract configuration tables that declare an arbitrary number (up to the system limit) of ARINC 653 objects of arbitrary name and properties. Normally, the OS is compiled with a fixed set of partitions (conceptually similar to POSIX processes), each composed of a fixed set of threads, described by a table fixed at compilation; instead of a fixed table, our analysis considers the set of all valid tables. Thirdly, it models an arbitrary application execution by issuing all possible sequences of ARINC 653 system calls with all possible argument values (within the ranges authorized by the specification). We use non-determinism extensively to achieve, in a single analysis, a sound coverage of a large number of execution environments. The result is an analysis that is sound for any multi-partition, multi-thread application respecting the API contract. Note that, when analyzing ARINC 653 applications (Sec. 5.1), we explicitly checked API contracts by inserting into the OS stubs assertions that are verified by the analyzer. Thus, separate analyses of the OS and the applications ensure the safety of the whole system. An additional 150-line stub provides a C implementation for assembly functions.
Most notably, threads and context switching are implemented in the OS by, respectively, allocating a stack for each thread and switching stacks, which cannot be expressed in C. In our model, we instead associate an AstréeA thread to each stack and model stack switching as a non-deterministic wait allowing arbitrary threads to run. Handling concurrency is critical to soundly model all the cases where several threads and partitions concurrently access the ARINC 653 implementation.

**Results.** An early experiment conducted in an academic setting achieved a 94.3% selectivity rate within 22 min of computation time and 900 MB of memory consumption. The false alarms come from data structures and program patterns never encountered by AstréeA before, and for which it lacks dedicated abstract domains. We stress that better precision can be achieved, in the future, by specializing the analysis to this new class of software. This experiment validates the claim that sound static analyses checking significant parts of embedded operating systems are possible, and that AstréeA provides a promising architecture to achieve them.

6. RELATED WORK

The problem of verifying concurrent systems has been considered by formal methods for decades. Many works have been inspired by Jones' rely–guarantee reasoning [15], primarily to design deductive methods; AstréeA applies similar principles to abstract interpretation instead. Deductive methods put a heavy burden on the user by requiring manual code annotations and, especially, by requiring, for each thread, a model of its environment (i.e., of the other threads). Moreover, deductive tools for concurrent programs are not as mature as those for sequential programs. Sequential deductive methods are used at Airbus to replace unit testing, as interfaces must be developed for such tests anyway.
Model checking can also handle concurrent programs, but suffers from state (in explicit-state methods) or path (in SAT-based methods) explosion problems, which have only been partially addressed by partial-order reduction methods. Hence, popular software model checkers often explore only a part of the software behaviors (in bounded or context-bounded model checking), which is useful to find errors but cannot serve for verification. An application of model checking to ARINC 653 software is reported in [21]. Static analysis by abstract interpretation allows, on the other hand, designing sound and automated tools, making it well suited for industrial use, provided that scalability and precision objectives can be reached. Apart from the work on Astrée and AstréeA discussed at length in this article, thread-modular static analysis has been considered in other works. The Goblint analyzer [22] focuses only on detecting data races. The POSIX Thread analysis of Carré and Hyman [7] uses non-relational abstractions, which limits its precision. We refer the reader to [9] for an in-depth comparison of various formal methods and static analysis techniques. Dynamic analyzers, such as Valgrind, are also popular tools, but have a different purpose: they can be used for testing and debugging but, as they are not sound, they cannot be employed to gain certification credit and replace less cost-effective verification methods.

7. CONCLUSION

The use of formal methods in embedded avionics software is now sanctioned by Certification Authorities, in the DO-178C and DO-333 international standards. They can thus complement or replace some reviews, intellectual analyses and tests required for certification. Formal methods are particularly satisfying as they provide strong guarantees based on rigorous mathematical theories. Their use is however subject to the availability of tool sets that both comply with DO-333 requirements and are cost-effective in an industrial context.
Sound static analysis tools present a compelling choice: for instance, the Astrée industrial analyzer is being used at Airbus to help verify non-functional safety properties of large sequential C codes. For concurrent software, however, no industrial tool exists. We have discussed AstréeA, a prototype extension of Astrée to concurrent C code, presented several case studies analyzing a variety of concurrent avionics software, and reported promising experimental results. A key factor is that 4 out of our 6 studies were successfully conducted by engineers in an industrial context, which shows that our tool is mature and cost-effective.

**Future work.** For static analysis of concurrent software to be fully exploitable for certification in the avionics industry, two lines of work need to be conducted. It is necessary to improve the precision, automation, and generality of current analyzers. Reducing the number of false alarms is critical because, for certification, each alarm must be proved spurious by other, more costly means (such as code review). In the limited scope of synchronous fly-by-wire, Astrée was able to achieve the zero-false-alarm goal on specific code. For more complex, concurrent, less critical software, a more modest objective is acceptable, i.e., one alarm per 500 lines of hand-written code, and one alarm per 10,000 lines of automatically generated code (which is almost but not quite reached in our case studies). Furthermore, the results reported here were achieved at the cost of a significant manual parameterization of the analysis. The automated parameterization techniques from Astrée must be adapted to the codes targeted by AstréeA to achieve full automation. Finally, models of the underlying operating systems are necessary to conduct a sound analysis. Currently, only ARINC 653 and a subset of POSIX Threads are supported.
Future work will consider additional models, including larger subsets of POSIX, alternate operating systems, and implementation-specific variants. The analysis must also be integrated into industrial processes. A first step, which is underway, is to transfer AstréeA to a suitable tool provider company, able to provide support and extensions to industrial end-users. The analyzer then needs to be qualified by the industrial end-user, in the context of the avionics software products to be certified. After this step, the source-level analysis of AstréeA will be usable to automate the reviews and analyses of source code required by DO-178. To go further and alleviate robustness testing, DO-333 requires that a proof of soundness with respect to the binary be provided. An attractive solution for sequential software is to use a certified C compiler, such as CompCert [3], which ensures that a proof performed on the C source is also valid on the binary. For AstréeA, the proof of equivalence between source and binary must be extended to concurrent programs. Additionally, we must make sure that CompCert and AstréeA have equivalent formal notions of the semantics of C programs. We believe that, in the near future, certification objectives using formal methods can be achieved on concurrent avionics software. We are confident that sound static analysis, and in particular AstréeA, can also benefit other industries employing concurrent embedded software with similarly stringent certification processes.

8. REFERENCES
GMSH GUIDE FOR MESH GENERATION

Authors: Manuel Carmona, José María Gómez, José Bosch, Manel López

December 2014, Barcelona, Spain.

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivs 3.0 Unported License: http://creativecommons.org/licenses/by-nc-nd/3.0

Table of contents

I. Introduction to GMSH
II. Introduction to GMSH Commands
III. Commands for geometry generation
IV. Commands for meshing the geometry
V. Examples
  V.1. Example 1: Membrane
  V.2. Example 2: Red Blood Cell
  V.3. Example 3: Straight artery with boundary layer
VI. List of commands
VII. GMSH graphics interface (GUI)

I. Introduction to GMSH

GMSH [1] is free software distributed under the terms of the GNU General Public License (GPL). It is mainly a mesh generator, provided with a CAD (Computer-Aided Design) engine and a post-processor tool. Although it has a graphical interface, one of its strong points is that it accepts a parametric input script for mesh generation.
A drawback is that it is not well suited to defining complicated geometries (for example, at the time of writing it is still not able to perform geometric boolean operations, which are generally quite useful). Indeed, for complex geometries, the preferred method is to use external CAD software to generate the geometry and afterwards import it into GMSH. As is the general procedure for mesh generators (and also for GMSH), first we have to generate geometrical entities (points, lines, areas and volumes, in this order). In GMSH there are also physical groups, which are simply groups of geometrical entities (points, lines, surfaces or volumes). Once the geometry is defined, we have to specify (as far as possible) how the FEM elements have to be generated. And finally, we order GMSH to mesh them (i.e., to generate the FE nodes and elements). With GMSH we can generate some simple geometries. The developers are improving this aspect, but at the time of this publication, only some limited geometry operations are possible. FEM models can be generated graphically (through the GUI of GMSH) or by means of scripts (files containing the instructions for mesh generation). GMSH scripts have the extension '.geo'. In these scripts, we can define all the instructions that can also be input graphically. The use of these scripts has many advantages:

- We can re-use them for other models.
- We can parameterize the model, in order to be able to change dimensions or properties without the need to start from scratch.
- Portability is better: the script file occupies a few kilobytes, while a full model can be quite large to back up. The only disadvantage is that the script has to be re-run to regenerate the model.

In this guide, the language and options for creating GMSH scripts will be explained. This guide does not intend to substitute for the GMSH manual provided with the software.
It just intends to explain the basics of GMSH script generation, illustrating them with examples for a better understanding. The use of the GUI should be quite straightforward once we know what every option means. Moreover, the GUI interface will be further developed in the lab sessions.

II. Introduction to GMSH Commands

First of all, we have to take into account that some of the syntax is similar to the C language (and therefore also to Java). This also means that, generally, commands are ended with a semicolon (';') and that expressions are case sensitive. Also, indexes (for vectors, for example) start from 0. Comments follow the same format as in the C language:

- // for a line comment.
- /* ... */ for a multiple-line comment.

Regarding the types of variables, there are only two different types: real and string. Their syntax is also similar to the C language. We can highlight some key points:

- `#vector[]` provides the number of elements in the vector (or list).
- To ask the user for a real value (with a default value), we can use the function `GetValue`. Example: `lc = GetValue("Enter the characteristic length for the mesh: ", default_value);`
- Similarly, to ask the user for a string, we can use the function `GetString`.

We can also highlight:

- The instruction `Point {Point_id}` provides the coordinates of a given point.
- `Point "*"` provides the ids of all the point entities. The same holds for `Line`, `Surface` and `Volume`.
- For obtaining the last number+1 associated to an entity, we can use the commands: `newp` (point), `newl` (line), `newc` (curve), `news` (surface), `newv` (volume), `newll` (line loop), `newsl` (surface loop), `newreg` (any entity other than a point).

Different expressions can be given separated by commas, and grouped by '{ }'.
There are some functions for strings: `StrCat` (concatenates two strings), `Sprintf` (creates a string in the way C's 'printf' would print it), `GetString` (asks the user for a string). For printing in the text window, we can use the command `Printf`, which has a format similar to C. Example: `Printf("Length: %g, Width: %g, Thickness: %g", L, W, t);`

Operators are like in C, with '^' for exponentiation. The ternary operator '? :' (reduced if) is also allowed. Built-in functions are: `Acos` (returns a value between 0 and Pi), `Asin`, `Atan`, `Atan2`, `Ceil`, `Cos`, `Cosh`, `Exp`, `Fabs`, `Fmod`, `Floor`, `Hypot`, `Log`, `Log10`, `Modulo`, `Rand`, `Sqrt`, `Sin`, `Sinh`, `Tan`, `Tanh`.

We can define functions, but they are like macros, with no arguments: at the place where a function is called, it is literally substituted by the function definition. They are defined with:

    Function name
      ... body of the function ...
    Return

These functions are called with the command Call: `Call name;`

':' is used to obtain a range of values with a specific increment (which defaults to 1). We can also build for-loops. There are two possible implementations:

- **For (expression : expression : expression)**: the last expression is the increment, and it is optional.
- **For variable In {expression : expression : expression}**: in this case, the variable gets the values of the list after In.

Both for-loops finish with EndFor. And finally, we can also build the conditional 'if':

    If (expression)
      ...
    EndIf

There is no Else in GMSH. To start an empty list: `list[] = {};`

III. Commands for geometry generation

Geometries are generated by creating first points, then lines, then surfaces and, finally, volumes. These are called elementary entities. Each one is assigned an identifier (a number) at the moment it is created. Groups of elementary entities can be formed.
They are called physical entities. They also have an identifier. Nevertheless, they cannot be modified by geometry commands. For later use in Elmer, it is important to define physical entities. They will be numbered accordingly. For generating surfaces, first we have to generate what are called 'line loops' in GMSH. They are simply a number of lines that form a closed path (and therefore delimit an area). Let's see examples of how these entities are generated.

Points:

Example: `Point(1) = {x, y, z, lc};`

A point with id 1 is created at position (x, y, z). `lc` is an optional parameter indicating the size of the elements near this point when the elements are generated.

Example for a group of points: `Physical Point(5) = {1, 2, 3, 4};`

Lines:

There are several different types of lines. The most basic is the straight line (Line). Example: `Line(1) = {1, 2};`

A straight line with id 1 is created between points 1 and 2. A direction is also defined, in this case from point 1 to point 2. This is considered the positive direction of the line. For creating surfaces, we first need to create a line loop. Example: `Line Loop(5) = {1, 2, -3, -4};`

In this case a line loop with id 5 is created, defined by the lines with ids 1, 2, 3 and 4. The line loop follows a direction that has to coincide with the indicated line directions. If a line has the opposite direction, a negative value for that line has to be indicated. This also defines a direction for the surface. Other commands for lines are: Circle, Ellipse, Spline, BSpline, Compound Line, Physical Line.

Areas (or Surfaces):

Example: `Plane Surface(1) = {2, 3};`

A plane surface with id 1 is created using the line loops 2 and 3. The first line loop defines the external boundary of the surface. The rest of the line loops define holes in this area. For creating volumes, we have to create surface loops.
They have to define a closed volume, with an appropriate orientation of the surfaces. Example: `Surface Loop(1) = {1, -2, 3, -4, 5, -6};`

Other commands related to areas are: Ruled Surface, Compound Surface, Physical Surface.

Volumes:

Example: `Volume(1) = {1, 2};`

A volume with id 1 is created, based on surface loops 1 and 2. As with surfaces, the first surface loop defines the external boundary of the volume, and the rest of the surface loops define holes. Other commands are: Compound Volume, Physical Volume.

Geometrical entities can also be generated from already existing entities. There are two such processes implemented in GMSH: extrusion and transformations.

Extrusion:

This is a very useful method to create geometries and, at the same time, keep a "nice" mesh. By dragging an entity of lower level, we can generate an entity of higher level. For example, dragging a point a certain distance in a certain direction defines a line, a line defines a surface, and a surface defines a volume. This is valid not only for translations, but also for rotations. Moreover, we can also generate at the same time the mesh of this newly generated geometry. Example:

`sout[] = Extrude { {tx,ty,tz}, {rx,ry,rz}, {px,py,pz}, angle } { Line{1,2}; };`

The last expression provides the entities that are going to be extruded; in this case, lines 1 and 2 (therefore, we are going to generate surfaces). {tx,ty,tz} indicates the translation vector (direction and magnitude of the translation), {rx,ry,rz} indicates the axis of rotation, {px,py,pz} is a point on this axis of rotation, and angle is the magnitude of the rotation (in radians). If we only want a translation, we simply leave the rotation parameters out of the expression (and the same reasoning holds for a pure rotation). The extrusion command returns a list (vector) of ids. The first element is the entity of the same level as the original entity, generated at the far end of the extrusion. The second element is the higher-level entity that has been extruded.
The other elements are the rest of the entities (as a general rule, they follow the order of the elements of the original entity used for the extrusion). If more than one entity is extruded at a time, the items are repeated in the same way.

Transformations:

Some operations, like scaling, can also be applied to existing entities (or to a copy of them, via the "Duplicata" command) to generate new ones. The operations available in GMSH are:

Translate: we just need to provide the translation vector (which defines the direction and magnitude of the translation) and the entities to be translated. Example: `Translate {-0.01, 0, 0} { Point{1}; }`

Rotate: as for extrusions, we have to provide a vector defining the rotation axis, a point on the rotation axis and a rotation angle (in radians). Example: `Rotate { {1,0,0}, {0,0,0}, Pi/2 } { Surface{1,2,3}; }`

Symmetry: we just have to provide the coefficients of the equation defining the symmetry plane (A*x + B*y + C*z + D = 0) and the entities to be used. Example: `Symmetry {1,0,0,0} { Duplicata { Surface{1}; } }`

This command generates a symmetric copy of surface 1 with respect to the YZ plane, which passes through the point (0,0,0). As a reminder, the coefficients A, B and C coincide with the components of a vector normal to the plane; D can be obtained from the coordinates of a point on the plane.

Dilate: dilatation (or compression) by a homothetic transformation (like an image projection from a light source). It requires the position of the transformation point (equivalent to the position of the light source) and a factor (meaning how relatively far the projection is with respect to the distance from the object to the light source). Example: `Dilate { {0,0,0}, 5 } { Duplicata { Surface{1,2}; } }`

This creates a bigger version of surfaces 1 and 2, using the point at (0,0,0) and dilatation factor 5.
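Putting the geometry commands of this section together, a minimal sketch of a complete '.geo' script (all ids, dimensions and variable names are ours): it builds a unit square from points, lines and a line loop, extrudes it into a volume, and rotates a duplicate of the original surface.

```geo
lc = 0.1;
// Points -> lines -> line loop -> plane surface
Point(1) = {0, 0, 0, lc};
Point(2) = {1, 0, 0, lc};
Point(3) = {1, 1, 0, lc};
Point(4) = {0, 1, 0, lc};
Line(1) = {1, 2};
Line(2) = {2, 3};
Line(3) = {3, 4};
Line(4) = {4, 1};
Line Loop(5) = {1, 2, 3, 4};
Plane Surface(6) = {5};
// A physical group, e.g. for later use in Elmer
Physical Surface(1) = {6};

// Translation-only extrusion: drag surface 6 by 0.5 along z into a volume
vol[] = Extrude {0, 0, 0.5} { Surface{6}; };
// vol[0] is the surface at the far end, vol[1] the generated volume

// Rotate a duplicate of surface 6 by Pi/4 around the z axis
Rotate { {0,0,1}, {0,0,0}, Pi/4 } { Duplicata { Surface{6}; } }
```

Note how the translation-only Extrude drops the rotation parameters, as explained above, and how Duplicata leaves the original surface untouched.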
For the mesh to also be copied when making a duplicate of a geometric entity, we should set the option `Geometry.CopyMeshingMethod`.

IV. Commands for meshing the geometry

After having defined a geometry, we should specify how the mesh has to be generated. We have different tools for this purpose in GMSH. First of all, GMSH has three meshing algorithms: MeshAdapt, Delaunay and Frontal. In principle we do not have to worry about this, as GMSH uses a default algorithm depending on the geometry. Nevertheless, we can force GMSH to use a specific algorithm. As a general rule, MeshAdapt is good for complex surfaces, Frontal for obtaining good-quality meshes, and Delaunay is a fast algorithm (also appropriate for large meshes). As is usual for meshes, a distinction is made between structured and unstructured meshes. There is also the possibility of combining both, called a hybrid mesh. Structured meshes are "ordered" (or "regular") meshes, while unstructured meshes are the opposite. A simple example is illustrated in Figure 1. For generating structured meshes, we have basically two different methods: by extrusion of the geometry (together with the mesh), and the transfinite method (together with 'Recombine'). Additionally, all meshes can be subdivided to generate all-quadrangle or all-hexahedra meshes by using the `Mesh.SubdivisionAlgorithm` option (0=none, 1=all quadrangles, 2=all hexahedra). Therefore, mesh commands are issued for two purposes: defining the size of the elements, and defining parameters for structured meshes. The size of the elements can be specified in three different ways:

- When creating points (as already seen before). This is the default option. In this case, the option `Mesh.CharacteristicLengthFromPoints` is set. It interpolates between the different points for creating the initial mesh.
The characteristic element length for points can also be given with the command: Characteristic Length {list of points} = lc;

- If Mesh.CharacteristicLengthFromCurvature is set, the mesh is generated depending on the curvature of the lines.

- Specifying mesh size fields. Fields are values that depend (at least) on position, used for describing distributions of mesh densities. These fields are created with: Field[id number]=type; (type is the string defining the field, and id number is a number that we associate to this field). To modify the different options of a field, we use: Field[id number].option=value; To specify which field will actually be used, there is the command: Background Field=id number;

Some of these fields are:

- **Attractor**: Used to calculate at each location the minimum distance to the given points and/or lines and/or surfaces. Other fields can use this field as input. Options: NodesList, EdgesList, FacesList, NNodesByEdge, FieldX, FieldY, FieldZ.

- **Box**: Provides a value (VIn) inside a region (box) and another value (VOut) outside this region. Options: VIn, VOut, XMax, XMin, YMax, YMin, ZMax, ZMin.

- **Threshold**: Based on the distances calculated by another field (IField) (like Attractor), it sets an element size (LcMin) for distances under a given value (DistMin), another size (LcMax) for distances over another value (DistMax), and interpolated sizes in between these distances. Options: IField, DistMax, DistMin, LcMax, LcMin, Sigmoid, StopAtDistMax.

- **MathEval**: Sizes depend on a mathematical expression (F) that can use the position variables (x, y, z), field values (F1, F2, ...) and mathematical functions. Options: F.

- **BoundaryLayer**: In fluidics, meshes are usually generated quite fine near walls and less fine far from walls, in a very regular way. This field provides values that grow as we move away from the specified geometric entities.
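As an illustration (the field ids and numeric values here are arbitrary), an Attractor/Threshold pair that refines the mesh near point 5 could look like:

```plaintext
Field[1] = Attractor;
Field[1].NodesList = {5};    // distances measured from point 5
Field[2] = Threshold;
Field[2].IField  = 1;        // use the distances computed by Field[1]
Field[2].LcMin   = 0.01;     // element size for dist < DistMin
Field[2].LcMax   = 0.1;      // element size for dist > DistMax
Field[2].DistMin = 0.05;
Field[2].DistMax = 0.5;
Background Field = 2;        // the Threshold field drives the mesh size
```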
It follows the relation:

\[ h_{\text{wall}} \cdot \text{ratio}^{\,\text{dist}/h_{\text{wall}}} \]

Options: hwall_n, hwall_t, ratio, thickness, EdgesList, FacesList, NodesList, Quads, hfar, FanNodesList, FansList, AnisoMax, IntersectMetrics.

- **Min**: Provides the minimum value of a list of fields. Options: FieldsList.

- **PostView**: Element sizes at a location will be determined by the 'solution' values given on nodes (of a certain 'background mesh') near this location. Options: CropNegativeValues, IView.

- **Others**: Restrict, Max, Mean, CenterLine, Gradient, Structured, Param, Curvature, Sphere, Cylinder, Frustum, AttractorAnisoCurve, MathEvalAniso, MinAniso, IntersectAniso, Laplacian, MaxEigenHessian, LonLat, etc.

All three methods can be used at the same time. In this case, the minimum value (the most restrictive one) is used.

We can highlight some of the GMSH mesh options that can be defined (some of them already mentioned, and most of them self-explanatory):

- `Mesh.Algorithm`.
- `Mesh.CharacteristicLengthFromPoints`.
- `Mesh.CharacteristicLengthFromCurvature`.
- `Mesh.CharacteristicLengthExtendFromBoundary`: By default, the element sizes indicated on boundaries are "extrapolated" into the entities they define.
- `Mesh.CharacteristicLengthFactor`: Applies a factor to the characteristic lengths of the mesh.
- `Mesh.CharacteristicLengthMin`: Forces a minimum.
- `Mesh.CharacteristicLengthMax`: Forces a maximum.
- `Mesh.Optimize`.
- `Mesh.RecombineAll`.
- `Mesh.ToleranceEdgeLength`.
- `Mesh.LcIntegrationPrecision`.
- `Mesh.ColorCarousel`: Specifies what the color of the elements relates to: 0 for element types, 1 for elementary entities, 2 for physical entities and 3 for partitions.
- `Mesh.Color.Zero`.

For defining structured meshes, we have the following possibilities:

**Extrude**: It works as for geometries, but additionally we have to specify, after the entities list, the commands **Layers** and/or **Recombine**.
**Layers** specifies the number of elements to be generated along the extrusion. More than one layer can be specified at the same time, but in this case a list of normalized thicknesses for each layer also has to be provided. The **Recombine** command allows generating quadrangles in 2D and prisms (or hexahedra or pyramids) in 3D. Example: `Extrude {0,0,1} {Surface{1}; Layers{ {4,1}, {0.5,0.25} }; }`

**Transfinite interpolation**: This is used when a non-uniform distribution of element sizes is wanted at the boundary entities. The interpolation adjusts the element sizes according to a function of the coordinates. Example for a line: `Transfinite Line{1,-3}=10 Using Progression 2;` In this case, lines 1 and 3 will be meshed with 10 nodes, distributed spatially in a geometric progression of ratio 2 (following the direction of the line; the minus sign reverses that direction for line 3). We can also use "Using Bump 2" in order to refine the mesh at both end points of the line. It is similar for surfaces and volumes, but without the progression expression.

Additionally, we can also perform the following operations related to meshing:

- **In Surface**: Embed points and lines in surfaces, so that the mesh of the surface will conform to the mesh of the points and/or lines.
- **Periodical**: Force the mesh of one line or surface to match the mesh of another line or surface.
- **Reverse Line**: Change the direction of a line.

V. Examples

V.1. Example 1: Membrane

The first example shows a problem that uses many of the features available in GMSH. This example is a membrane, with different regular rectangular regions. The geo file has been written so that we can change the dimensions and the number of divisions without having to rewrite the file.
We only have to change the vectors xs[] and ys[]:

```plaintext
//****************
//** Parameters **
//****************
Lm=GetValue("Length of the membrane: ", 1e-3);  //Membrane length
Wm=GetValue("Width of the membrane: ", 0.5e-3); //Membrane width
lc=Wm/15; //Default characteristic length
//Lists of the rectangle positions on the membrane for directions x and y
xs[]={0,Lm/8,Lm/4,Lm/2,Lm};
ys[]={0,Wm/8,Wm/4,Wm/2,Wm};
nxs=#xs[]; //Number of elements in xs
nys=#ys[]; //Number of elements in ys
//****************
//**  Geometry  **
//****************
//Points for the membrane
p0=newp; //1
p=p0;
For indys In {0:nys-1}
  For indxs In {0:nxs-1}
    Point(p)={xs[indxs],ys[indys],0,lc};
    p=newp; //p=p+1
  EndFor
EndFor
//Horizontal lines
l0=newl; //1
l=l0;
For indys In {0:nys-1}
  For indxs In {0:nxs-2}
    Line(l)={p0+indys*nxs+indxs,p0+indys*nxs+indxs+1};
    l=newl; //l=l+1
  EndFor
EndFor
//Vertical lines
For indys In {0:nys-2}
  For indxs In {0:nxs-1}
    Line(l)={p0+indys*nxs+indxs,p0+(indys+1)*nxs+indxs};
    l=newl;
  EndFor
EndFor
//Line loops
ll=newll; //1
For indys In {0:nys-2}
  For indxs In {0:nxs-2}
    Line Loop(ll)={l0+indxs+indys*(nxs-1),l0+(nxs-1)*nys+indys*nxs+indxs+1,-(l0+indxs+(indys+1)*(nxs-1)),-(l0+(nxs-1)*nys+indys*nxs+indxs)};
    Plane Surface(ll)={ll};
    //Recombine Surface {ll};
    ll=newll; //ll=ll+1
  EndFor
EndFor
//************
//**  Mesh  **
//************
ss[]=Surface "*"; //All surfaces that have been generated
//Producing a square-element mesh
Transfinite Surface {ss[]};
Recombine Surface {ss[]};
//Generate the volume of the membrane
Extrude {0,0,10e-6} {Surface {ss[]}; Layers {3}; Recombine;}
```

The resulting mesh for this case is shown in Figure 2 and Figure 3.

V.2. **Example 2: Red Blood Cell**

This example illustrates one possible option for generating a model of a 3D red blood cell (RBC).
It is also generated as a parametric model.

```plaintext
//****************
//** Parameters **
//****************
R=10e-6;      // Radius of the border of the RBC
Lcent=100e-6; // Length of the central part of the RBC
depr=R/4;     // Magnitude of the depression of the central part of the RBC
th=R/5;       // Thickness of the cell
lcm=R/8;      // Characteristic length of the mesh
//****************
//**  Geometry  **
//****************
//First iteration for the outer lines, second for the inner lines
For inout In {0:1}
  np=newp;
  //Left circle
  Point(np)={0,-R+inout*th,0,lcm};
  Point(np+1)={0,0,0,lcm};
  Point(np+2)={0,R-inout*th,0,lcm};
  Point(np+3)={-(R-inout*th),0,0,lcm};
  nl=newl;
  Circle(nl)={3+(np-1),2+(np-1),4+(np-1)};
  Circle(nl+1)={4+(np-1),2+(np-1),1+(np-1)};
  //Top central line (with Spline)
  Point(np+4)={Lcent/4,R-inout*th*0.7-depr,0,lcm};
  Point(np+5)={Lcent/2,R-inout*th-depr,0,lcm};
  Spline(nl+2)={3+(np-1),5+(np-1),6+(np-1)};
  //Bottom central line
  Point(np+6)={Lcent/4,-(R-inout*th*0.7-depr),0,lcm};
  Point(np+7)={Lcent/2,-(R-inout*th-depr),0,lcm};
  Spline(nl+3)={1+(np-1),7+(np-1),8+(np-1)};
EndFor
Line(nl+5)={6,np+5};
Line(nl+6)={np+7,8};
Line Loop(1)={-2,-1,3,nl+5,-(nl+2),nl,nl+1,nl+3,nl+6,-4};
Plane Surface(1)={1};
//************
//**  Mesh  **
//************
//Number of divisions on the circular lines
Transfinite Line {1,2}=Pi*R/(2*lcm);
Transfinite Line {nl,nl+1}=Pi*R/(2*lcm);
Recombine Surface {1};
//Extrusions by 90 degrees each
ss[]=Extrude {{0,1,0},{Lcent/2,0,0},Pi/2} {Surface {1}; Layers{10}; Recombine;};
Printf("Generated first top area = %g", ss[0]);
ss1[]=Extrude {{0,1,0},{Lcent/2,0,0},Pi/2} {Surface {ss[0]}; Layers{10}; Recombine;};
Printf("Generated second top area = %g", ss1[0]);
ss2[]=Extrude {{0,1,0},{Lcent/2,0,0},Pi/2} {Surface {ss1[0]}; Layers{10}; Recombine;};
Printf("Generated third top area = %g", ss2[0]);
```

The model generated by this script is shown in Figure 4 and Figure 5.

![Figure 4. Mesh for the whole red blood cell.](image1)

![Figure 5. Cross section view of the red blood cell.](image2)

V.3.
**Example 3: Straight artery with boundary layer**

This example illustrates the use of Fields to control the mesh generation. We will use the BoundaryLayer field in a straight artery model, supposedly to be used in a fluidic simulation. In this case, the simplest way to do it would be to use 'Using Progression' when generating the lines; nevertheless, we will use Fields to illustrate their use. This field is especially "tricky", and some trial-and-error cycles have to be expected. Additionally, the effect of some of its parameters is not very clear. As an example, the command 'BoundaryLayer Field' cannot be found in the GMSH user's guide.

```plaintext
//****************
//** Parameters **
//****************
rcyli=1e-3; // Inner radius of the cylinder
hm=0.4e-3;  // Thickness of the membrane
Lcyl=10e-3; // Length of the artery
//**************
//**  Geometry **
//**************
//Points defining the cylinders; to avoid 'interactions' with Fields, we do not define lc at points
For val In {1:4}
  Point(2*val-1)={rcyli*Cos((val-1)*Pi/2),rcyli*Sin((val-1)*Pi/2),0};
  Point(2*val)={(rcyli+hm)*Cos((val-1)*Pi/2),(rcyli+hm)*Sin((val-1)*Pi/2),0};
EndFor
//Center point
np=newp; Point(np)={0,0,0};
//Lines of the cylinders
lsci={}; lsce={};
For val In {1:3}
  Circle(2*val-1)={2*val-1, np, 2*val+1}; lsci[] +={2*val-1};
  Circle(2*val)={2*val, np, 2*val+2};     lsce[] +={2*val};
EndFor
//val is increased at the end of the loop
Circle(2*val-1)={2*val-1, np, 1}; lsci[] +={2*val-1};
Circle(2*val)={2*val, np, 2};     lsce[] +={2*val};
//Surfaces
Line Loop(1)={lsci[]}; Plane Surface(1)={1};
Line Loop(2)={lsce[]}; Plane Surface(2)={2,1};
//***********************
//** Mesh definition  **
//***********************
//For speeding up the mesh calculations
Mesh.LcIntegrationPrecision=1e-3;
//To avoid a too high mesh density inside the cylinder
Mesh.CharacteristicLengthExtendFromBoundary = 0;
Mesh.CharacteristicLengthFromPoints=1;
Mesh.CharacteristicLengthFromCurvature=1;
//BoundaryLayer Field definition
Field[1]=BoundaryLayer;
```
```plaintext
Field[1].ratio=1.2;
Field[1].hwall_n=hm/20;
Field[1].hwall_t=hm/5;
Field[1].FacesList = {2};  //Avoids this field in Surface 2?
Field[1].EdgesList = {lsci[]};
Field[1].Quads=1;
Field[1].thickness=hm/2.5;
Field[1].hfar=hm/5;
//We define both settings as they do not seem to work properly individually
Background Field = 1;
BoundaryLayer Field = 1;
//Create square elements from triangular ones
Recombine Surface "*";
//Generate the body of the artery
surfs[]=Surface "*";
Extrude {0,0,Lcyl} {Surface{surfs[]}; Layers{15}; Recombine;}
Mesh.SurfaceEdges=1;
Mesh.SurfaceFaces=1;
```

The model generated by this script is shown in Figure 6.

![Figure 6. Mesh for the whole artery.](image)

## VI. List of commands

**Geometry:**
- Point(id)={x,y,z,lc};
- Line(id)={point ids};
- Plane Surface(id)={line loop ids};
- Surface Loop(id)={surface ids};
- Volume(id)={surface loop ids};

**Mesh:**
- Extrude{{tx,ty,tz},{rx,ry,rz},{px,py,pz},angle}{entities; layers;};
- Translate{vector}{entities};
- Rotate{{vector},{point},angle}{entities};
- Dilate{{point coords}, factor}{entities};
- Duplicata{entities};

**Flux Control:**
- Function name; Body; Return; Call name;
- For (expression:expression:expression) ... EndFor; For var In {expression:expression:expression} ... EndFor;
- If (expression) ... EndIf;
- var=GetValue(string_to_show, default_value);
- var=GetString(string_to_show, default_value);
- Printf(format string, expressions);

**Others:**
- newp, newl, newc, news, newv, newll, newsl, newreg;

VII.
GMSH graphics interface (GUI)

In this chapter, we will show how the GMSH GUI works. We will use the latest version of GMSH at the time this guide was written, i.e. version 2.8.3. The first recommended action is to indicate the name of the model that we will create. This will define the name of the file where the commands will be saved as we work in the GUI creating the model. To do this, just execute File → New. The main window of GMSH is shown in Figure 7.

![Figure 7. Main window of GMSH.](image)

The menu bar is used for opening/saving files, changing some GMSH options and setting some graphics actions. The different submenus in this bar are shown in Figure 8.

![Figure 8. Submenus in the menu bar.](image)

We can highlight some of these options:

- **File → New**: We can use this option to delete everything and restart the model from scratch (by overwriting the geo file).
- **File → Open**: For opening/executing geo or msh files.
- **Tools → Options**: For setting all options of GMSH.
- **Tools → Visibility**: For viewing/hiding entities in the graphics window.
- **Tools → Clipping**: For viewing a cross-section of the geometry or the mesh. By holding down the left button on options A, B, C or D of the planes section and dragging the mouse, we can continuously change the plane position or orientation.
- **Tools → Manipulator**: For precisely controlling the view orientation in the graphics window and the scales (zoom) in each direction independently.

At the bottom-left there is a drop-down menu for changing some general options of GMSH. At nearly the same position, on the bottom bar, there are some graphics commands for setting view directions, rotating the view, or activating the mouse selection. The O option allows us to set more graphics settings (also reachable by double-left-clicking in the graphics window). Finally, by left-clicking on the free space of this bottom bar we can show/hide the message console.
The graphics window is where the model and results are shown. With the mouse, we can continuously change the view orientation: left button: rotate; right button: pan (translate); middle button: zoom with respect to the mouse position; Ctrl+left button: region zoom; Ctrl+right button: return to the default view. All mouse and keyboard options can be consulted in the menu bar under Help → Keyboard and Mouse Usage.

The menu window allows us to access all options for the model (geometry and mesh) generation. The different submenus are shown in Figure 9.

![Figure 9. Submenus in the menu window.](image-url)

The geometry submenu is for the generation of geometric entities (points, lines, surfaces and volumes) as well as physical groups. The basic option is Add (also for defining parameters), but we can also observe the different geometric operations that are available in GMSH (translation, rotation, etc.). Here we can also delete already created entities. After executing these commands, in general, we have the option 'q' for ending/aborting the addition of entities and 'e' for adding the defined entity, but more options can appear, depending on the specific entity and operation. The different options in this geometry menu are shown in Figure 10.

![Figure 10. Options of the geometry submenu.](image)

The mesh submenu allows controlling how the mesh will be generated. The different options of this submenu are shown in Figure 11.

![Figure 11. Options of the mesh submenu.](image)

Let's illustrate the GUI usage with one simple example. We will generate a 3D membrane, with length lm (lm=1), width wm (wm=1) and a thickness th (th=0.2). We will define the element size near points as lc (lc=0.1). First of all, we will define the parameters that we will use. We define them in Geometry → Elementary entities → Add → Parameter. We just have to click the Add button once we have defined the different fields of the parameter.
The window that appears is shown in Figure 12, where the definition of the lc parameter is illustrated.

![Figure 12. Window for creating a parameter.](image)

For creating the needed points (in our case, we only need four, which will also define the four boundary lines of the membrane) we can just click on the Point sheet of the same window, or click on Geometry → Elementary entities → Add → Point. Points can be generated by clicking on the graphics window where we want to place the point, or by writing the coordinates in the Point window, which is shown in Figure 13. Be careful not to place the mouse on the graphics window after you have filled in the fields and before clicking the Add button, because the coordinates of the mouse will overwrite the values in the coordinate fields. If you want to create points by clicking on the graphics window, use the Shift key to fix the coordinates in the window even if the mouse moves.

![Figure 13. Window for creating a point.](image)

After creating all four points (press 'q' to end the point creation), as illustrated in Figure 14 (left), we have to generate the lines. We have to click on Geometry → Elementary entities → Add → Straight line, and click on the two end points that will define each line. The line is created with a direction, from the first point selected to the second one. We will end up with the four lines after this process, as shown in Figure 14 (right). Press 'q' as well to finish the creation of lines.

Figure 14. The four points (left) and the four lines (right) created for the membrane.

Now we will create the surface. In this case, we will create a ruled surface (i.e., a surface that can be meshed using transfinite interpolation). Therefore, we have to click on Geometry → Elementary entities → Add → Ruled Surface. In this case, we just have to click on one line, and GMSH will find the rest (there is no other possibility for creating a surface in our case).
We have to press 'e' to finish the selection of lines for the surface (or 'q' to abort the line selection if we had not selected the right set of lines), and afterwards press 'q' to definitely create the surface. The surface appears as shown in Figure 15. The dashed lines indicate that the surface has been created.

![Figure 15. Created ruled surface of the membrane.](image)

Before creating the volume, we will mesh the surface. In this way, when we create the volume (by translation of this already created surface), we will simultaneously create the volume mesh. For this, we have to indicate two things:

- We want to use transfinite interpolation for creating a regular mesh. This is done by clicking on Mesh → Define → Transfinite → Surface and selecting the surface (clicking on the dashed lines). (We can select all related entities in an area by keeping the Ctrl key pressed while defining an area with the mouse.)
- We want to generate square elements and not triangular ones. This is done by clicking on Mesh → Define → Recombine and selecting the same surface.

In order to check how the surface elements will look, we can generate the mesh by clicking on Mesh → 2D. The result should look like Figure 16, with a mesh of 10x10 square elements. Clear it with the menu bar option File → Clear.

![Figure 16. Meshed surface.](image)

Now, let's create the volume. We will use the extrude option Geometry → Elementary entities → Translate → Extrude surface. We have to enter the value th in the Z component field. Unfortunately, not all options are available in the GUI.
For specifying the number of elements in the extruded Z direction and indicating that we want cube-shaped elements, we will have to edit the geo file and include Layers{3}; Recombine; inside the last field of the Extrude command. The instruction should look something like: Extrude {0, 0, th} { Surface{5}; Layers{3}; Recombine; }. Save the file and run it with File → Open. By generating the 3D mesh, clicking on Mesh → 3D, we should obtain something similar to what is shown in Figure 17.

![Figure 17. Meshed volume.](image)

Acronyms:

2D: Two-dimensional
3D: Three-dimensional
CAD: Computer Aided Design
FEM: Finite Element Method
GPL: General Public License
RBC: Red Blood Cell
Motivation

Due to the outdated version of EPOS' current compiler, the latest standard available is C++0x, the draft version of what became C++11. As a consequence, many features cannot be used by the EPOS user, such as the ones listed below.

Some C++ features unimplemented in the C++0x support of GCC 4.4.4

Lambda

Lambda expressions were introduced in C++11 and are a mechanism for specifying a function object. The primary use for a lambda is to specify a simple action to be performed by some function. Lambda expressions bring some advantages, one of them being to make the code easier to read. This might sound a little weird, because some programmers think that lambda expressions by themselves are ugly, but consider this case:

```cpp
// An example of functional programming in C++
std::for_each( begin, end, doer );
```

The problem with this is that the function (object) doer:

- Specifies what's done in the loop
- Yet somewhat hides what's actually done (you have to look up the function object's operator()'s implementation)
- Must be defined in a different scope than the std::for_each call
- Contains a certain amount of boilerplate code
- Is often throw-away code that's not used for anything but this one loop construct

Lambdas considerably improve on the aspects listed above. As an example, take std::sort, whose third argument is a binary function that accepts two elements in the range as arguments and returns a value convertible to bool.
In pre-C++11 we could sort a vector in reverse order this way:

```cpp
#include <algorithm>
#include <vector>

bool comp(int a, int b) { return a > b; }

int main() {
    int arr[] = {32,71,12,45,26,80,53,33};
    std::vector<int> vint(arr, arr + 8); // no initializer lists before C++11
    std::sort(vint.begin(), vint.end(), comp);
}
```

In C++11 we can use a lambda instead of the function comp:

```cpp
#include <algorithm>
#include <vector>

int main() {
    std::vector<int> vint = {32,71,12,45,26,80,53,33};
    // auto deduces a closure type, usable like bool (*)(int, int)
    auto comp = [] (int a, int b) { return a > b; };
    std::sort(vint.begin(), vint.end(), comp);
    // or directly inline:
    std::sort(vint.begin(), vint.end(), [] (int a, int b) { return a > b; });
}
```

By bringing the implementation of the function closer to where it is used, the lambda expression makes the code easier to read. That way you don't need to break the reading flow of your code because you had to look elsewhere.

An interesting aspect of lambda expressions in C++ is the capacity to capture variables. By "capture" we mean you can use a variable declared outside of the lambda inside the expression. Or, if you are using C++14 or above, your captured variable can have an initializing expression inside the lambda:

```cpp
// Captured variables can have an initializing expression (C++14):
auto timer = [val = system_clock::now()] { return system_clock::now() - val; };
// ... do stuff ...
timer(); // returns time since timer creation
```

In this example, val is assigned the current time, and the elapsed time is then returned by the lambda expression. val doesn't need to be an existing variable, so this is in effect a mechanism for adding data members to the lambda. The types of these members are inferred by the compiler.
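For instance, an init-capture lets a lambda carry its own mutable state. This is a minimal compilable sketch; the function name make_counter is our own, chosen for illustration:

```cpp
// C++14 init-capture: each counter lambda owns its own 'count' member.
// 'mutable' is needed so the call operator may modify the captured state.
auto make_counter() {
    return [count = 0]() mutable { return ++count; };
}
```

Each call to make_counter() yields an independent counter, since the captured count lives inside the closure object rather than in any enclosing scope.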
Also, since C++14 we are allowed to capture move-only variables:

```cpp
auto p = std::make_unique<int>(10);
auto lmb = [p = std::move(p)] { return *p; };
```

Null pointer constant

The constant NULL can be implemented like:

```cpp
#define NULL 0
```

It may lead to ambiguity:

```cpp
int foo(int);
int foo(int*);
foo(NULL); // ambiguous: foo(int) or foo(int*)?
```

To solve this ambiguity, nullptr can be used instead of NULL, because nullptr is always considered a pointer type:

```cpp
int foo(int);
int foo(int*);
foo(nullptr); // always uses foo(int*)
```

Forward declarations for enums

Now you can declare an enum without specifying its elements, but you need to declare its underlying type. This can be useful in headers:

```cpp
enum number : unsigned;
std::ostream& operator<<(std::ostream &os, number &n);
```

So the enum can be specified in another file.

### Structured Bindings

Structured bindings provide an easy way to access all values of a tuple or user-defined type.

Consider the following struct:

```cpp
struct val {
    int a;
    char b;
    float c;
};
```

In pre-C++17 we would access each value individually:

```cpp
val aa = { 12, 'a', 3.123 };
auto i = aa.a;
auto f = aa.b;
auto g = aa.c;
```

Or unpack all of them with std::tie (which works on std::tuple/std::pair, so the members are packed into a tuple first):

```cpp
val aa = { 12, 'a', 3.123 };
int i; char f; float g;
std::tie(i, f, g) = std::make_tuple(aa.a, aa.b, aa.c);
```

In C++17 we can use automatic type deduction combined with structured bindings to access all values in a single line:

```cpp
val aa = { 12, 'a', 3.123 };
auto [i, f, g] = aa;
```

```cpp
std::map<int, int> myMap = {{1, 2}, {3, 4}};
for (const auto & [k, v] : myMap) {
    // k and v are directly usable here
}
```

### Init Statement in conditionals

This feature adds to if and switch the init statement that while and for already have. As with the while and for init statements, there is no leaking into the ambient scope, and this structure avoids the need for an explicit scope.
Consider the following leaking example:

```cpp
auto var = foo();
if (var != SUCCESS) {
    // ...
} else {
    // ...
}
```

In the example above, var leaks into the ambient scope. We could explicitly create a new scope, avoiding the leak:

```cpp
{
    auto var = foo();
    if (var != SUCCESS) {
        // ...
    } else {
        // ...
    }
}
```

Finally, with the new init statement we can avoid both the leak and the explicit scope:

```cpp
if (auto var = foo(); var != SUCCESS) {
    // ...
} else {
    // ...
}
```

In summary, the init statement works as follows:

```cpp
if (init; cond) { /* ... */ }
switch (init; cond) { /* ... */ }
```

Inline Variables

This feature extends the idea of header-defined inline functions to variables and constants. An inline variable is immediately defined and potentially repeated between translation units, and the compiler ensures the definitions will be the same between them. So now it is possible to define static or global variables at header level, letting the linker always refer to the same entity even if the definition is seen more than once due to multiple inclusion in different translation units. So it is possible to do:

```cpp
struct MyClass {
    static inline const int value = 123;
};
```

Instead of:

```cpp
struct MyClass {
    static const int value;
};
```

with a separate source file to define it:

```cpp
const int MyClass::value = 123;
```

### Constexpr if

This feature provides a static if, allowing branches of an `if` statement to be discarded at compile time based on a constant conditional expression.

```cpp
if constexpr (cond)
    statement1; // Discarded if cond is false
else
    statement2; // Discarded if cond is true
```

It is well suited to metaprogramming/template code.
**E.g.: a metaprogramming fibonacci:**

```cpp
template<int N>
constexpr int fibonacci() { return fibonacci<N-1>() + fibonacci<N-2>(); }
template<>
constexpr int fibonacci<1>() { return 1; }
template<>
constexpr int fibonacci<0>() { return 0; }
```

Now it can be implemented with constexpr if as follows:

```cpp
template<int N>
constexpr int fibonacci() {
    if constexpr (N >= 2)
        return fibonacci<N-1>() + fibonacci<N-2>();
    else
        return N;
}
```

### Folding expressions

This feature makes it possible to perform operations over a parameter pack. Since C++11 this could be done with recursion.

**E.g. a sum:**

```cpp
auto sum() { return 0; }

template<typename T1, typename ...T>
auto sum(T1 s, T... ts) { return s + sum(ts...); }
```

Now it can be done more simply as:

```cpp
template<typename ...Args>
auto sum(Args ...args) { return (args + ...); }
```

This feature supports the following operators:

```
+ - * / % ^ & | = < > << >> += -= *= /= %= ^= &= |= <<= >>= == != <= >= && || , .* ->*
```

Moreover, fold expressions have four forms, differing in whether they take an initial value and in the direction in which the operation is reduced.

```cpp
(... op pack)
```

Where `pack` is an unexpanded parameter pack and `op` is one of the operators listed above. Here the expression is reduced from left to right with `op`.

```cpp
(pack op ...)
```

Here the expression is reduced from right to left with `op`.

```cpp
(init op ... op pack)
```

Where `init` is an initial value. Here the expression is reduced from left to right with `op` and initial value `init`.

```cpp
(pack op ... op init)
```

Here the expression is reduced from right to left with `op` and initial value `init`.

The initial value is crucial when the binary operation `op` has no default value for an empty pack; only `&&`, `||` and the comma operator have one.
Our main motivation is to make new features, such as the ones listed above, available to the EPOS user and, if possible, to modernize EPOS, making it even more readable.

**Goals**

In order to make features from the latest C++ standard available (such as the ones listed above), we will upgrade the EPOS GCC compiler to 7.2.0, the newest version available.

**Methodology**

- Upgrade: update to the subsequent C++ standard version.
- Refactoring: correct errors due to the previous upgrade.
- Coding: implement and test an EPOS application with the new standard.
- Restart: repeat the process.

**Tasks**

1. Development of the project plan and literature review.
2. Compile GCC 7.2.0 for IA32 and make it work with EPOS.
3. Upgrade EPOS to the C++14 standard.
4. Upgrade EPOS to the C++17 standard.
5. Upgrade some EPOS feature to the C++17 standard.

**Deliverables**

- **D1** - Development of the project plan and literature review.
- **D2** - Toolchain used to make a GCC 7.2.0 compatible EPOS and any required changes on EPOS to work properly with the new GCC version.
- **D3** - EPOS compatible with C++14 and an EPOS application with a feature from C++14.
- **D4** - EPOS compatible with C++17 and an EPOS application with a feature from C++17.
- **D5** - Some EPOS feature with C++17.

**Schedule**

<table> <thead> <tr> <th>Task</th> <th>W1</th> <th>02/10</th> <th>W3</th> <th>W4</th> <th>06/10</th> <th>W5</th> <th>13/11</th> </tr> </thead> <tbody> <tr> <td>Task1</td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> </tr> <tr> <td>Task2</td> <td>x</td> <td></td> <td>x</td> <td>x</td> <td>x</td> <td>x</td> <td></td> </tr> <tr> <td>Task3</td> <td></td> <td></td> <td>x</td> <td></td> <td>x</td> <td></td> <td>D3</td> </tr> <tr> <td>Task4</td> <td></td> <td></td> <td></td> <td></td> <td>x</td> <td></td> <td>D4</td> </tr> <tr> <td>Task5</td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td>D5</td> </tr> </tbody> </table>

**Toolchain**

Our objective is to use C++17 on EPOS.
To achieve this, our first step is to update the EPOS toolchain. The EPOS toolchain is a script that creates a GCC cross-compiler for the IA32 architecture compatible with EPOS. The new toolchain was built with Binutils 2.28.1, GCC 7.2.0 and Newlib 2.5.0. We used the script toolchain.sh to build it.

To make sure that the toolchain was not based on GLibC, we used prove.sh to compile and link a simple program and list its symbols; that way we could see if there was any link with GLibC. We got the following output, which guarantees we are not using GLibC:

```
00400014 B __bss_start
00400004 d __CTOR_END__
00400000 d __CTOR_LIST__
0040000c d __DTOR_END__
00400008 d __DTOR_LIST__
00400014 D __edata
004001b8 B __end
00000050 T __epos_app_entry
```

We had some problems running the toolchain script: on some computers it gives an error related to libstdc++-v3 that we could not figure out. The specs of the computer we used are listed below, and the installed packages are listed in packages.txt (with more information in packages_infos.txt).

```
OS: Arch Linux x86_64
Host: 80E6 Lenovo Z40-70
Kernel: 4.13.8-1-ARCH
CPU: Intel i7-4500U (4) @ 3.000GHz
GPU: Intel Integrated Graphics
GPU: NVIDIA GeForce 840M
```

All files mentioned are available at https://gitlab.com/evandro-crr/epos-ia32-gcc-7.2.0.git and you can download GCC 7.2.0 for the IA32 architecture compatible with EPOS at https://gitlab.com/evandro-crr/epos-ia32-gcc-7.2.0/tree/master/ia32-gcc-7.2.0

**Refactoring EPOS**

To build EPOS with `-std=c++17` we had to refactor the source code to the new standard. It was gradual work: first we changed to `-std=c++11`, then to `-std=c++14` and finally to `-std=c++17`. The whole process can be seen at this Git repository. Each time we successfully compiled EPOS under a new standard we marked the commit with a tag.

To build the EPOS trunk with the new GCC using make, we need to specify that we want to use our new compiler.
There are two options:

- add epos-gcc-7.2.0/bin to $PATH, and change EPOS' makedefs with `ia32_COMP_PREFIX := epos-ia32-`
- change EPOS' makedefs with `ia32_COMP_PREFIX := <path-to-epos-gcc-7.2.0/bin>/epos-ia32-`

**Errors while compiling EPOS to C++11**

Here we describe all the errors we found while compiling EPOS with `-std=c++11` and executing make, what they mean, and how we solved them. All subsections are related to Git commits in our repository, so they can be replicated at will.

**Sections .ctors and .dtors changed names to .init_array and .fini_array**

Around six years ago, GCC adopted .init_array and .fini_array as the means to initialize global variables. This scheme had already been supported by glibc since 1999, and its main features are:

- It guarantees the priority ordering in the initialization of variables.
- It addresses an old quirk of .ctors, namely the fact that it is iterated backwards, making the accesses to disk really slow.

During global variable initialization, the function __do_global_ctors_aux, called by _init, iterates through the global constructors that were stored in the .ctors section. Due to the change of section names, the application fails during initialization: you can see that __CTOR_END__, where the iteration begins, points to an empty section. It should be something like this:

```
section .ctors
00400000 <__CTOR_LIST__>:
  400000:       ff                      (bad)
  400001:       ff                      (bad)
  400002:       ff                      (bad)
  400003:       ff 00                   incl   (%eax)
00400004 <__CTOR_END__>:
  400004:       00 00                   add    %al,(%eax)
```

But instead there was a section called .init_array. After some failed attempts to use the .init_array section directly, we decided to just disable init/fini_array with the flag --disable-initfini-array while compiling GCC, so that the compiler does not use any of the new features. But we still had to change the section names from ".ctors" and ".dtors" to ".init_array" and ".fini_array", even though we access the constructors and destructors the old way.
**src/architecture/ia32/ia32_crtend.c**

```c
void _init() __attribute__((section(".init")));

typedef void (*fptr)(void);

static fptr __CTOR_END__[1] __attribute__((section(".init_array"))) = { (fptr)0 };
static fptr __DTOR_END__[1] __attribute__((section(".fini_array"))) = { (fptr)0 };

static void __do_global_ctors_aux()
{
    fptr * p;
    for(p = __CTOR_END__ - 1; *p != (fptr) -1; p--)
        (*p)();
}

void _init()
{
    __do_global_ctors_aux();
}

void __epos_app_entry() __attribute__((section(".init"), weak, alias("_init")));
```

**src/architecture/ia32/ia32_crtbegin.c**

```c
void _fini() __attribute__((section(".fini")));

typedef void (*fptr)(void);

static fptr __CTOR_LIST__[1] __attribute__((used, section(".init_array"), aligned(sizeof(fptr)))) = { (fptr)(-1) };
static fptr __DTOR_LIST__[1] __attribute__((section(".fini_array"), aligned(sizeof(fptr)))) = { (fptr)(-1) };

static void __do_global_dtors_aux()
{
    fptr * p;
    for(p = __DTOR_LIST__ + 1; *p; p++)
        (*p)();
}

void _fini()
{
    static int initialized = 0;
    if(!initialized) {
        initialized = 1;
        __do_global_dtors_aux();
    }
}
```

If you want to know why this new scheme was adopted, we recommend reading this bugzilla discussion thread.

**union doesn't support flexible array member**

**Error log**

```
In file included from bignum.cc:3:0:
include/utility/bignum.h:29:20: error: flexible array member in union
     Digit data[];
                    ^
include/utility/bignum.h:33:20: error: flexible array member in union
     Digit data[];
                    ^
```

C specifies that, as a special case, the last element of a structure with more than one named member may have an incomplete array type; this is called a flexible array member. Flexible array members may be defined only in structures, not in unions. As a solution we used the C struct hack: we declared the array with size 0.
Actually this is a GCC non-standard extension: compiling in strict standard mode (with `-pedantic-errors`) it does imply an error, but even in this case it is possible to bypass it by declaring the array with size 1 without getting compile errors about this matter. As this is not an issue in this project, the array was declared with the C struct hack.

**calloc signature has changed**

**Error log**

```
In file included from malloc.cc:4:0:
include/utility/malloc.h: In function 'void* calloc(size_t, unsigned int)':
include/utility/malloc.h:21:19: error: declaration of 'void* calloc(size_t, unsigned int)' conflicts with built-in declaration 'void* calloc(long unsigned int, long unsigned int)' [-Werror=builtin-declaration-mismatch]
 inline void * calloc(size_t n, unsigned int bytes) {
                   ^
```

The calloc signature in C++ is `void* calloc(size_t, size_t)`, as `size_t` is always big enough to store the size of the biggest theoretically possible non-pointer object; with the newer compiler on this platform it is equivalent to `long unsigned int`. To avoid further issues with the maximum size of variables we changed the signature to `void* calloc(size_t, size_t)` (on our C++11 Git tag, though, it is still `long unsigned`), since the `size_t` type adapts its size accordingly.
**insert_first, insert_tail and insert_head were not declared in the scope**

**Error log**

```
include/utility/list.h:486:25: error: 'insert_first' was not declared in this scope, and no declarations were found by argument-dependent lookup at the point of instantiation [-fpermissive]
             insert_first(e);
             ~~~~~~~~~~~~^~~
include/utility/list.h:486:25: note: declarations in dependent base 'EPOS::S::U::Simple_List<EPOS::S::U::Data_Observer<EPOS::S::U::Buffer<EPOS::S::NIC, EPOS::S::Ethernet::Frame, void, EPOS::S::U::Dummy>>, short unsigned int>, EPOS::S::U::List_Elements::Singly_Linked_Ordered<EPOS::S::U::Data_Observer<EPOS::S::U::Buffer<EPOS::S::NIC, EPOS::S::Ethernet::Frame, void, EPOS::S::U::Dummy>>, short unsigned int>, short unsigned int>' are not found by unqualified lookup
include/utility/list.h:486:25: note: use 'this->insert_first' instead
include/utility/list.h:497:28: error: 'insert_tail' was not declared in this scope, and no declarations were found by argument-dependent lookup at the point of instantiation [-fpermissive]
```

Class Simple_Ordered_List is a specialization of Simple_List, where those methods are implemented, but the base class is dependent on the template parameters: under C++'s two-phase name lookup, names from a dependent base class are not found by unqualified lookup at template definition time, which is what the compiler note hints at with "use 'this->insert_first' instead". Explicitly qualifying the calls was enough to solve it.
**size of _pmc_handler is smaller than the loop bound**

**Error log**

```
include/architecture/ia32/pmu.h: In static member function 'static void EPOS::S::PMU_handler<VERSION>::perf_int_init() [with int VERSION = 6]':
include/architecture/ia32/pmu.h:1155:17: error: iteration 2 invokes undefined behavior [-Werror=aggressive-loop-optimizations]
             _pmc_handler[i] = 0;
             ^~~~~~~~~~~
include/architecture/ia32/pmu.h:1154:27: note: within this loop
             for (int i = 0; i < 8; i++)
                             ^
include/architecture/ia32/pmu.h: In static member function 'static void EPOS::S::PMU_handler<VERSION>::PMU_int_handler(const unsigned int&) [with int VERSION = 6]':
include/architecture/ia32/pmu.h:1201:34: error: iteration 2 invokes undefined behavior [-Werror=aggressive-loop-optimizations]
             if ((_pmc_handler[i] != 0) && ((perf_ovf_msr & (1ULL << i)) != 0))
                  ^
include/architecture/ia32/pmu.h:1198:29: note: within this loop
             for (Reg32 i = 0; i < CHANNELS - 1; i++)
                               ^
```

As far as we could understand, -O2 enables the -faggressive-loop-optimizations flag, under which GCC assumes that loops contain no signed integer overflows and no out-of-bounds array accesses; here the loop bounds exceed the size of _pmc_handler, so GCC flags the later iterations as undefined behavior. Rewriting those loops with pointer arithmetic prevents GCC from applying -faggressive-loop-optimizations to them, "solving" the error.

**array subscript is above array bounds: GCC may be the wrong one**

It is not the first time GCC has reported false-positive "array subscript is above array bounds" errors, but we could not be sure. Another anomaly is that any new access to this array causes the error. Changing from normal array access to pointer-arithmetic access seems to solve it. When disabling all optimizations and using normal array access there is no problem, which makes us conclude that it is caused by an optimization.
**Define Connection destructor as virtual**

Connection derives from multiple classes, and because one of them has virtual functions, the Connection destructor must be virtual. It is unsafe to delete an instance of a derived class through a pointer to a base class if the base class does not have a virtual destructor: there will be a leak if the derived class (i.e., Connection) has any dynamically allocated objects.

**suggest parentheses around operand of '!'**

This is a warning brought by -Wall, and it is GCC's way of telling the user to pay more attention to operations that people often get wrong. Making explicit with parentheses which operand belongs to the and-operation solves this.

**Reg8 was undefined**

```
include/tstp.h:210:9: error: 'typedef EPOS::S::CPU_Common::Reg8 EPOS::S::IEEE802_15_4::Reg8' is private within this context
     Reg8 length() const { return MTU; } // Fixme: placeholder
In file included from include/tstp.h:3,
                 from tstp.cc:6:
include/ieee802_15_4.h:18:23: note: declared private here
     typedef CPU::Reg8 Reg8;
```

The code here seemed to be unfinished; we think the author just forgot to add the complete path, so we completed it.

**GCC sees System::_preheap as char[16], so a reinterpret_cast was necessary**

GCC now checks the allocation size on placement new and throws a warning if you try to construct something bigger than the allocated space.

```
init_system.cc: In constructor 'EPOS::S::Init_System::Init_System()':
init_system.cc:45:42: error: placement new constructing an object of type 'EPOS::S::Segment' and size '20' in a region of type 'char [16]' and size '16' [-Werror=placement-new=]
     System::_heap_segment = new (&System::_preheap[0]) Segment(HEAP_SIZE, WHITE, Segment::Flags::SYS);
```

_preheap is declared as

```cpp
static char _preheap[(Traits<System>::multiheap ? sizeof(Segment) : 0) + sizeof(Heap)];
```

and the placement new is made inside the if below.
```cpp
// src/init/init_system.cc
if(Traits<System>::multiheap) {
    System::_heap_segment = new (&System::_preheap[0]) Segment(HEAP_SIZE, WHITE, Segment::Flags::SYS);
    System::_heap = new (&System::_preheap[sizeof(Segment)]) Heap(Address_Space(MMU::current()).attach(System::_heap_segment, Memory_Map::SYS_HEAP), System::_heap_segment->size());
}
```

So inside the if we are sure that _preheap has enough space for a Segment, but GCC doesn't see it that way. The alternative was to use a reinterpret_cast<> to bypass GCC's verification. This is the code now, with the reinterpret_cast:

```cpp
if(Traits<System>::multiheap) {
    Segment *heap = reinterpret_cast<Segment*>(&System::_preheap[0]);
    new (heap) Segment(HEAP_SIZE, WHITE, Segment::Flags::SYS);
    System::_heap_segment = heap;
    System::_heap = new (&System::_preheap[sizeof(Segment)]) Heap(Address_Space(MMU::current()).attach(System::_heap_segment, Memory_Map::SYS_HEAP), System::_heap_segment->size());
}
```

**Declare Adapter destructor as virtual**

**Error log**

```
In file included from kernel_binding.cc:4:0:
include/framework/agent.h: In member function 'void EPOS::S::Agent::handle_ipc()':
include/framework/agent.h:409:16: error: deleting object of polymorphic class type 'EPOS::S::Adapter<EPOS::S::Port<EPOS::S::IPC>>' which has non-virtual destructor might cause undefined behavior [-Werror=delete-non-virtual-dtor]
         delete comm;
                ^
```

This error is the same as in "Define Connection destructor as virtual", and the solution is the same too: making the Adapter destructor virtual.

**Perfect forwarding**

C++11 introduced the concept of rvalue references with &&; you can learn more about it in this link. That is what the following error is about.
**Error log**

```
include/framework/message.h:119:18: error: cannot bind rvalue reference of type 'const unsigned int&&' to lvalue of type 'const unsigned int'
         SERIALIZE(_parms, index, an ...);
         ~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~
include/system/meta.h:196:6: note: initializing argument 3 of 'void EPOS::S::SERIALIZE(char*, int, const T&&) [with T = unsigned int]'
 void SERIALIZE(char * buf, int index, const T &&a) {
      ~~~~~
```

SERIALIZE has as its third argument a `const T &&`, so it necessarily needs an rvalue; in order to pass an lvalue we use a `static_cast<T&&>` to turn it into an rvalue. Another problem is that the function `void Message::out(const Tn& ...)` calls SERIALIZE forwarding a parameter pack. Because it is a pack, we cannot name the type of each parameter directly, so it was necessary to implement the function `move` below, which deduces the type and then casts it to an rvalue.

```cpp
template <class T>
inline T&& move(T& a) { return static_cast<T&&>(a); }
```

The function move was used to deduce the type of `an` for the cast. Change necessary in the SERIALIZE call:

```diff
- SERIALIZE(_parms, index, an ...);
+ SERIALIZE(_parms, index, move(an)...);
```

**C++11 Example**

Just to use some features from C++11, the application producer_consumer was changed by turning the consumer function into a lambda.
```cpp
#include <utility/ostream.h>
#include <thread.h>
#include <semaphore.h>
#include <alarm.h>

using namespace EPOS;

const int iterations = 100;

OStream cout;

const int BUF_SIZE = 16;
char buffer[BUF_SIZE];
Semaphore empty(BUF_SIZE);
Semaphore full(0);

int main()
{
    auto consumer = []() {
        int out = 0;
        for(int i = 0; i < iterations; i++) {
            full.p();
            cout << "C<" << buffer[out] << "\t\n";
            out = (out + 1) % BUF_SIZE;
            Alarm::delay(5000);
            empty.v();
        }
        return 0;
    };

    Thread * cons = new Thread(consumer);

    // producer
    int in = 0;
    for(int i = 0; i < iterations; i++) {
        empty.p();
        Alarm::delay(5000);
        buffer[in] = 'a' + in;
        cout << "P->" << buffer[in] << "\t\n";
        in = (in + 1) % BUF_SIZE;
        full.v();
    }

    cons->join();

    cout << "The end!" << endl;
}
```

The Thread constructor expects a function pointer `int (*)(Tn ...)` as its first parameter, but by using `auto` the type of consumer is deduced as `main()::<lambda()>`. Instead of using `auto`, consumer could be declared as `int (*consumer)()`, but that is far less readable. So an alternative was to change the Thread constructor as follows.

**old definition**

```cpp
template<typename ... Tn>
Thread(int (* entry)(Tn ...), Tn ... an);
template<typename ... Tn>
Thread(const Configuration & conf, int (* entry)(Tn ...), Tn ... an);
```

**new definition**

```cpp
template<class T, typename ... Tn>
Thread(T entry, Tn ... an);
template<class T, typename ... Tn>
Thread(const Configuration & conf, T entry, Tn ... an);
```

By just being flexible with the parameter `entry` and assigning it to a function pointer as `int (*_entry)(Tn...) = entry;` in the constructor's body, you can pass any lambda that converts to `int (*)(Tn...)` (i.e., any capture-less lambda).

**Errors while compiling EPOS to C++14**

In this section we describe the only error we found while compiling EPOS with `-std=c++14` and executing `make`, as well as its meaning and how we solved it.
**Bypass operator delete(void *ptr, long unsigned size)**

**Error log**

```
lib/libsys_ia32.a(thread.o): In function `EPOS::S::Thread::~Thread()':
thread.cc:(.text._ZN4EPOS1S6ThreadD2Ev+0x178): undefined reference to `operator delete(void*, unsigned long)'
```

C++14 adds two sized delete signatures: `void operator delete(void* ptr, std::size_t size)` and `void operator delete[](void* ptr, std::size_t size)`. The idea is to minimize the overhead of deleting objects by handing the programmer control over how many bytes will be freed. When using the unsized delete operators, the object structure keeps its size in bytes, to be used when the operation is called. With the new ones, one could change how the object is structured so that it does not store this information, saving space and time (because it will not be necessary to search for its size) by specifying the size in bytes of the structure being deleted.

But to use this in EPOS it would be necessary to adapt the Heap implementation so it would not store object sizes, as well as to add support for the new delete operators. Because our main goal is to compile EPOS with C++14, we just added some simple functions to bypass the need for the sized deletes, as follows:

```cpp
// include/utility/malloc.h
void operator delete(void * ptr, size_t);
void operator delete[](void * ptr, size_t);

// include/utility/malloc.h
void operator delete(void * object, size_t) { return free(object); }
void operator delete[](void * object, size_t) { return free(object); }
```

Note that the error says the undefined function is `delete(void*, unsigned long)` but we actually implemented `delete(void*, std::size_t)` (not in our c++14 tag); this happens because the delete signature uses `std::size_t`, whose current implementation is `unsigned long` (as already discussed).

**Errors while compiling EPOS to C++17**

In this section we describe the only error we found while compiling EPOS with `-std=c++17` and executing `make`.
We also describe its meaning and how we solved it.

**`register` keyword was deprecated in C++11**

**Error log**

```
In file included from include/cpu.h:119:0,
                 from include/utility/spin.h:6,
                 from include/utility/heap.h:8,
                 from heap.cc:3:
include/architecture/ia32/cpu.h: In static member function 'static T EPOS::S::CPU::tsl(volatile T&)':
include/architecture/ia32/cpu.h:343:20: error: ISO C++1z does not allow 'register' storage class specifier [-Werror=register]
         register T old = 1;
                    ^~~
include/architecture/ia32/cpu.h: In static member function 'static T EPOS::S::CPU::finc(volatile T&)':
include/architecture/ia32/cpu.h:350:20: error: ISO C++1z does not allow 'register' storage class specifier [-Werror=register]
         register T old = 1;
                    ^~~
include/architecture/ia32/cpu.h: In static member function 'static T EPOS::S::CPU::fdec(volatile T&)':
include/architecture/ia32/cpu.h:357:20: error: ISO C++1z does not allow 'register' storage class specifier [-Werror=register]
         register T old = -1;
                    ^~~
include/architecture/ia32/cpu.h: In static member function 'static int EPOS::S::CPU::bsf(EPOS::S::CPU_Common::Log_Addr)':
include/architecture/ia32/cpu.h:539:31: error: ISO C++1z does not allow 'register' storage class specifier [-Werror=register]
         register unsigned int pos;
```

The error here is about the deprecated `register` keyword. It is a storage class specifier, a part of the decl-specifier-seq of a name's declaration syntax. Together with the scope of the name, storage class specifiers control two independent properties of the name: its storage duration and its linkage. The `register` specifier indicates automatic storage duration. And the most important thing to understand about the `register` keyword: its presence could be used as a hint for the optimizer that the declared variable will be heavily used, i.e., a hint to store its value in a CPU register.
In C, the address of a `register` variable cannot be taken, but until C++17 a variable declared `register` was semantically indistinguishable from one declared without any storage class specifier. The hint given by `register` could be ignored, and in most implementations it would be ignored if the address of the variable was taken. In C++17, unlike C and previous C++ standards, variables cannot be declared `register` at all.

We then removed the `register` keyword from the declarations above. This change does not have much impact, because the keyword was just a hint to the optimizer with very specific use cases.

**C++17 Example**

Just to use some features from C++17, we created a `fibonacci` application that uses `constexpr if`.

```cpp
#include <utility/ostream.h>

using namespace EPOS;

OStream cout;

template<int N>
int fibonacci() {
    if constexpr (N >= 2)
        return fibonacci<N-1>() + fibonacci<N-2>();
    else
        return N;
}

int main()
{
    cout << fibonacci<44>();
    return 0;
}
```

To use a `constexpr if` statement, the condition must be a constant expression; to achieve this we used templates.

**Conclusion**

At the end of our project we reached some unexpected conclusions. We hoped that the new standard could bring improvements to the current design of EPOS, but we could not find anything that would make a big impact on embedded systems specifically. Although this was a little disappointing, we do not think all this work was for nothing: it can be the first step toward upgrading EPOS to a future standard that really brings the wanted improvements.

**Future Works**

There are some aspects of our project that we could not fix while working on it. Our toolchain, as already mentioned, only worked on a specific PC configuration, and we could not replicate it on other machines. One missing part of our work is a comparison between the assembly code generated by GCC 4.4.4 and GCC 7.2.0; these results would show whether the new compiler is worth using.
We also hope that anyone who wants to upgrade EPOS to a new standard in the future can use our work as a base and have less trouble making the transition.
2024-11-24
2024-11-24
8ca4434de4b110ef35be0f1b0e56209ba2918a6a
D-LITE: Distributed Logic for Internet of Things sErvices
Sylvain Cherrier, Yacine Ghamri-Doudane, Stéphane Lohier, Gilles Roussel. The 2011 IEEE International Conference on Internet of Things (iThings 2011), Oct 2011, Dalian, China. 9 pp. HAL Id: hal-00693377, https://hal.science/hal-00693377, submitted on 2 May 2012.

D-LITe : Distributed Logic for Internet of Things sErvices
Sylvain Cherrier*, Yacine M. Ghamri-Doudane†, Stéphane Lohier* and Gilles Roussel*
* Université Paris-Est, Institut Gaspard Monge (LIGM), 77454 Marne-la-Vallée Cedex 2. Email: [firstname.lastname]@univ-paris-est.fr
† Ecole Nationale Supérieure d'Informatique pour l'Industrie et l'Entreprise (ENSIIE), 1 square de la résistance, 91025 Evry Cedex. Email: yacine.ghamri@ensiie.fr

Abstract—Smartphones, PDAs, sensors, actuators, Phidgets and Smart Objects (i.e. objects with processing and networking capabilities) are more and more present in everyday life. Merging all these technologies with the Internet is often described as the 'Internet of Things' (IoT). In the IoT vision, the Things around us provide a pervasive network of interacting and interconnected devices.
However, building IoT applications is long and arduous work, reserved for specialists, requiring specific knowledge of network protocols and programming languages. The lack of widespread and easy-to-configure solutions is an obstacle for the development of this area. A universal framework, offering simplification and standardization, could facilitate the emergence of this promising field in terms of applications and business. IoT needs a solid foundation for rapid, simple development and deployment of new services. In this paper, we present D-LITe, a universal framework for building IoT applications over heterogeneous sets of small devices. D-LITe offers solutions for deploying an application's logic and executing it on Smart Objects despite their heterogeneity. An implementation of D-LITe on tiny devices, such as TelosB motes, shows that our framework is realistic even with the constraints of such devices. Keywords—Web of Things; Services Choreography Architecture; Distributed logic.

I. INTRODUCTION

Objects become more and more clever, interacting devices. Manufacturers introduce processing power and networking technologies into common objects, leading to the concept of Smart Objects and to the IoT (the Web of Things (WoT) is the web version of the IoT, easier to use for end-users). In this paradigm, Things offer a digital environment, sensing and acting on the real world, and users are able to interact with this digital environment. "Home automation" is an example of the IoT, in which people organize services offered by things present in their living environment. But there is one main issue that still prevents the rise and wide deployment of the IoT: the multitude of Smart Objects (such as sensors) uses different languages (C, NesC, Java...), different Application Programming Interfaces (Arduino, ZigBee Application...), different Operating Systems (TinyOS, Contiki...)
through different network protocol stacks (IEEE 802.15.4, Zigbee, 6LowPAN) that may be mutually incompatible, unlike the widely spread IP network used by more powerful objects. Creating an application dealing with each kind of smart object becomes specific work, performed for a specific type of hardware (operating system, network technology) and with specific programming tools (languages, APIs). It also creates the need for a gateway so that objects can be accessed from the Internet and communicate with other objects. Another issue is the deployment of applications, mainly consisting of ROM flashing on each smart object, which requires human intervention and manipulation. This leads to important time and cost overheads. Creating IoT applications is complex and time-consuming, hardware-dependent, and hardly scalable. In this paper we intend to solve these problems by proposing a universal framework and architecture: D-LITe, a new Distributed Logic for Internet of Things sErvices creation and deployment. D-LITe makes it possible to design simple, scalable and easy-to-maintain applications and to deploy them over heterogeneous platforms. The remainder of this paper is organized as follows: first, we present the related work (Section II) and the background (Section III) attached to our solution. The overall design of D-LITe is described in Section IV, while Section V focuses on the protocols and languages used by D-LITe. Section VI deals with the implementation and validation of D-LITe. Finally, concluding remarks and future research directions are given.

II. RELATED WORKS

The Internet of Things has many definitions [6]. The IoT paradigm incorporates other technologies such as pervasive or ubiquitous computing as well as ambient intelligence (AmI) [7], [12]. To realize IoT applications, programmers or users have to deal with multiple devices that are not interoperable. There are many approaches on how to program such network applications.
We consider macro programming, described in [17] as "programming the sensor network as a whole, rather than writing low-level software to drive individual nodes". Many "objects that think" come with processing capabilities, but no code to use them. "For years, closed networks" were "deployed for a specific application... we argue that the next generation WSN require customizable architecture" [24]. Giving every node the ability to interact with any other seems to be a solid basis for building distributed applications. The authors in [24] propose to give standard access to nodes to offer such a customizable architecture. Every node in IoT applications should be reachable and usable. However, no common architecture is provided. The ZigBee Alliance [4] has developed adequate protocols for sensors. ZigBee is a complete solution, based on the use of IEEE 802.15.4 at the lower layer. It defines the remainder of the network architecture up to the service level (the so-called ZigBee Profiles). For example, ZigBee Home Automation is one of those Profiles, "enabling smart homes that can control appliances, lighting, environment, energy management, and security as well as expand to connect with other ZigBee network" [5]. Nevertheless, many technologies (smartphones, PCs, sensors, actuators...) are involved in home automation, so that a gateway is mandatory to connect to other networks (the Internet). An end-to-end communication could be degraded by such a gateway. A protocol's behaviour, the exchanges between nodes, and the size of the exchanged messages can be so different that their translation may be particularly difficult. The specialized protocol's dynamics on one side may be unsuitable for the other side. All these differences can be difficult to resolve, or simply very penalizing in terms of adaptation, effectiveness, and response time. Dynamicity, scalability and reconfiguration are also issues.
Users may want to use the computing capabilities of smart objects and take advantage of the versatility of programmable devices. Changing the interactions of household objects when integrating a new device, or simply changing the behaviour of the whole application, is an expected asset of the IoT. Until now, human intervention is still required to set up and update nodes. Reprogramming "over the air" (OAP, Over the Air Protocol) answers that issue [30]. OAP is proposed in SYNAPSE [25], Deluge [18] or Dynamic TinyOS [22]. SYNAPSE and Deluge mainly focus on how to organize a reliable transfer over a non-reliable wireless network, while Dynamic TinyOS deals with efficient software updating, but is strongly coupled with the TinyOS operating system. Our aim is to provide a solution loosely coupled to the network protocol stack, operating system, language and hardware.

## III. BACKGROUND AND VISION

Like a generic virtual machine for a high-level programming language\(^1\), D-LITe constitutes a basic framework for building simple and universal applications. Many concepts from quite distant areas are combined in D-LITe to give the end-user a simple way to design and deploy logical applications on nodes. \(^1\)The JVM for Java, for example, or Parrot for Perl.

### A. From Internet to Smart Objects

D-LITe nodes need to be accessed from the Internet. The IPv6 protocol seems to be a good candidate for that purpose, because this standard is the most likely to be used to deal with billions of nodes. By using header compression mechanisms, 6LowPAN [27] proposes a solution for IPv6 compatibility over IEEE 802.15.4 networks. It gives universal access to the data collected by sensors and the actions done by actuators. Even though 6LowPAN is restricted to supporting only UDP, nodes are able to offer all kinds of the Web's well-known services because it uses IP. 6LowPAN turns motes from connected data collectors into real small data servers.

### B.
Accessing services: SOAP, REST and CoAP

In an end-to-end communication, motes can be considered as service providers. To access the provided services, Service Oriented Architecture (SOA) is a well-known solution [14]. The idea of using this paradigm for sensor networks is presented in TinySOA [24]. SOA introduces loose coupling between services and applications as well as hardware independence. Many protocols realize SOA. One of them is SOAP [3], but it is a very verbose protocol (i.e. consuming bandwidth and requiring important processing). Sensor networks have very limited bandwidth, which is why D-LITe is organized according to the REST approach. The REST architecture [15] is an alternative to SOAP for distributed applications, and has many advantages. Using standard HTTP methods, REST is lightweight and simple to adapt to our purpose. However, a major issue remains: because of smart objects' memory size, TCP and, moreover, HTTP (needed by the REST architecture) are very hard to fit into constrained devices [6]. To address this issue, CoAP [26] offers the same characteristics as REST: CoAP "extends the REST architecture to a suitable form for the most constrained nodes" of sensor networks [28]. Furthermore, CoAP is built over 6LowPAN, and already exists in the Contiki operating system [11] for Wireless Sensor Networks. By implementing HTTP over UDP, and using compression of the HTTP methods, CoAP is designed to simply permit translations between standard, universal REST commands from the Internet and a 6LowPAN network, while being particularly suitable for the limited payload of smart objects.

### C. Services: Choreography and Orchestration

We consider that an important part of IoT applications can be designed as a collaboration between nodes. The whole application's logic can be spread into small autonomous parts on each node.
To combine the services offered by motes (data collected or possible actions), SOA is divided into two approaches: Services Orchestration or Services Choreography [23], [10]. They mainly differ in the centralized approach of orchestration compared to the collaborative form of choreography. D-LITe uses the Choreography concept. In our Choreography, there is no central controller; each node is autonomous. The node knows what to do, and reacts to context changes. In D-LITe's choreography, each node is like a dancer. Each dancer knows its steps, and reacts to events of its immediate environment. There is no centralized control by any supervisor; decisions are mainly made at the lowest level. On the contrary, in the usual definition of services orchestration, a central point would control all exchanges. The central point would call the services offered by the nodes and compute the results. No node would act on its own. Because it uses choreography, D-LITe delegates small parts of the global application to each participant, using processing capacities closer to the needs, saving bandwidth and therefore energy.

D. Design Patterns used in D-LITe

D-LITe uses the Gang of Four (GoF) [16] Design Patterns (DP) Observer and Strategy. Some protocols propose the Observer DP in sensor networks. Using such protocols, nodes can subscribe to others that publish data, as in MQTT [20] or in TinyCops [19]. However, MQTT is not based on 6LowPAN but on Zigbee, and TinyCops is a TinyOS module. Consequently, they are not usable in a heterogeneous environment. D-LITe implements this DP on its own. The Strategy DP dynamically changes an object's behaviour. Basically, Strategy delegates one object's logic to another object, chosen from a set of objects each implementing a different version of the same command. This can be managed and changed "on the fly". D-LITe is inspired by Strategy. D-LITe installs a static piece of code on each node.
That code offers access to a dynamic part of the application that can be configured or changed. This dynamic part is under the control of a rule analyzer. The rule analyzer can execute a logical description of the node's expected behaviour, depicted by a "set of rules". This logical description is variable, can be set through the network and can be dynamically changed.

E. Finite State Transducers (FST)

To describe our choreography, a tool to program each node is required. As presented in Section IV-C, we chose to use the macro-programming approach shown, for example, in [29], [21]. D-LITe uses Finite State Transducers (FSTs) to describe the application's logic. Describing this logic with an automaton rather than a programming language is somewhat limited, but has definite advantages: universality, a very low memory footprint for the parser, and a very concise expression of the description. Automata are hardware independent, text-based, and easy to learn. FSTs are Finite State Machines (FSMs) with an additional output alphabet. They are often used in Natural Language Processing. In D-LITe, the input and output alphabets are the messages exchanged by nodes through the network. States are the node's reactions to received messages. The idea of using a transducer to program a sensor, and the rule-based approach, were inspired by the papers of J. Balsiart et al. [8], [9].

IV. D-LITE : AN ARCHITECTURE TO DEPLOY APPLICATION'S LOGIC

D-LITe is organized to enable the writing of small cooperating units realizing an application, like the cells of a spreadsheet are used in end-user development.

A. Overview of D-LITe Distributed Framework

D-LITe is a distributed framework for realizing IoT applications. It consists of building applications as a collaboration of smaller logical units. A mote\(^2\) is more than a simple sensor or actuator. Even with its small computing capabilities, this kind of node can do additional processing. For this purpose, D-LITe is installed on each node.
As it uses standard protocols (IPv6 and REST), D-LITe offers universal access, hiding the specificities of the different hardware used. The REST access given by D-LITe is used to deploy orders (configuration, FST) on each node. As shown in Figure 1:
1) An end-user collects information about node capabilities.
2) He expresses his need: he describes a sequence of interactions between elements.
3) This sequence is then transformed into a set of FSTs (one FST per node).
4) Each node receives its own FST and other configuration information.
The D-LITe architecture allows an end-user to transmit rules and configuration to each node. Each D-LITe enabled node contains a rule analyzer to execute the FST. D-LITe nodes also have a messaging service to interact with each other (Figure 2). \(^2\)Like Crossbow TelosB or Imote, Oracle SunSpot, or Arduino Uno. D-LITe is mainly designed for motes, even if some more powerful hardware is supported.

B. Distributed Application Choreography

D-LiTe is based on the idea that Internet of Things applications can often be seen as a choreography of Finite State Machines. D-LiTe makes it possible to depict each part of the application's logic as FSTs which will be executed on several nodes. Each FST (a set of rules) can be dynamically and quickly sent to the proper node. An end-user has to organize his thoughts to describe his application as a choreography of Transducers, just like he organizes his formulas in the cells of a spreadsheet (Figure 1). When a node receives a message or changes state, it affects other nodes, just like cells in a spreadsheet react to changes in other cells; updating their content results in a chain reaction on the depending cells (Figure 2). As the choreography starts, every node may receive a message, because something has happened.
That message is inspected by the algorithm in charge of the FST's execution. If a rule matches the current state and the received message, the node's state changes, and the output message defined in the rule is sent to the Observers.

C. Node's Logic Representation using Transducers

A Finite State Transducer has a formal representation as a 6-tuple \( T(Q, \Sigma, \Gamma, I, F, \delta) \). D-LiTe defines the meaning of each element as follows:
- \( Q \) represents all the states of a particular node,
- \( \Sigma \) contains the input messages handled by a particular node,
- \( \Gamma \) contains the output messages a particular node can send,
- \( I \) is the initial state (only one in D-LiTe),
- \( F \) stands for the final states,
- \( \delta \) contains the transitions (which are our "set of rules").
The \( \epsilon \) element stands for empty. The main adaptation introduced in D-LiTe is the use of input messages and output messages in place of alphabets. Figures 3, 4 and 5 are simple examples of applications, using two types of nodes: switches and lights.

1) Example 1: a switch and a light: In the very simple example given in Figure 3, a switch (sensor) and a light (actuator) can be set up this way: when the light receives an "up" message, it moves to the "ON" state; when it receives a "down" message, it sets its state to "OFF". When somebody presses the switch button, its state moves to "Pressed", and it sends an "up" message. Similarly, when somebody presses it again, it moves to the "Released" state, and it sends a "down" message. The two FSTs representing these two nodes are:
- for the switch: \( Q = ("Pressed", "Released") \), \( \Sigma = ("Press","Release"), \) \( \Gamma = ("up","down") \), \( I = ("Released"), F = (\epsilon) \) and \( \delta \) is described in Figure 3.
- for the light: \( Q = ("on","off"), \) \( \Sigma = ("up","down"), \) \( \Gamma = (\epsilon), I = ("off"), F = (\epsilon) \) and \( \delta \) is described in Figure 3.
This is the way an end-user can simply program a standard switch/light pair.
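The rule-matching semantics above can be sketched in a few lines. The following Python is only an illustration of the FST semantics of Example 1; the class and all names are ours, not part of the actual (C/Contiki) D-LITe implementation:

```python
# Minimal sketch of D-LITe's FST semantics for Example 1 (switch + light).
# Illustrative code only, not the actual D-LITe implementation.

class FST:
    def __init__(self, initial, rules):
        self.state = initial
        self.rules = rules          # {(state, in_msg): (next_state, out_msg)}
        self.observers = []         # nodes receiving our output messages

    def receive(self, msg):
        rule = self.rules.get((self.state, msg))
        if rule is None:            # no matching rule: message is ignored
            return
        self.state, out = rule
        if out is not None:         # epsilon output: nothing is sent
            for obs in self.observers:
                obs.receive(out)

# Switch: Q = {Pressed, Released}, Sigma = {Press, Release}, Gamma = {up, down}
switch = FST("Released", {
    ("Released", "Press"):   ("Pressed",  "up"),
    ("Pressed",  "Release"): ("Released", "down"),
})
# Light: Q = {on, off}, Sigma = {up, down}, Gamma = epsilon
light = FST("off", {
    ("off", "up"):   ("on",  None),
    ("on",  "down"): ("off", None),
})
switch.observers.append(light)      # the light observes the switch

switch.receive("Press")             # hardware event -> switch sends "up"
assert light.state == "on"
switch.receive("Release")           # -> switch sends "down"
assert light.state == "off"
```

Sending an output to every registered observer is exactly the Observer pattern role described in Section III-D; the rule table plays the role of \( \delta \).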
2) Example 2: Introducing a new state: Figure 4 introduces a new behaviour not planned in the light-switching process: a delay. Our application offers a Time service (a time message is sent every 10 seconds). When receiving the "down" message, the light stays on and waits for a "time" message from the Time service. Then, on receiving this "time" message, the light switches off. To realize this feature, we introduce a logical state into the on/off light process: a Wait state. This state has no physical action but represents the fact that the light is now waiting for another message. This is a logical state. After receiving this time event, the light moves to the "Off" state, and really switches off. To implement this example, the switch FST remains unchanged. However, the light FST becomes: \( Q = ("on","off","waiting"), \) \( \Sigma = ("up","down","time"), \) \( \Gamma = (\epsilon), I = ("off"), F = (\epsilon) \) and \( \delta \) is described in Figure 4. We also add the light as an observer of the Time service.

3) Example 3: a semantic loss: Figure 5 represents a 3-way (or more) switching. The user merely needs to define a single message (for example "action") to be sent by each switch when the button is used (vs. two messages in the previous examples). No matter what the button's position now is (pressed or released), the light's state has to change. For this purpose, the user specifies that "Press" or "Release" events on the switch send a unique message: "action" (a message with poor semantics). No other state is needed on the switch. The light subscribes to all switches, and each switch sends only the "action" message when pressed or released. On receiving the "action" message, the light changes state from "on" to "off" and vice versa. The corresponding two FSTs are:
- For each switch: $Q = \{ \text{“Nop”} \}$, $\Sigma = \{ \text{“Press”}, \text{“Release”} \}$, $\Gamma = \{ \text{“action”} \}$, $I = (\text{“Nop”})$, $F = (\epsilon)$ and $\delta$ is described in Figure 5.
- For the light: $Q = \{ \text{“on”}, \text{“off”} \}$, $\Sigma = \{ \text{“action”} \}$, $\Gamma = (\epsilon)$, $I = (\text{“off”})$, $F = (\epsilon)$ and $\delta$ is described in Figure 5.

D. Specific States and Messages

6LowPAN and CoAP make our nodes able to communicate with each other and to be dynamically configured. The use of an FST is a simple way to express a sequence of logical actions. But in spite of these capabilities, our architecture does not yet really sense and act on the real world: this is only logic. To make a link between D-LITe and the real environment, we propose two types of messages and states: Real or Logical (Figure 2). Logical messages or states are useful for reasoning. For example, if we want a light not to switch off immediately when we push a button, we can introduce a logical state: "waiting" (see Figure 4). This state has no impact on the real world but merely means that someone has started the process of switching off. Then a Real State is used: "on" and "off" are such states. When the FST moves to them, the light really switches "on" or "off". In Figure 4, the rules state that, on receiving the time message while being in the "waiting" state, the light really switches off by going into the "off" state. Messages are treated the same way. Many of them are logical messages defined by the user to describe the logical steps in his sequence of actions. The others are real messages sent by the hardware (i.e. the sensing part of the sensor). For example, "Press" and "Release" (cf. Figures 3, 4 and 5) are Real Messages sent by the hardware to its FST each time someone uses the switch's button. The only interaction between D-LITe and the hardware comes from the notion of Real Messages and Real States. That is why D-LITe is loosely coupled to the hardware. Thus, in our FST $T(Q, \Sigma, \Gamma, I, F, \delta)$, only a few elements of $Q$ and $\Sigma$ are in contact with the real world. All Real Messages and Real States are detected during the discovery phase.
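The delayed switch-off of Example 2 (Figure 4), with its logical "waiting" state bridging two real states, can be sketched as a plain rule table. This is an illustrative Python sketch under the paper's semantics, not D-LITe code:

```python
# Sketch of Example 2's delayed switch-off: a logical "waiting" state
# bridges the real "on" and "off" states (illustrative, not D-LITe code).

light_rules = {
    ("off",     "up"):   "on",        # real state: light switches on
    ("on",      "down"): "waiting",   # logical state: no physical action yet
    ("waiting", "time"): "off",       # real state: light really switches off
}
REAL_STATES = {"on", "off"}           # states with a physical effect

def step(state, msg):
    # Unmatched (state, message) pairs leave the state unchanged.
    return light_rules.get((state, msg), state)

state = "off"
for msg in ("up", "down"):            # user switches on, then presses "off"
    state = step(state, msg)
assert state == "waiting"             # still lit, waiting for the Time service
state = step(state, "time")           # "time" message arrives (every 10 s)
assert state == "off" and state in REAL_STATES
```

Only transitions into `REAL_STATES` would touch the hardware; the "waiting" step is pure logic, which is exactly the Real/Logical distinction of this section.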
In Figures 3, 4 and 5, Real States and Real Messages are written in green and bold.

Table I. SALT message format.
<table>
<thead>
<tr>
<th>variable</th>
<th>value</th>
<th>with</th>
<th>description</th>
</tr>
</thead>
<tbody>
<tr>
<td>order</td>
<td>init</td>
<td>state=xxx</td>
<td>must initialize state to 'xxx'</td>
</tr>
<tr>
<td>order</td>
<td>rule</td>
<td>Rule Message</td>
<td>a rule the transducer must obey: see details below</td>
</tr>
<tr>
<td>order</td>
<td>link</td>
<td>uri=[aa:bb::cccc]</td>
<td>uri contains the IPv6 address of one observer</td>
</tr>
<tr>
<td>input</td>
<td>xxx</td>
<td></td>
<td>messaging service: a node (or the hardware) sends 'xxx' to this node</td>
</tr>
</tbody>
</table>

Table II. Rule message description.
<table>
<thead>
<tr>
<th>variable</th>
<th>description</th>
</tr>
</thead>
<tbody>
<tr>
<td>state=xxx</td>
<td>if the current state is &quot;xxx&quot;...</td>
</tr>
<tr>
<td>msg=yyy</td>
<td>...and the &quot;yyy&quot; message is received...</td>
</tr>
<tr>
<td>Nstate=zzz</td>
<td>...then the node moves to the &quot;zzz&quot; new state...</td>
</tr>
<tr>
<td>Smsg=aaa</td>
<td>...and sends the &quot;aaa&quot; message to its Observers.</td>
</tr>
</tbody>
</table>

V. D-LITE LANGUAGE (SALT)

D-LITe is organized to allow the design and the deployment of applications depicted as a choreography of logical Finite State Transducers. D-LITe proposes a description language (SALT: Simple Application Logic description using Transducers) to configure nodes and allow them to communicate.

A. SALT description

On each D-LITe node, a rule analyzer and communication features are installed. To describe his application's logic, a user needs a language to:
- Delete all settings, i.e. start a new application.
- Set the Initial State, i.e. set the FST's starting state.
- Express each Rule, i.e. describe the node's FST.
- Attach Observers, i.e. allow a node to send messages to a specific list of other nodes.
There are also other needs. A node must be able to:
- Describe itself, i.e. give its real messages/states during the discovery phase.
- Communicate with others, i.e. send messages to its observers, and receive messages from other nodes.

B. SALT Messages format

SALT uses a very simple textual form to express and fulfill all the above-mentioned tasks. The use of this format instead of standardised ones, such as JSON for instance, is motivated by the fact that parsers for standardised formats are usually heavy and could not fit within the node's memory limitations. Hence, we use a name=value form to limit bandwidth and memory consumption. Names and values should not be more than 6 characters long (in our Contiki implementation). The format used by SALT messages is described in Table I. D-LITe's FST is fully described by its initial state (order is set to init) and its set of rules (order is set to rule). The observers list is given by the order link. As the choreography starts, messages are exchanged between nodes using the input message. A Rule message is a one-liner (see Table II) that describes one FST transition. All the states and the input and output alphabets are deduced from the complete set of rules.

C. SALT Usage

SALT messages are exchanged between the end-user or a node and other nodes (Figure 2). We use CoAP [26], complying with Internet standards, to have a small overhead and to be accessible from everywhere. Therefore, the Capillary Internet can be reached through CoAP. Table III shows a complete list of SALT messages that are sent to a node, using CoAP's PUT method.

D. CoAP Methods

CoAP methods are used in D-LITe for the following purposes:
- DELETE: clean the FST, the current state, and the observers list.
- GET: obtain the node's description (i.e. the Real states/messages supported by the hardware).
- PUT: give configuration orders to the node (i.e. initial state, observers list, and the FST's rules), using SALT "order" messages.
- POST: messaging service, to be managed by the FST, using SALT "input" messages.

Figure 6. D-LITE Node's services.
Each node has its FST sent by the user, and receives and sends messages through standard network protocols.

Table III. SALT messages for a D-LITe node: after resetting the node, the user sets its initial state, its FST, and its observer.
<table>
<tbody>
<tr>
<td>order=init&amp;state=rlsd</td>
</tr>
<tr>
<td>order=rule&amp;state=rlsd&amp;msg=push&amp;Nstate=prsd&amp;Smsg=up</td>
</tr>
<tr>
<td>order=rule&amp;state=prsd&amp;msg=push&amp;Nstate=rlsd&amp;Smsg=down</td>
</tr>
<tr>
<td>order=link&amp;uri=[fe80::f0:1:303]</td>
</tr>
</tbody>
</table>

E. An Example: Simple Configuration of a Node

Let us take the simple example of a switch controlling another device. In this case, the following SALT messages are exchanged with the switch's node. First, an end-user uses the GET CoAP method to retrieve information about the node, especially its real states and messages. After designing the choreography, the end-user broadcasts the logic to each node. He uses DELETE to clear all rules and the current state on each node. Then, he sends (cf. Table III) the initial state, all the rules representing the FST, and the Observers' list using the PUT method. The switch is initialised in state "rlsd". The two rules state that, on receiving the "push" message, the switch will alternately be "rlsd" or "prsd" (in this case, "push" is a real message, generated by the hardware; "rlsd" and "prsd" are logical states, defined by the user). The last order links the switch to node fe80::f0:1:303, which is the IPv6 address of the light controlled by this switch. This light (fe80::f0:1:303) must then be configured to react to the messages sent by our node (not shown in this example). Each node is now ready, and the choreography can start. Nodes communicate with each other using the POST method, receiving and sending messages following their FST instructions, as shown in Figure 6.

VI. IMPLEMENTATION AND VALIDATION

A.
Implementation on Netkit, Contiki, Cooja, and TelosB

To test our architecture, we realised a simulation using Coapy [1] to implement the services offered by a D-LITe node. Nodes were simulated on a virtual network with Netkit. Our objective was to test our language. Once virtual nodes were collaborating under Netkit with Coapy, we implemented our code on Contiki [13]. Contiki offers 6LowPAN, CoAP and REST implementations, and runs on real nodes such as TelosB or MicaZ. It comes with Cooja, a network simulator of emulated motes. Our D-LITe implementation (Figure 7) has been tested in the Cooja emulator and on real nodes (TelosB). Coapy is used as a client to send commands from a PC. We wrote scripts sending the initial state, the rules and the observers list for each example presented in Section IV-C. We also use Copper, a Firefox plugin handling CoAP, to get values or send commands to nodes directly from a PC.

D-LITe's code uses the 6LowPAN and CoAP APIs provided by Contiki (Figure 7). The SALT message decoder and the FST rule analyser are implemented in our D-LITe code. The binary size of D-LITe for a TelosB is 47KB. A TelosB has 48KB of program flash memory for storing programs and 10KB of RAM for data. Programming a TelosB is done by flashing the ROM through a USB connector. In our architecture, the software is divided into two parts: the D-LITe framework (the fixed part) and the FST description (the variable part, written by users). We flash D-LITe once on a node and store it in ROM; no further physical contact with the node is required. Then, each user program (i.e. the FST description) is sent through the network, at any time, and stored in RAM. The FST is then executed by the D-LITe framework. All manipulated data are 6 characters long. A rule's length is 24 Bytes (4 words of 6 characters: 2 states and 2 messages). Our implementation can handle up to 50 rules. Each observer is stored in 16 Bytes (the size of an IPv6 address).
We planned a maximum of 20 observers. These data describing the FST's behaviour represent 1526 Bytes of the 10KB of RAM available [2].

B. Validation use case 1: a simple example

This is the classical way to control an on/off device (Figure 3): we just want the switch to control the light. The FST's details were shown before (cf. Section IV-C). 5 orders are used to configure the switch. The first one is a call to the DELETE CoAP method. The following 4 orders are sent with the PUT method (Table III): one to define the initial state, then the 2 rules, and finally the observer (the light). 4 orders are sent to the light device: the DELETE CoAP method to clean the FST, then the initial state (dark), and finally 2 rules (when receiving "on", go to the "light" state, which is a real state, and when receiving "off", go to the "dark" state, which is also a real state). The whole code is sent in 9 CoAP packets.

D. Validation use case 3: using loss of semantic

Dealing with more than two switches to control a light can be done with D-LITe. In that case, each node just has to send a signal to make the light change its state. If the light is on and someone presses a button, the light's state needs to be changed: the switch's former state and the message's type do not matter. Figure 5.
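This loss-of-semantic behaviour is easy to simulate off-node. A minimal sketch (a Python stand-in for the on-node FSTs; the rule tables follow the use case described above, with every switch input producing the same "action" output and the light acting as a flip-flop):

```python
# An FST as a rule table: (state, input) -> (next state, output);
# a None output means nothing is sent to the observers.
switch_fst = {
    ("nop", "press"):    ("nop", "action"),
    ("nop", "released"): ("nop", "action"),
}
light_fst = {
    ("dark",  "action"): ("light", None),
    ("light", "action"): ("dark",  None),
}

def step(fst, state, msg):
    """Apply one transition; inputs without a matching rule are ignored."""
    return fst.get((state, msg), (state, None))

# Any number of switches observe the same light: each switch event
# produces an "action" that toggles the light, whatever the switch was.
light_state = "dark"
for event in ["press", "released", "press"]:
    _, out = step(switch_fst, "nop", event)
    if out is not None:
        light_state, _ = step(light_fst, light_state, out)
```

Three switch events toggle the light three times, so it ends up on.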
After deleting the FSTs in each node, each switch is initialised in state "nop". 2 rules are sent to explain that the "press" and "released" messages generate the same "action" output and go back to the "nop" state. Each node receives an order to register the light as listener. Setting the light is just as simple: the FST is deleted, then the initial state is set to "dark", and 2 rules describe a flip-flop: the "action" message changes the state from "dark" to "light" and vice versa. This application uses 4 CoAP packets to configure the light. Each switch needs 5 packets to be set; the 2 rules and 1 observer's address for each switch represent 64 Bytes of the node's RAM.

VII. Conclusion

D-LITe splits the application in two parts. A fixed part is installed once on each node physically (flashed in ROM, for example). That part offers generic services and access. The second part is dynamically uploaded through the network. It describes the application's logic using a very simple textual form (i.e. SALT). This architecture gives D-LITe some advantages. Any change is simple and fast to deploy. No physical access to a node is needed to completely re-adapt its behaviour. The logic is not very hard to describe: it uses a textual form and is hardware independent. Programming is based on node cooperation, each participating node supporting a small part of the overall application. The vision of the application is a choreography of FSTs, exchanging messages and reacting to received ones. Even if the possibilities of FSTs are restricted, our architecture covers many usual IoT use cases. Our implementation on TelosB shows that D-LITe can run on constrained devices (48KB of ROM); the TelosB RAM (10KB) can store up to 50 rules and 20 observers. The D-LITe framework is easy to access and operate with standard and well-known tools, as it is based on IPv6 and REST. We are already using D-LITe to test applications, and see where it can be adapted.
The main contribution of this paper was to show that it is possible to quickly and easily develop IoT applications in a standardised way, and to spread them over any kind of hardware through the Capillary Internet. In the future, we will mainly work on improving the architecture, offering reliability, and adding some configuration automation.

REFERENCES
A BSP algorithm for the state space construction of security protocols

Frédéric Gava, Michaël Guedj
LACL, University of Paris-East Créteil, France
Email: frederic.gava@univ-paris-est.fr, michael.guedj@univ-paris-est.fr

Franck Pommereau
IBISC, University of Évry, France
Email: franck.pommereau@ibisc.univ-evry.fr

Abstract

This paper presents a Bulk-Synchronous Parallel (BSP) algorithm to compute the discrete state space of structured models of security protocols. The BSP model of parallelism avoids concurrency-related problems (mainly deadlocks and non-determinism) and allows us to design an efficient algorithm that is at the same time simple to express. A prototype implementation has been developed, allowing us to run benchmarks showing the benefits of our algorithm.

1. Introduction

In a world strongly dependent on distributed data communication, the design of secure infrastructures is a crucial task. At the core of security-sensitive computer applications are security protocols, i.e., sequences of message exchanges aiming at distributing data in a cryptographic way to the intended users and providing security guarantees. This leads to searching for ways to verify whether a system is secure or not. Enumerative model-checking is well adapted to this kind of asynchronous, non-deterministic system containing complex data types. In this paper, we consider the problem of constructing the state space of labelled transition systems (LTS) that model security protocols. Let us recall that the state space construction problem is the problem of computing the explicit representation of a given model from the implicit one. This space is constructed by exploring all the states reachable through a successor function from an initial state. Generally, during this operation, all the explored states must be kept in memory in order to avoid multiple explorations of the same state.
Once the state space is constructed, or during its construction, it can be used as input for various verification procedures, such as reachability analysis or model-checking of temporal logic properties. State space construction may be very consuming both in terms of memory and of execution time: this is the so-called state explosion problem. The construction of large discrete state spaces is thus a computationally intensive activity with extreme memory demands, highly irregular behaviour, and poor locality of references. This is especially true when complex data structures are used in the model, such as the knowledge of an intruder in security protocols. Because this construction can exhaust the memory of single- or multi-processor systems, it has led to considering the larger memory space available in distributed systems [1], [2]. Parallelising the state space construction over several machines is thus done in order to benefit from all the storage and computing resources of each machine. This reduces both the amount of memory needed on each machine and the overall execution time.

Distributed state space construction. One of the main technical issues in distributed-memory state space construction is to partition the state space among the participating machines. Most approaches to distributed-memory state space construction use a partitioning mechanism that works at the level of states, which means that each single state is assigned to a machine. This assignment is made using a function that partitions the state space into subsets of states. Each such subset is then "owned" by a single machine. To have efficient parallel algorithms for state space construction, we see two requirements.
First, the partition function must be computed quickly and defined such that a successor state is likely to be mapped to the same processor as its predecessor; otherwise the computation will be overwhelmed by inter-processor communications (the so-called cross transitions), which obviously implies a drop of computation locality and thus of performance. Second, balancing of the workload is obviously needed [3] because it is necessary to fully profit from the available computational power to achieve the expected speedup. In the case of state space construction, the problem is hampered by the fact that the future size and structure of the undiscovered portion of the state space are unknown and cannot be predicted in general. While it has been shown that pure static hashing for the partition function can effectively balance the workload and achieve reasonable execution time as well [4], this method suffers from some obvious drawbacks [5], [6]. First, it causes too many cross transitions. Second, if ever in the course of the construction just one processor is so burdened with states that it exhausts its available main memory, the whole computation fails or slows down too much due to swapping.

Verifying security protocols. Designing security protocols is complex and often error prone: various attacks are reported in the literature against protocols thought to be "correct" for many years. These attacks exploit weaknesses in the protocol that are due to the complex and unexpected interleavings of different protocol sessions, as well as to the possible interference of malicious participants, i.e., the attacker. Furthermore, attacks are not as simple as they appear [7]: the attacker can generally be powerful enough [8] to perform a number of potentially dangerous actions, such as intercepting messages or replacing them by new ones using the knowledge it has previously gained; and it is able to perform encryption and decryption using the keys within its knowledge.
Consequently, the number of potential attacks generally grows exponentially with the number of exchanged messages. Formal methods offer a promising approach for the automated analysis of security protocols: the intuitive notions are translated into formal specifications, which is essential for a careful design and analysis, and protocol executions can be simulated, making it easier to verify various security properties. Formally verifying security protocols is a well-established domain that is still actively developed. Different approaches exist, such as [9], [10], [11], and tools are dedicated to this purpose, such as [12], [13].

**Contribution.** In this paper, we exploit the well-structured nature of security protocols and match it to a model of parallel computation called BSP [14], [15]. This allows us to simplify the writing of an efficient algorithm for computing the state space of finite protocol sessions. The structure of the protocols is exploited to partition the state space and reduce cross transitions while increasing computation locality. At the same time, the BSP model allows us to simplify the detection of the algorithm's termination and to load-balance the computations.

**Outline.** First, we briefly review in Section 2 the context of our work, that is, the BSP model, models of security protocols, and their state space representation as LTS. Section 3 is dedicated to the description of our new algorithm, constructed in a step-wise manner from a sequential one. Then, in Section 4, we briefly describe a prototype implementation and apply it to some typical protocol sessions, giving benchmarks to demonstrate the benefits of our approach. Related works are discussed in Section 5, while a conclusion and future works are presented in Section 6.

2. Context and general definitions

2.1. The BSP model

In the BSP model, a computer is a set of uniform processor-memory pairs connected through a communication network allowing the inter-processor delivery of messages [14], [15].
Supercomputers, clusters of PCs, multi-cores, GPUs, etc., can be considered as BSP computers. A BSP program is executed as a sequence of super-steps (see Fig. 1), each one divided into three successive disjoint phases: first, each processor uses only its local data to perform sequential computations and to request data transfers to other nodes; then, the network delivers the requested data; finally, a global synchronisation barrier occurs, making the transferred data available for the next super-step. The execution time (cost) of a super-step is the sum of the maxima of the local processing time, the data delivery time and the barrier time. The cost of a program is the total sum of the costs of its super-steps. On most cheaper distributed architectures, barriers often become more expensive when the number of processors increases. However, dedicated architectures make them much faster, and they also have a number of attractions. In particular, they dramatically reduce the risks of deadlocks or livelocks, since barriers do not create circular data dependencies. The BSP model considers communication actions en masse. This is less flexible than asynchronous messages, but easier to debug, since there are many simultaneous communication actions in a parallel program and their interactions are usually complex. Bulk sending also provides better performance, since sending a block of data is faster than sending each datum individually, because of network latency.

2.2. State spaces of protocol models

A labelled transition system (LTS) is an implicit representation of the state space of a modelled system. It is defined as a tuple \((S, T, \ell)\) where \(S\) is the set of states, \(T \subseteq S^2\) is the set of transitions, and \(\ell\) is an arbitrary labelling on \(S \cup T\).
Given a model defined by its initial state \(s_0\) and its successor function \(\text{succ}\), the corresponding explicit LTS is \(\text{LTS}(s_0, \text{succ})\), defined as the smallest LTS \((S, T, \ell)\) such that \(s_0 \in S\), and if \(s \in S\) then for all \(s' \in \text{succ}(s)\) we also have \(s' \in S\) and \((s, s') \in T\). The labelling may be arbitrarily chosen, for instance to define properties on states and transitions with respect to which model checking is performed. In this paper, we consider models of security protocols involving a set of agents, and we assume that any state can be represented by a function from a set \(L\) of locations to an arbitrary data domain \(D\). For instance, locations may correspond to local variables of agents, shared communication buffers, etc. As a concrete formalism to model protocols, we have used an algebra of coloured Petri nets [16] allowing for easy and structured modelling. However, our approach is largely independent of the chosen formalism and it is enough to assume that the following properties hold: (P1) Any state of the system can be described as a function \(L \rightarrow D\). (P2) There exists a subset \(L_R \subseteq L\) of reception locations corresponding to the information learnt (and stored) by agents from their communication with others. (P3) Function \(\text{succ}\) can be partitioned into two successor functions \(\text{succ}_R\) and \(\text{succ}_L\) that correspond respectively to the successors that change, or not, the state on the locations from \(L_R\). More precisely: for every state \(s\) and every \(s' \in \text{succ}(s)\), if \(s'|_{L_R} = s|_{L_R}\) then \(s' \in \text{succ}_L(s)\), else \(s' \in \text{succ}_R(s)\); where \(s|_{L_R}\) denotes the state \(s\) whose domain is restricted to the locations in \(L_R\). Intuitively, \(\text{succ}_R\) corresponds to transitions upon which an agent receives information and stores it.
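To make properties (P1)–(P3) concrete, here is a small sketch in the spirit of the Python prototype used later (the locations and successor values are hypothetical, not taken from an actual protocol model): a state is a function \(L \rightarrow D\) represented as a dict, and \(\text{succ}(s)\) is split by comparing restrictions to \(L_R\).

```python
L_R = {"inbox"}  # reception locations (hypothetical example)

def restrict(state, locs):
    """s restricted to a subset of locations (written s|_{L_R} in the text)."""
    return {loc: val for loc, val in state.items() if loc in locs}

def split_succ(s, successors):
    """Partition the successors of s into succ_L (L_R unchanged)
    and succ_R (L_R changed), following property (P3)."""
    succ_L, succ_R = [], []
    for s2 in successors:
        (succ_L if restrict(s2, L_R) == restrict(s, L_R) else succ_R).append(s2)
    return succ_L, succ_R

s = {"pc": 0, "inbox": ()}
successors = [
    {"pc": 1, "inbox": ()},        # internal move: nothing received
    {"pc": 0, "inbox": ("msg",)},  # reception: a message is stored
]
succ_L, succ_R = split_succ(s, successors)
```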
On concrete models, it is generally easy to syntactically distinguish the transitions that correspond, in the protocol, to a message reception with information storage. Thus, it is easy to partition \(\text{succ}\) as above. This is the case in particular for the algebra of Petri nets that we have used. In the following, the presented algorithms compute only \(S\). This is without loss of generality, and it is a trivial extension to also compute \(T\) and \(\ell\), assuming for this purpose that \(\text{succ}(s)\) returns tuples \((\ell(s, s'), s', \ell(s'))\). This is usually preferred in order to be able to perform model-checking of temporal logic properties.

2.2.1. Dolev-Yao attacker. We consider models of protocols where a Dolev-Yao attacker [8] resides on the network. An execution of such a model is thus a series of message exchanges, as follows. (1) An agent sends a message on the network. (2) This message is captured by the attacker, which tries to learn from it by recursively decomposing the message, or decrypting it when the key to do so is known. Then, the attacker forges all possible messages from newly as well as previously learnt information (i.e., the attacker's knowledge). Finally, these messages (including the original one) are made available on the network. (3) The agents waiting for a message reception accept some of the messages forged by the attacker, according to the protocol rules.

2.2.2. Sequential state space construction. In order to explain our parallel algorithm, we start with Algorithm 1, which corresponds to the usual sequential construction of a state space. The sequential algorithm involves a set \(\text{todo}\) of states that holds all the states whose successors have not been constructed yet; initially, it contains only the initial state \(s_0\).
Then, each state \(s\) from \(\text{todo}\) is processed in turn and added to a set \(\text{known}\), while its successors are added to \(\text{todo}\) unless they are already known. At the end of the computation, \(\text{known}\) holds all the states reachable from \(s_0\), that is, the state space \(S\).

3. A BSP algorithm for state space construction

We now show how the sequential algorithm can be parallelised in BSP and how several successive improvements can be introduced. This results in an algorithm that remains quite simple in its expression but actually relies on a precise use of a consistent set of observations and algorithmic modifications. We will show in the next section that this algorithm is efficient despite its simplicity.

3.1. A naive BSP version

Algorithm 1 can be naively parallelised by using a partition function \(\text{cpu}\) that returns for each state a processor identifier, i.e., the processor numbered \(\text{cpu}(s)\) is the owner of \(s\). Usually, this function is simply a hash of the considered state, modulo the number of processors in the parallel computer. The idea is that each processor computes the successors of only the states it owns. This is rendered as Algorithm 2; notice that we assume that arguments are passed by reference, so that they may be modified by sub-programs. This is a SPMD (Single Program, Multiple Data) algorithm, executed by every processor. Sets \(\text{known}\) and \(\text{todo}\) are still used but become local to each processor and thus provide only a partial view of the ongoing computation. So, in order to terminate the algorithm, we use an additional variable \(\text{total}\) in which we count the total number of states waiting to be processed throughout all the processors, i.e., \(\text{total}\) is the sum of the sizes of all the \(\text{todo}\) sets. Initially, only state \(s_0\) is known and only its owner puts it in its \(\text{todo}\) set.
This is performed in lines 4–6, where \(\text{mypid}\) evaluates, locally to each processor, to its own identifier. Function \(\text{Successor}\) is then called to compute the successors of the states in \(\text{todo}\). It is essentially the same as the sequential exploration, except that each processor computes only the successors of the states it actually owns. Each computed state that is not owned by the local processor is recorded in a set \(\text{tosend}\), together with its owner's number. This partitioning of states is performed in lines 7–11. Then, function \(\text{Exchange}\) is responsible for performing the actual communication between processors. The primitive \(\text{BSP\_EXCHANGE}\) sends each state \(s\) from a pair \((i, s)\) in \(\text{tosend}\) to processor \(i\), and returns the set of states received from the other processors, together with the total number of exchanged states. The routine \(\text{BSP\_EXCHANGE}\) performs a global (collective) synchronisation barrier, which makes the data available for the next super-step, so that all the processors are then synchronised. Then, function \(\text{Exchange}\) returns the set of received states that are not yet known locally, together with the new value of \(\text{total}\).
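The ownership test and the exchange step can be sketched in Python (a single-process simulation of all the processors at once; `bsp_exchange` below is our stand-in for the actual BSP primitive, not the library's API):

```python
P = 4  # number of processors in the simulated BSP computer

def cpu(state) -> int:
    """Owner of a state: a hash of the state modulo the number of processors."""
    return hash(state) % P

def bsp_exchange(all_tosend):
    """Deliver every (target, state) pair to its target's inbox and
    return the inboxes together with the total number of exchanged states."""
    inboxes = [set() for _ in range(P)]
    total = 0
    for tosend in all_tosend:          # one tosend set per processor
        for target, state in tosend:
            inboxes[target].add(state)
            total += 1
    return inboxes, total

# Processor 0 computed two states that other processors may own:
tosend0 = {(cpu("s1"), "s1"), (cpu("s2"), "s2")}
inboxes, total = bsp_exchange([tosend0] + [set()] * (P - 1))
```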
Algorithm 2 Naive BSP construction
1: todo ← ∅
2: total ← 1
3: known ← ∅
4: if cpu(s₀) = mypid then
5:   todo ← todo ∪ {s₀}
6: end if
7: while total > 0 do
8:   tosend ← Successor(known, todo)
9:   todo, total ← Exchange(known, tosend)
10: end while

Successor(known, todo):
1: tosend ← ∅
2: while todo ≠ ∅ do
3:   pick s from todo
4:   known ← known ∪ {s}
5:   for s′ ∈ succ(s) \ known do
6:     if cpu(s′) = mypid then
7:       todo ← todo ∪ {s′}
8:     else
9:       tosend ← tosend ∪ {(cpu(s′), s′)}
10:    end if
11:  end for
12: end while
13: return tosend

Exchange(known, tosend):
1: received, total ← BSP_EXCHANGE(tosend)
2: return (received \ known, total)

Notice that, by postponing communication, this algorithm allows buffered sending and forbids sending the same state several times. It can be noted that the value of total may be greater than the intended count of states in the todo sets. Indeed, it may happen that two processors compute a same state owned by a third processor, in which case two states are exchanged but only one is kept upon reception. Moreover, if this state has also been computed by its owner, it will be ignored. This is not a problem in practice because, in the next super-step, this duplicated count will disappear. In the worst case, termination requires one more super-step, during which all the processors will process an empty todo, resulting in an empty exchange and thus total = 0 on every processor, yielding the termination.

3.2. Increasing local computation time

Using Algorithm 2, function cpu distributes the states evenly over the processors. However, each super-step is likely to compute very few states, because only few of the computed successors are locally owned. This also results in a bad balance of the time spent in computation with respect to the time spent in communication.
If more states can be computed locally, this balance improves, and the total communication time also decreases because more states are computed during each call to function Successor. To achieve this result, we consider a peculiarity of the models we are analysing. The learning phase (2) of the attacker is computationally expensive, in particular when a message can actually be decomposed, which leads to recomposing a lot of new messages. Among the many forged messages, only a (usually) small proportion are accepted for a reception by agents. Each such reception gives rise to a new state. This whole process can be kept local to a processor, and thus involves no cross transitions. To do so, we need to design a new partition function cpu_R such that, for all states s₁ and s₂, if s₁|_{L_R} = s₂|_{L_R} then cpu_R(s₁) = cpu_R(s₂). For instance, this can be obtained by computing a hash (modulo the number of processors) using only the locations from L_R. On this basis, function Successor can be changed as shown in Algorithm 3.

Algorithm 3 An exploration to improve local computation
Successor(known, todo):
1: tosend ← ∅
2: while todo ≠ ∅ do
3:   pick s from todo
4:   known ← known ∪ {s}
5:   for s′ ∈ succ_L(s) \ known do
6:     todo ← todo ∪ {s′}
7:   end for
8:   for s′ ∈ succ_R(s) \ known do
9:     tosend ← tosend ∪ {(cpu_R(s′), s′)}
10:  end for
11: end while
12: return tosend

The rest is as in Algorithm 2. With respect to Algorithm 2, this one splits the for loop, avoiding calls to cpu_R when they are not required. This may yield a performance improvement, both because cpu_R is likely to be faster than cpu and because we only call it when necessary. But the main benefit of using cpu_R instead of cpu is to generate fewer cross transitions, since fewer states need to be sent. Finally, notice that, on some states, cpu_R may return the number of the local processor, in which case the computation of the successors of such states will occur in the next super-step. We now show how this can be exploited.

3.3.
Decreasing local storage

One can observe that the structure of the computation now closely matches the structure of the protocol execution: each super-step computes the executions of the protocol until a message is received. As a consequence, from the states exchanged at the end of a super-step, it is not possible to reach states computed in any previous super-step. Indeed, the protocol progression matches the super-step succession. This kind of progression in a model execution is the basis of the sweep-line method [17], which aims at reducing the memory footprint of a state space computation by exploring states in an order compatible with progression. It thus becomes possible to regularly dump from the main memory all the states that cannot be reached anymore. Enforcing such an exploration order is usually done by defining on states a measure of progression. In our case, such a measure is not needed, because of the match between the protocol progression and the super-step succession. So we can apply the sweep-line method by making a simple modification of the exploration algorithm, as shown in Algorithm 4.

Algorithm 4 Sweep-line implementation
Exchange(tosend, known):
1: dump(known)
2: return BSP_EXCHANGE(tosend)

The rest is as in Algorithm 3. Statement dump(known) resets known to an empty set, possibly saving its content to disk if this is desirable. The rest of function Exchange is simplified accordingly.

3.4. Balancing the computation

As one can see in the benchmarks below, Algorithm 4 (and, in the same manner, Algorithm 3) can introduce a bad balance of the computations, due to a lack of information when hashing only on L_R. Thus, the final optimisation step aims at balancing the workload.
To do so, we exploit the following observation: for all the protocols we have studied so far, the number of states computed during a super-step is usually closely related to the number of states received at the beginning of the super-step. So, before exchanging the states themselves, we can first exchange information about how many states each processor has to send and how they will be spread over the other processors. Using this information, we can anticipate and compensate balancing problems. To compute the balancing information, we use a new partition function \(\text{cpu}_B\) that is equivalent to \(\text{cpu}_R\) but without the modulo, i.e., we have \(\text{cpu}_R(s) = \text{cpu}_B(s) \mod P\), where \(P\) is the number of processors. This function defines classes of states for which \(\text{cpu}_B\) returns the same value. We compute a histogram of these classes on each processor, which summarises how \(\text{cpu}_R\) would dispatch the states. This information is then globally exchanged, yielding a global histogram that is exploited to compute, on each processor, a better dispatching of the states it has to send. This is done by placing the classes according to a simple heuristic for the bin packing problem: the largest class is placed onto the least loaded processor, which is repeated until all the classes have been placed. It is worth noting that this placement is computed with respect to the global histogram, but then each processor dispatches only the states it actually holds, using this global placement. Moreover, if several processors compute a same state, these identical states will be in the same class, and so every processor that holds such states will send them to the same target. So there is no possibility of duplicated computation because of the dynamic remapping of states.
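This placement heuristic is simple to express. A sketch (function and variable names are ours; the paper's BinPack also re-dispatches the states, while here we only compute the class-to-processor placement):

```python
def bin_pack(histogram, nprocs):
    """Greedy placement: repeatedly put the largest remaining class
    onto the currently least loaded processor.

    histogram maps a class id (a value of cpu_B) to its global state count;
    returns a dict mapping each class id to a processor."""
    load = [0] * nprocs
    placement = {}
    for cls, count in sorted(histogram.items(), key=lambda kv: -kv[1]):
        target = min(range(nprocs), key=lambda p: load[p])  # least loaded
        placement[cls] = target
        load[target] += count
    return placement

# Four classes of uneven sizes, dispatched onto 2 processors:
placement = bin_pack({"a": 10, "b": 7, "c": 5, "d": 2}, 2)
```

With these sizes the two processors end up with loads 12 and 12, whereas a plain modulo could have put the two largest classes on the same processor.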
Algorithm 5 Balancing strategy \[ \text{Exchange}(\text{tosend}, \text{known}) : \\ 1: \text{dump}(\text{known}) \\ 2: \text{return } \text{BSP\_EXCHANGE}(\text{Balance}(\text{tosend})) \\ \] \[ \text{Balance}(\text{tosend}) : \\ 1: \text{histoL} \leftarrow \{(i, \sharp\{(i, s) \in \text{tosend}\})\} \\ 2: \text{compute } \text{histoG} \text{ from } \text{BSP\_MULTICAST}(\text{histoL}) \\ 3: \text{return } \text{BinPack}(\text{tosend}, \text{histoG}) \\ \] The rest is as in Algorithm 4, using \(\text{cpu}_B\) instead of \(\text{cpu}_R\). These operations are detailed in Algorithm 5, where variables \(\text{histoL}\) and \(\text{histoG}\) store respectively the local and global histograms, and function \(\text{BinPack}\) implements the dispatching method described above. In function \(\text{Balance}\), \(\sharp X\) denotes the cardinality of set \(X\). Function \(\text{BSP\_MULTICAST}\) is used so that each processor sends its local histogram to every processor and receives in turn their histograms, allowing each to build the global one. Like any BSP communication primitive, it involves a synchronisation barrier. It may be remarked that the global histogram is not fully accurate, since several processors may have the same state to send. Nor is the computed dispatching optimal, since we do not want to solve an NP-hard bin packing problem. But, as shown in our benchmarks below, the result is nevertheless fully satisfactory. Finally, it is worth noting that if a state found in a previous super-step could be computed again, it would be necessary to know which processor owns it: this could not be obtained efficiently when dynamic remapping is used. But this cannot happen, thanks to the exploration order enforced in Section 3.2 and discussed in Section 3.3. Our dynamic remapping of states is thus correct because the classes of states match the locality of the computation. 4.
Experimental results In order to evaluate our algorithm, we have implemented a prototype version in Python, using SNAKES [18] for the Petri net part (which also allowed for a quick modelling of the protocols, including the inference rules of the Dolev-Yao attacker) and a Python BSP library [19] for the BSP routines (which are close to an MPI “alltoall”). We actually used the MPI version (with MPICH) of the BSP-Python library. While largely suboptimal (Python programs are interpreted and there is no optimisation of the representation of the states in SNAKES), this prototype nevertheless allows an accurate comparison of the various algorithms. With respect to the presented algorithms, our implementations differ only in technical details (e.g., the value total returned by \(\text{BSP\_EXCHANGE}\) is actually computed by also exchanging the number of values sent by each processor) and minor improvements (e.g., we used in-place updating of sets and avoided multiple computations of cpu(s) using an intermediate variable). The benchmarks presented below have been performed using a cluster of 20 PCs connected through a 1 Gigabit Ethernet network. Each PC is equipped with a 2GHz Intel® Pentium® dual core CPU and 2GB of physical memory. This allowed us to simulate a BSP computer with 40 processors, each equipped with 1GB of memory. These experiments are designed to reveal how the various aspects of the new method contribute to the overall performance. Our case studies involved the following four protocols: 1) Needham-Schroeder (NS) public key protocol for mutual authentication. 2) Yahalom (Y) key distribution and mutual authentication using a trusted third party. 3) Otway-Rees (OR) key sharing using a trusted third party. 4) Kao-Chow (KC) key distribution and authentication. These protocols and their security issues are documented at the Security Protocols Open Repository (SPORE) [20].
For each protocol, we have built a modular model allowing various scenarios to be defined easily, involving different numbers of each kind of agent (but only one attacker, which is always enough). 4.1. Global performances Figure 2 shows the execution times for two scenarios for each protocol; the depicted results are representative of what we could observe from the large number of scenarios we have actually run. In the figure, the total execution time is split into three parts: the computation time (black), which essentially corresponds to the computation of successor states on each processor; the global and thus collective communication time (gray), which corresponds to the exchange of states; and the waiting times (white), which occur when processors are forced to wait for the others before entering the communication phase of each super-step. Notice that, because of the BSP model, these costs are obtained by considering the maximum times among the processors within each super-step, accumulated over the whole computation. We can see on these graphs that the overall performance of our last algorithm (right-most bars) is always very good compared to the naive algorithm (left-most bars). In particular, the communication and waiting times are always greatly reduced. This holds for large state spaces as well as for smaller ones. A large waiting time corresponds to an unbalanced computation: if some processors spend more time computing successors, the others have to wait for them to finish this computation before every processor enters the communication phase. In several occurrences, we can observe that, by increasing the local computation, we have worsened the balance, which increased the waiting time. This corresponds to graphs where the middle part in the second column is taller than the same part in the left column.
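The cost accounting used in these measurements (maximum among the processors within each super-step, accumulated over the whole computation) can be written down directly; the timing matrix below is hypothetical profiling data, not figures from the benchmarks.

```python
# BSP cost accounting: within each super-step, the slowest processor
# determines the step's duration (everyone waits at the barrier); the total
# is the sum of these per-step maxima.
def bsp_total_time(times_per_superstep):
    # times_per_superstep[i][p] = time spent by processor p in super-step i
    return sum(max(step) for step in times_per_superstep)
```

The per-step maximum is exactly why a bad balance shows up as waiting time: a single slow processor inflates the whole super-step.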
However, we can observe that our last optimisation, which improves the balance without introducing a communication overhead, is always very efficient and results in negligible waiting time in every case. The variations of the observed computation times are similarly caused by a bad balance, because we depicted the accumulation of the maximum times among the processors. Finally, by comparing the left and right columns of results, we can observe that the overall speedup is generally better when larger state spaces are computed. This is mainly due to the fact that the accumulation of waiting time becomes more important on longer runs. 4.2. Memory consumption By measuring the memory consumption of our various algorithms, we could confirm the benefits of our sweep-line implementation when large state spaces are computed. For instance, in the NS scenario with 5M states, we observed an improvement of the peak memory usage from 97% to 40% (maximum among all the processors). Similarly, for the Y scenario with 1M states, the peak decreases from 97% to 60% (states in Y use more memory than states in NS). We also observed, on very large state spaces, that the naive implementation exhausts all the available memory and some processors start to use the swap, which causes a huge performance drop. This never happened using our sweep-line implementation. However, notice that, in all the presented scenarios, no swapping has occurred, which would have dramatically biased the results. Moreover, this led to nearly identical performances for Algorithms 3 and 4, which explains why we presented only the latter. 4.3. Scalability As a last observation about our algorithm, we would like to emphasise that we observed a linear speedup with respect to the number of processors. In general, most parallel algorithms suffer from an amortised speedup when the number of processors increases. This is almost always caused by the increasing amount of communication that becomes dominant over the computation.
Because our algorithm is specifically dedicated to reducing the number of cross transitions, and thus the amount of communication, this problem is largely alleviated, and we could observe an amortised speedup only for very small models (less than 100 states), for which the degree of intrinsic parallelism is very limited but whose state space is in any case computed very quickly. 5. Related works Distributed state space construction has been studied in various contexts. All these approaches share a common idea: each machine in the network explores a subset of the state space. This procedure continues until the entire state space is generated and no messages are sent anymore [4]. To detect this situation, a termination detection procedure is usually employed. However, these approaches differ on a number of design principles and implementation choices, e.g., the way of partitioning the state space using either static hash functions or dynamic ones that allow dynamic load balancing, etc. In this section, we focus on some of these techniques and discuss their problems and advantages. More references can be found in [5]. In [21], a distributed state space exploration algorithm derived from the Spin model-checker is implemented using a master/slave model of computation. Several Spin-specific partition functions are experimented with, the most advantageous one being a function that takes into account only a fraction of the state vector, similarly to our function \(\text{cpu}_R\). The algorithm performs well on homogeneous networks of machines, but it does not outperform the standard implementation except for problems that do not fit into the main memory of a single machine. Moreover, no clue is provided about how to correctly choose the fraction of states to consider for hashing, while we have relied on the reception locations from \(L_R\). In [6], various techniques from the literature are extended in order to avoid sending a state away from the current processor if its 2nd-generation successors are local.
This is complemented with a mechanism that prevents re-sending already sent states. The idea is to compute the missing states when they become necessary for model-checking, which can be faster than sending them. That clearly improves communications, but our method achieves similar goals in a much simpler way, without ignoring any state. There also exist approaches, such as [22], in which parallelisation is applied to “partial verification”, i.e., state enumeration in which some states can be omitted with a low probability. In our project, we only address exact, exhaustive verification issues. For the partition function, different techniques have been used. In [4], the authors use a prime number of virtual processors and map them onto the real processors. This improves load balancing but has no real impact on cross transitions. In [23], the partition function is computed by a round-robin on the successor states. This improves the locality of the computations but can duplicate states. Moreover, it works well only when network communication is substantially slower than computation, which is not the case on modern architectures for explicit model-checking. In [24], a user-defined abstract interpretation is used to reduce the size of the state space, which then allows the abstract graph to be distributed; the concrete graphs are then computed in parallel for each part of the distributed abstract graph. In contrast, our distribution method is fully automated and does not require input from the user. There are many tools dedicated to the modelling and verification of security protocols, such as [25], [9], [10]; the most well known is certainly AVISPA [12]. In contrast, our approach is based on a modelling framework (algebras of Petri nets) with explicit state space construction, which is not tied to any particular application domain. Our approach, however, relies on the particular structure of security protocols.
We believe that our observations and the subsequent optimisations are general enough to be adapted to the tools dedicated to protocol verification: we worked in the very general setting of LTS, defined by an initial state and a successor function. Our only requirements are three simple conditions (P1 to P3), which can be easily fulfilled within most concrete modelling formalisms. 6. Conclusion and future works The critical problem of state space construction is determining whether a newly generated state has been explored before. In a serial implementation, this question is answered by organising the known states in a dedicated data structure and looking up new states in that structure. As this is a centralised activity, any parallel or distributed solution must find an alternative approach. The common method is to assign states to processors using a static partition function, which is generally a hashing of the states [4]. After a state has been generated, it is sent to its assigned location, where a local search determines whether the state already exists. This leads to two main difficulties. First, the number of cross transitions is too high, leading to too heavy a network use. Second, memorising all the states in the main memory is impossible without crashing the whole computation, and it is not clear when it is possible to dump some states to disk, nor whether heuristics like those in [21], [2] would work well for complex protocols. Our first solution is to use the well-structured nature of security protocols to choose which part of the state is really needed for the partition function, and to empty the data structure in each super-step of the parallel computation. Our second solution entails an automated classification of states and a dynamic mapping of classes to processors. We find that both our methods execute significantly faster and achieve better network use than a classical method.
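The contrast between the classical static partition and the one used here can be illustrated with a small sketch. The state layout is hypothetical (a tuple of agent configurations plus the attacker's knowledge); the real \(\text{cpu}_R\) hashes the protocol-dependent part of the state identified in the earlier sections, not this toy layout.

```python
# Classical static partition: hash the whole state, so virtually every
# transition changes the hash and becomes a cross transition.
def cpu_naive(state, P):
    return hash(state) % P

# Partition in the spirit of cpu_R: hash only the part of the state that is
# stable between message receptions (here, the agents' part), so purely
# local steps keep a state on the same processor.
def cpu_r(state, P):
    agents, _attacker_knowledge = state  # ignore the fast-changing part
    return hash(agents) % P
```

With `cpu_r`, growing the attacker's knowledge does not move a state to another processor, which is exactly what keeps local computations local.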
Furthermore, we find that our method to balance states does indeed achieve better network use and memory balance, and runs faster. The fundamental message is that, for parallel discrete state space construction, it is essential to exploit characteristics of the models and to structure the computation accordingly. We have demonstrated techniques that prove the feasibility of this approach and demonstrate its potential. Key elements of our success were (1) an automated classification of states that reduces cross transitions and the memory footprint, while improving the locality of computation, and (2) the use of global barriers (a low-overhead method) to compute a global remapping of states and thus improve the balancing of the workload, achieving a good scalability. Future works will be dedicated to building a real and efficient implementation from our prototype. It will feature in particular a temporal logic model-checker, allowing the verification of more than reachability properties. Using this implementation, we would like to run benchmarks in order to compare our approach with existing tools. We would also like to test our algorithm on parallel computers with more processors, in order to confirm the scalability that we could observe on 40 processors. Moreover, we are working on the formal proof of our algorithm. Proving a verification algorithm is highly desirable in order to certify the truth of the diagnostics delivered by such an algorithm. Such a proof is possible because, thanks to the BSP model, our algorithm remains simple in its structure. Finally, we would like to generalise our present results by extending the application domain. In the security domain, we will consider more complex protocols with branching and looping structures, as well as complex data type manipulations. In particular, we will consider protocols for secure storage distributed through peer-to-peer communication [26].
Another generalisation will be to consider symbolic state space representations, in particular those based on decision diagrams. References
Industrial Productivity Solutions Guide

Adding a sixth sense to your industrial machines

Contents:

- Introduction
- About This Guide
- Prerequisites for Following the Tutorial Steps in This Guide
- The Impulse
- Data
- DSP — Feature Engineering at the Edge
- Machine Learning Model
- Creating the Impulse — Step-by-Step Guide
- Deploying the Impulse — Putting it All to the Edge
- Automation
- Collaboration
- Summary
- F.A.Q.
  - What are the minimum hardware requirements to run the Edge Impulse inferencing library on my embedded device?
  - What frameworks does Edge Impulse use to train the machine learning models?
  - What engine does Edge Impulse use to compile the Impulse?
  - Is there a downside to enabling the EON Compiler?
  - Can I use a model that has been trained elsewhere in Edge Impulse?
  - How does the Feature Explorer visualize data that has more than three dimensions?
  - What is the typical power consumption of the Impulse running on my device?
  - What is the .eim model format for Edge Impulse for Linux?
  - How is the labeling of the data performed?
- About the Author

Introduction

Edge Impulse is the leading software platform that helps companies build and deploy real machine learning (ML) applications at the edge. Building production-grade and edge-ready ML pipelines with Edge Impulse helps enterprises unlock more value than ever by converging information technology (IT) and operational technology (OT) in the industrial space. We accelerate time to market, improve ML outcomes, and de-risk on-device deployment. **Edge Impulse** (noted as EI throughout this document) offers a comprehensive solution that caters to the various stages of a standard machine learning (ML) pipeline. It resembles the toolkit of a skilled architect designed for building skyscrapers.
Just as an architect starts with a foundation, builds upwards, adjusts to challenges, and integrates new designs and technologies over time, EI provides the tools to start, adapt, and refine ML solutions, ensuring they’re robust, current, and optimized for your data. With a focus on data collection, cleaning, feature extraction, model training, testing, and deployment, Edge Impulse ensures that users can access the necessary tools at each step. Users can efficiently manage the entire ML workflow, from data ingestion to model deployment, by seamlessly connecting these components through a unified software platform. Beyond its integrated approach, Edge Impulse provides APIs and SDKs, enabling users to customize their workflows and integrate with external tools when needed. Moreover, Edge Impulse acknowledges the importance of collaboration, incorporating collaboration tools while maintaining a strong focus on security and compliance, which is particularly relevant for teams working in the industrial productivity domain. With these features, Edge Impulse empowers professionals of any ML expertise to build and deploy ML models to the edge effectively. Upcoming sections of this guide delve deeper into the features and capabilities that Edge Impulse offers to aid understanding and provide insight on using it effectively for industrial machine health and productivity ML applications. About This Guide This guide serves as a reference example of using the Edge Impulse platform for building an edge machine learning (ML) application to enable an industrial predictive maintenance use case. 
It is designed as a tutorial, providing descriptions and examples for each part of building an Edge ML application, namely:

- **Data Collection, Cleaning, and Transformation**
- **Digital Signal Processing (DSP)** — i.e., Feature Engineering
- **ML Model Construction** — how to choose, train, test, and tune
- **Deployment** of the resulting inference library on any target device

In addition to the Edge ML context for the abovementioned parts, the guide introduces the notion of an **Impulse** — which Edge Impulse defines as a processing pipeline including DSP and an ML model. This is followed by the **Data** section, which provides a real-world example dataset to demonstrate how to work with data in the Edge Impulse platform and illustrate an example industrial use case. The section **Creating the Impulse — Step-by-Step Guide** follows with a sequential explanation of all the actions necessary to create a complete Edge ML pipeline — from importing a dataset to training and evaluating a machine learning model. An **impulse** is the fundamental part of the Edge Impulse platform — it represents a pipeline consisting of feature engineering and a machine learning model that is deployed to the edge device. The section **Deploying the Impulse — Putting it All to the Edge** provides instructions on deploying the resulting ML inference pipeline to an edge device. You are encouraged to follow along — all the resources, including a sample dataset, are **publicly available** and referenced throughout this guide. Everything described in the guide (and more) can also be performed programmatically through our **APIs** or **Python SDK**. In addition, links to relevant articles in our comprehensive **documentation portal** are provided for an in-depth understanding. There, one can find more information about each of the steps and features covered in this guide, alongside other helpful information.
Prerequisites for Following the Tutorial Steps in This Guide

- Clone the GitHub repository github.com/edgeimpulse/industrial-solution-guide. It contains the code that is demonstrated for some of the features (section Pre-Processing and Transformation), as well as a copy of and a link to the publicly accessible dataset used in this guide.
- Become part of an Edge Impulse organization — if you are not a paying customer yet, you can get access to an organization as part of an enterprise trial: studio.edgeimpulse.com/trial-signup
- Set up an AWS S3 bucket (to be able to work with organizational dataset features).
- Create a project in the Edge Impulse platform, or copy a pre-made project that already has everything set up.

The Impulse

The Edge Impulse platform allows one to create an Impulse — a machine learning pipeline that will eventually run on the target device. A combination of an impulse and a dataset constitutes a project. An impulse consists of “blocks” representing the steps in the ML pipeline:

- **Input data block:** creating a training dataset and selecting applicable input parameters, such as the window size and sampling frequency for time series, or the resizing resolution for images
- **Processing block:** selecting the DSP algorithm, tuning the algorithm parameters, and generating features from input data
- **Learning block:** selecting and tuning the machine learning model, and **training / retraining the model** using the features generated by the DSP block

![Figure 1: Overview of the impulse](image)

The Edge Impulse platform provides all the infrastructure necessary for each step of Impulse creation, including **CPU and GPU compute** for model training and DSP feature generation, AutoML tools such as the **EON Tuner**, as well as **graphs and visualizations** for evaluating performance, and other advanced features.
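The input block's time-series windowing mentioned above can be sketched conceptually. This is a simplified illustration, not the platform's implementation; the `stride` parameter plays the role of the window increase between consecutive windows.

```python
# Conceptual sketch of input-block windowing: slice a time series into
# fixed-size windows, advancing by "stride" samples between windows. Each
# window then flows into the processing (DSP) block as one example.
def sliding_windows(samples, window_size, stride):
    return [samples[i:i + window_size]
            for i in range(0, len(samples) - window_size + 1, stride)]
```

A smaller stride produces more (overlapping) training windows from the same recording, at the cost of more correlated examples.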
For each block type, Edge Impulse has already developed a large set of processing algorithms and ML models that can be selected when composing an Impulse. Users can extend the set of blocks available for composing an impulse in the platform. This can be achieved via Custom Processing Blocks and Custom Learning Blocks. Custom blocks are helpful if the user has an existing DSP algorithm implementation or an ML model architecture created outside of Edge Impulse that is specific to some sensor data or a particular use case. Once the impulse is created, the whole pipeline can be deployed to a target device. The Deployment step allows you to generate a C++ library that contains a version of the impulse highly optimized specifically for inference on the target device or gateway used in your project. Further sections cover each step of creation and deployment separately. **On-Device Performance Estimation** As mentioned above, each block of the resulting impulse will eventually run on the edge device. Therefore it is valuable to get an early idea of how each step of the pipeline will perform once deployed, to avoid spending time on algorithms that might not fit the selected device. To see these estimations, select the target device in the “Dashboard” section of your project (Project Info pane). Then, any time the processing or learning blocks are modified, the live performance estimations will be updated. Metrics include latency, memory usage, and storage requirements and are visible on the respective blocks’ pages. A total estimation for the whole impulse is also provided on the deployment page at the final step of impulse creation. **Figures 3a and 3b** show estimations for the DSP and ML model blocks of an impulse for the target selected for this project — the Nordic nRF5340 DK. Data The process of creating a machine learning model begins with data.
The data may originate from various devices or other sources (prototype devices being developed vs. industrial-grade reference devices), have different formats (Excel sheets, images, CSV, JSON, etc.), and be stored in various places (researchers’ computers, Dropbox folders, Google Cloud Storage, S3 buckets, etc.). Data in Edge Impulse can exist in two places: the Organization and the Project. Edge Impulse’s Organizational Data component (part of the data acquisition pipeline) is designed to import data from various sources and plays a role similar to an organization's feature store. Data can be organized into different datasets that can be reused across various projects and transformation experiments. This facilitates the collection of diverse, real-world data (an in-house digital asset for your organization) to train robust models. Additionally, users can collect and store training and test data directly in a Project, alongside their model and deployment code. Rather than relying on prebuilt datasets or requiring users to construct their own data-gathering technology, Edge Impulse offers a variety of data ingestion methods. Organizational data should be the default way to start an Edge ML project. However, for quicker experiments, it is okay to use project data directly. Figure 4 illustrates an example setup of the dataflow in an organization, showcasing the different ways data can be imported into a project. Data features are described in more detail in the section Data. About the Dataset Building a dataset is a key part of the ML process. One of the key values that Edge Impulse provides is its straightforward yet powerful tools for data collection, data labeling, and transformation. Please consult our docs, which outline how to begin creating your own datasets in Edge Impulse. In this guide, we will showcase the features of Edge Impulse by using an open dataset for sensorless AC motor drive diagnostics.
The dataset can be accessed here, as well as from the GitHub repository accompanying this guide. This dataset consists of samples of the electric current measured during the operation of a sensorless AC motor drive. Samples were recorded over 11 sessions of motor operation. In each session, the drive had various intact and defective components, resulting in 11 condition classes; each sample belongs to exactly one of these classes. Each class consists of samples gathered under different operating speeds, load moments, and load forces. The current signals are measured with a current probe and an oscilloscope on two phases. The dataset is presented as a directory with 11 subdirectories, each corresponding to one condition class. For each class, there are eight samples, i.e., time series recordings of two axes of alternating current (AC). Each of these recordings is a 9000 ms array of time series AC measurements sampled at 100,000 Hz. The recordings are in a generic .txt format that is often used when collecting data in industrial settings. The subsection Pre-Processing and Transformation shows a process of converting this format to CSV, which is more commonly used for time series data in ML tasks. Throughout this guide, this dataset will be used to build a machine learning pipeline that classifies motor failures based on the AC data. It can then be deployed to the edge device — for this application, it could be an MCU that is attached to the motor drive and has an AC sensor as an input. **Setting Up an Organization Dataset** An organization dataset can be imported from several locations, including an AWS S3 bucket or Google Cloud Storage bucket that might contain raw data. **External S3 Bucket** To link an S3 bucket to an organization, select the “Data” tab in the organization view, select the “Buckets” view, and click “+ Add new bucket” in the top right corner.
![Figure 5: S3 bucket setup form](image) Then fill in “my-raw-data-bucket” in the “Bucket” field and “eu-west-1” in “Region.” Fill in the bucket access key and secret key, and press “Add bucket.” Now that the S3 bucket is connected, it can be used to create an **Organization Dataset** from one of the directories in the bucket. ![Figure 6: Organization Dataset setup form](image) Locate the “Datasets” view and click “+ Add new dataset.” Select the “clinical” type (this dataset type works well with the automated pipelines covered later in this guide), give the dataset a name, and provide the bucket path where the data is located. The bucket can have several folders, each containing a different dataset, and each folder can be mapped to a separate Edge Impulse dataset. Click “Add dataset.” After the job is complete, a dataset explorer will appear that contains the same file structure as the directory previously linked to the bucket. Now, this dataset can be used to clean, transform, and import data across the Edge Impulse platform. The status of the bucket connections can always be checked with a health indicator, as shown by the green indicator on the “Buckets” tab. Figure 7: Storage buckets list after a bucket is successfully added Figure 8: Directory structure in the AWS S3 bucket Upload Portal An Upload Portal is a secure way to allow external parties to upload data to your datasets. It provides an easy user interface for adding data without giving access to the content of the dataset or the ability to delete any files. Data uploaded through the portal can be stored on-premise or in your own cloud infrastructure. Upload portals are particularly useful for collecting data from external sources directly into your storage buckets, facilitating the use of this data in Edge Impulse for further processing.
Importing Data into the Project

Sometimes it is necessary to add data only to the current project, without going through a feature store and transformation pipeline — for example, to quickly try a small subset of samples against an initial hypothesis. Below are the options to achieve this.

CSV Wizard

The most common format for importing time-series data is CSV. If the samples stored on your machine are already in CSV format, you can use one representative sample to define a template — setting up the column semantics — by which all the other files will be imported. This feature is called CSV Wizard and can be accessed from the data upload page of a project.

Direct Ingestion from Device

The two most common ways to upload data to the Edge Impulse platform directly from an edge target device are:

1. Using the **Data Forwarder**: This method is easy to use if the device can output the data it samples to the serial output. Connect the Edge Impulse CLI tool to the serial port where the computer is receiving the device output, and the samples will be forwarded to the Edge Impulse ingestion service and arrive in the project.

2. Using the **Edge Impulse C ingestion SDK**: This method enables the user to program the device firmware to send the samples to the Edge Impulse ingestion service itself. This is an advanced method that is applicable if the device can directly connect to the internet, but it can also unlock use cases such as active learning to continue collecting raw data samples from devices that are deployed in the field.

Pre-Processing and Transformation

**Transformation blocks** are a very flexible tool that can be leveraged as part of the organizational features that the Edge Impulse platform offers. They can be used for most advanced data transformation use cases.
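As a concrete illustration of the kind of conversion such a block can perform, the sketch below turns one raw .txt recording into a CSV with a timestamp column. The whitespace-separated two-column layout and the 100 kHz sample rate are assumptions based on the dataset description earlier in this guide, and the file paths are hypothetical — the actual block in the accompanying repository is authoritative.

```python
import csv

# Assumed input layout: whitespace-separated rows, one value per AC phase,
# sampled at 100 kHz (both assumptions from the dataset description above).
SAMPLE_RATE_HZ = 100_000


def txt_to_csv(txt_path: str, csv_path: str) -> int:
    """Convert one raw .txt recording into a CSV with a timestamp column.

    Returns the number of data rows written.
    """
    with open(txt_path) as src, open(csv_path, "w", newline="") as dst:
        writer = csv.writer(dst)
        writer.writerow(["timestamp", "current_phase_1", "current_phase_2"])
        rows = 0
        for i, line in enumerate(src):
            parts = line.split()
            if len(parts) < 2:
                continue  # skip blank or malformed lines
            # Timestamp in milliseconds, a common convention for time-series CSVs.
            writer.writerow([i * 1000.0 / SAMPLE_RATE_HZ, parts[0], parts[1]])
            rows += 1
    return rows
```

The same logic, wrapped in a container with an `--in-directory` argument, is the shape of the transformation block set up in the next paragraphs.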
Transformation Blocks can take raw data from the organizational datasets and convert it into files that can be loaded into an Edge Impulse project or another organizational dataset. **Transformation blocks** can be used as part of automated **organization pipelines** and **project pipelines** to automate these processes. Transformation blocks can fetch external datasets, augment/create variants of the data samples, extract metadata from config files, create helper graphs, align and interpolate measurements across sensors, remove duplicate entries, and more. Transformation blocks can be written in **any programming language** that can be executed in a containerized environment, such as Docker. When deployed, they run on the Edge Impulse platform infrastructure. The most common language used for building a transformation block is **Python**. Here is an example of a transformation block used to transform the sensorless AC dataset to a JSON format that the Edge Impulse project can work with. The **Edge Impulse CLI** tools are required to set up and upload the transformation block to the organization.

```
$ edge-impulse-blocks init
Edge Impulse Blocks v1.22.0
? In which organization do you want to create this block? Ivan Demo Org
Attaching block to organization 'Ivan Demo Org'
? Choose a type of block Transformation block
? Enter the name of your block ac-sensorless-txt-to-ei-json
? Enter the description of your block This block takes the files in the format published in the "Dataset for Sensorless Drive Diagnosis" and transforms it into EI json format
? What type of data does this block operate on?
Data item (--in-directory passed into the block)
? Which buckets do you want to mount into this block (will be mounted under /mnt/s3fs/BUCKET_NAME, you can change these mount points in the Studio)?
Your new block 'ac-sensorless-txt-to-ei-json' has been created in '/Users/ivan/ei-solutions/industrial-solution-guide/transform-ac-motor-fault-detection-data'.
When you have finished building your transform block, run 'edge-impulse-blocks push' to update the block in Edge Impulse.
```

**Figure 10:** Uploading a transformation block through Edge Impulse CLI

The source files for this custom block are located in the [repository](#) accompanying this guide.

**Figure 11:** Custom blocks list in organization after adding a new block

DSP — Feature Engineering at the Edge

Digital signal processing (DSP) is the practice of using algorithms to manipulate streams of sensor data. When paired with embedded machine learning, it is common to use DSP to extract, modify, or generate signals before feeding them into machine learning models. A few reasons to apply DSP are:

- Cleaning up a noisy signal
- Removing spikes or outlying values that might be caused by hardware issues
- Extracting the most important information from a signal
- Transforming the data from the time domain to the frequency domain

In Edge Impulse, DSP pre-processing is added to the impulse using the Processing Block. It can be configured to use one of many algorithms that Edge Impulse has already implemented, each suited to a particular purpose. For example:

- **Flatten**: This is the simplest block that extracts statistical features from a time-series sample, such as Max, Min, Standard Deviation, RMS, etc.
- **Spectral Features**: This block extracts frequency, power, and other characteristics of a signal. Low-pass and high-pass filters can also be applied to filter out unwanted frequencies. It is great for analyzing repetitive patterns in a signal, such as movements or vibrations from an accelerometer.
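As an informal sketch of the kind of features the Flatten block produces (an illustration, not Edge Impulse's actual implementation), statistical features over one signal window might be computed like this:

```python
import math


def flatten_features(window):
    """Sketch of Flatten-style statistical features over one signal window.

    Mirrors the kinds of features listed above (average, min, max, RMS,
    standard deviation); an illustration only, not the platform's code.
    """
    n = len(window)
    mean = sum(window) / n
    rms = math.sqrt(sum(x * x for x in window) / n)
    std = math.sqrt(sum((x - mean) ** 2 for x in window) / n)
    return {
        "average": mean,
        "minimum": min(window),
        "maximum": max(window),
        "rms": rms,
        "stdev": std,
    }
```

The Spectral Features block works on the same windows but in the frequency domain, typically starting from an FFT of the (optionally filtered) signal.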
**Multiple DSP blocks** can be selected for one impulse — in which case raw data will be passed independently through all of them, and their outputs will be combined to serve as inputs to the machine learning block. In case there exists a DSP algorithm that is specific to some sensor data and use case, and it is not present in Edge Impulse, it is possible to create a custom processing block with the code of a custom algorithm and use it as part of the impulse. The Edge Impulse team is available to assist in this process.

Machine Learning Model

The machine learning model takes features generated by the processing block as an input and outputs a result. The type of result depends on the problem to solve and the application being built. Some examples of machine learning algorithm types include:

- **Classification**: Classification algorithms try to solve the problem of distinguishing between various *types*, or *classes*, of things. This could mean, based on microphone input, determining if the sound is coming from a fan, a chainsaw, or a pump. Classification models output a score from 0 to 1 that represents how confident the model is that a given sample belongs to each of the classes.
- **Regression**: Regression algorithms predict numerical values based on input features. Common use cases include estimating temperature based on historical data or estimating the motor speed based on the video feed of the motor spinning.
- **Object detection:** This set of algorithms locates the objects of interest in the provided images. One common output format is bounding boxes — the location and size of a “box” where the object is located in the image, with a confidence score for each bounding box, as in classification models.

A machine learning model goes through two lifecycle phases: **training and inference**.
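The per-class confidence scores mentioned for classification models above are commonly obtained by applying a softmax to the model's raw outputs. A minimal sketch (not tied to any specific Edge Impulse model; class labels here are illustrative):

```python
import math


def softmax(logits):
    """Convert raw model outputs (logits) into confidences that sum to 1."""
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]


def predict_class(logits, labels):
    """Return the label with the highest confidence and its score."""
    confidences = softmax(logits)
    best = max(range(len(labels)), key=lambda i: confidences[i])
    return labels[best], confidences[best]
```

For example, `predict_class([1.0, 3.0, 0.5], ["fan", "chainsaw", "pump"])` reports "chainsaw" with its confidence, and the three confidences always sum to 1.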
Training is a computationally intensive task — e.g., a convolutional neural network model is shown labeled data, and its weights are adjusted in a process called “backpropagation.” This is repeated many times until the model predicts the correct labels accurately enough. Due to the intense computing power and infrastructure required, model training is performed in the cloud through the Edge Impulse platform. Once deployed, the model runs in “forward pass” mode — a process called “inference.” An optimized inference engine and a trained model are what gets deployed to the edge device.

**Training**

Model training is the process where the machine learning model learns to recognize patterns, correlations, and relationships in the input data. Edge Impulse model training happens in the Learning Block included in the impulse. A **learning block** is the step of the pipeline that describes the model architecture and training parameters at train time (in the cloud) and performs model inference at inference time (on the target device or gateway). You can use a pre-existing model architecture (e.g. YOLOv5 or ResNet for object detection), or build your own **custom learning block**.

**Testing and Evaluation**

After the model is trained, it can be tested in the Edge Impulse platform. Testing means running inference over a set of samples that were not used during the training and evaluating the performance metrics. Model testing can be accessed from any project through a dedicated tab in the left-side menu.

Automated Machine Learning with EON Tuner

The **EON Tuner** is Edge Impulse’s AutoML (automated machine learning) tool designed to find and select the best embedded machine learning model for a given application within the constraints of your target device.
It performs end-to-end optimization of the combination of a DSP algorithm and a machine learning model, finding the **ideal trade-off** between these two blocks to achieve optimal performance on the given target hardware.

![EON Tuner](image)
*Figure 13: EON Tuner’s best suggested impulse configurations after a run completion*

The advantages of using the EON Tuner include:

1. **Optimization for Target Devices**: It analyzes performance directly on any device fully supported by Edge Impulse, allowing for optimizations specific to the device's hardware capabilities.
2. **Support for Various Task Categories**: The tuner supports different types of sensor data, including motion, images, and audio, optimizing for common applications or task categories within these data types.
3. **Comprehensive Evaluation**: It evaluates different configurations for creating samples from your dataset, tests various parameters and configurations of processing blocks, and evaluates different model architectures, hyper-parameters, and data augmentation techniques.
4. **Flexibility in Configuration**: Users can define the EON Tuner Search Space to constrain the tuner to use steps defined by hardware, customer requirements, or internal knowledge, offering flexibility in meeting specific project needs.

The **EON Tuner** can be accessed from any project via the left-side menu.

Creating the Impulse — Step-by-Step Guide

Now that the data and all the building blocks of an impulse have been covered, let’s create an ML pipeline that can be deployed to an edge device.

Step 0: Import Data to Your Project

As mentioned in section “Data,” there are several ways to import the data into the project. To import the “.txt” formatted AC motor dataset, the transformation block created in section “Pre-Processing and Transformation” will be used. Navigate to the organization page and select the “Data Transformation” page.
![Figure 14: Configuring a transformation job to import the data from an organization dataset into the project](image)

**Figure 14** illustrates the configuration options for this step — select the transformation block created earlier, and the project to import the data to. Press “Start transformation job.” Once the job is complete, navigate to the project data page — the samples should be imported.

**Step 1: Select All the Blocks That Will be Part of Your Impulse**

Once the project has data, it’s time to select the blocks that the impulse will consist of. **Figure 15** illustrates what the impulse will look like.

![Figure 15: Configured Impulse view](image)

**Processing Block**

Press “Add a processing block” to open a list of available processing blocks. For this project, we will select two blocks, so first add a “Spectral Analysis” block, then press “Add a processing block” once again and add a “Flatten” block. The list of input axes for each of the processing blocks corresponds to the format of the data that is in the project dataset — as mentioned before, each sample contains two axes of AC.

Learning Block

Press “Add a learning block” to open a list of available learning blocks. For this project, we will use a LightGBM technique (short for Light Gradient-Boosting Machine) — a highly efficient gradient boosting algorithm that typically works well with low-dimensional inputs and is more efficient than deep learning when working with DSP blocks like Flatten and Spectral Features.

Step 2: Generate Features

Both of the selected processing blocks can be configured using relevant parameters, after which feature generation will happen. This means that all the raw data samples in the training and testing sets will be put through the selected processing algorithms, and the generated features will represent these samples as inputs to the ML algorithm — in this case, LightGBM.
Spectral Features

Navigate to the “Spectral features” tab in the Impulse design section of the project menu. This processing block takes a number of parameters. The DSP Autotuner makes it easy to fine tune the processing block parameters automatically: with one click of a button, the autotuner looks at the entire dataset and recommends a set of parameters tuned to make the most out of it.

Figure 16: “Spectral features” processing block page

Click **“Autotune parameters”** to start the autotuner job. After it’s completed, press **“Save parameters”** to advance to a feature generation screen. Press **“Generate features”** to initiate the feature generation job.

![Figure 17: View of features generated from the “Spectral features” block](image)

Once the job is complete, notice the message **“Job completed”** in the job log alongside the feature explorer. This visualization illustrates how the samples are represented in the feature space of the generated parameters.

**Flatten**

Navigate to the **“Flatten”** tab in the Impulse design section of the project menu. Looking at the raw data window, this block generates seven features that are statistical measures of the signal, namely: Average, Minimum, Maximum, RMS, Standard Deviation, Skewness, and Kurtosis. These features have no parameters other than selecting which ones to include, hence there is no Autotuner option. Select all of the features, then press “Save parameters” to advance to a feature generation screen. Press “Generate features” to kick off the feature generation job. When the job is complete, you will see the message “Job completed” in the job log, alongside a feature explorer — a visualization that illustrates how the samples are represented in the feature space of the generated parameters.

**Step 3: Train the Machine Learning Model**

After the features are generated, they are ready to be used for model training. Navigate to the “LGBM Random Forest” tab.
You’ll be presented with a screen of model hyperparameters that will be used for training. It is a good rule of thumb to start with default parameters and train the model to get a baseline performance. After that, parameters like the number of training iterations can be adjusted. The EON Tuner can be used for that, as described in the section above (Automated Machine Learning with EON Tuner). Press “Start training.” The training log will appear on the left, and once it’s complete, the model’s accuracy against a validation set and a confusion matrix will be displayed. In this case, the model achieved 95.6% accuracy across the whole validation set. Some classes are recognized better than others, which is reflected in different accuracies per class in the confusion matrix. This is the first measure of the model performance that can be considered. Even at this stage, it is possible to reason about how to improve the model performance — for example, if one class performs much worse than all the others, it might mean that it’s underrepresented in the training set. In that case, it might be a good idea to collect more samples of this class to balance the dataset better. The next step is model testing.

Step 4: Test the Machine Learning Model

To test the model, navigate to the “Model testing” tab. It will present a set of samples from the original dataset that was set aside and not involved in the training — this can offer a sense of how the model will perform once deployed. After training the model, new samples can be continuously added to the training set. For instance, if additional experimental runs are conducted on a test device and the data is uploaded directly to the training set, those samples will promptly appear on this page. Subsequently, the trained model can be tested against these newly added samples.

Deploying the Impulse — Putting it All to the Edge

After the impulse is created and the model performance is satisfactory — it is time to deploy it to the device.
There are numerous deployment options, depending on the goal and target. The most flexible one is a C++ library. This option is available in the dropdown menu in the “Deployment” section of the project. After selecting it, clicking “Build” will start the library generation process. The Edge Impulse platform will generate an archive that contains our C++ SDK and the configuration of the impulse, including a highly optimized model. This library can be included directly in a firmware project (e.g., Zephyr, plain Linux, FreeRTOS, etc.). The SDK ships as uncompiled source code, so all of its parts can be reused and adjusted in the firmware project as needed. To make things easy, Edge Impulse offers a set of open-source example firmware projects for several target systems, including but not limited to Linux and Zephyr.

Prerequisites

The following sections describe how to deploy an impulse to a Linux machine using the example-standalone-inferencing-linux project. This project can be used to compile for any Linux-based target, including an ordinary desktop computer. The easiest way to follow the steps is from a Linux machine. However, as described above, the impulse can be deployed to numerous other target devices, including MCUs that Zephyr RTOS supports. To test the impulse on such a device (e.g., Nordic nRF5340 DK), use the example-standalone-inferencing-zephyr repository and follow the “Running your impulse locally (Zephyr)” guide in the Edge Impulse documentation portal. The same model files that are acquired from the platform using the “C++ library” deployment option are used to build any target project (i.e., the model-parameters, tflite-model, and edge-impulse-sdk folders — described further in this section).
EON Compiler

The Edge Optimized Neural (EON) compiler is a tool developed by Edge Impulse to optimize and efficiently run neural networks with reduced RAM and flash usage, all while maintaining accuracy comparable to TensorFlow Lite for Microcontrollers. The EON Compiler incorporates a proprietary compiler that compiles and optimizes neural networks to C++, eliminating complex code, significantly reducing device resource utilization, and saving inference time.

Key Benefits of Enabling EON Compiler:

- 25–55% less RAM
- 35% less flash
- Same accuracy as TFLite
- Faster inference

Download the C++ library and prepare the application

First, clone the standalone Linux repository. The application repository contains all the necessary infrastructure to build and run an application on a Linux machine, namely five applications: audio.cpp, camera.cpp, collect.cpp, eim.cpp, and custom.cpp. A more detailed description of each of them can be found in README.md — for now, we will focus on custom.cpp. This code contains the logic to read an input file — for example, one containing the raw features as they come from a sensor. This example code is useful to quickly test what kind of results the model will generate on the device for any given input. Next, unzip the archive generated after the build is complete — it should contain the following three folders: **model-parameters, tflite-model, and edge-impulse-sdk**. Place these folders in the root of the repository. The file structure should look as follows:

![Figure 23: Directory structure of the example-standalone repository after adding C++ library acquired from the platform deployment](image)

Every time it is necessary to test another model or a new version of the same model downloaded from the platform, delete these three folders and replace them with the new ones.

Build and Test the Application

After everything is in place, it is time to build the project.
There are numerous architectures available to build the application. Detailed build instructions with all the parameters are described in the repository README.md. For standard desktop Linux, a simple command can be used, namely:

```
$ APP_CUSTOM=1 make -j `nproc`
```

This will create a binary executable “./build/custom” that contains the necessary parts from our SDK as well as the code for the DSP blocks and the model created in the platform. Additional build options and flags are described in the repository README. To test the inference, we need a sample of raw data. Navigate to the project page in the platform, open the page of either of the DSP blocks, and select a sample at the top right, for example, a sample from Class 7. Click the “Copy” icon next to the “Raw features” pane. Next, in the root of the app repository, create a file called “features-class7.txt” and paste the copied 199800 floating point numbers in that file.

![Figure 24: Acquiring raw features of a sample from the “Spectral features” processing block](image)

From the root of the repository, run the compiled binary, passing the .txt file as input, as shown below. Now the resulting **pre-processed features array** (outputs of the DSP block) and the **model's confidence for each class**, alongside **time measurements**, are visible for each stage of the impulse:

```
$ ./build/custom features-class7.txt
Running impulse...
Predictions (time: 0 ms):
class10: 0.017982
class11: 0.017982
class12: 0.017684
class13: 0.006652
class14: 0.018347
class15: 0.017382
class16: 0.017680
class17: 0.084447
class18: 0.019721
class19: 0.014447
run_classifier returned: 0 (DSP 8 ms., Classification 0 ms., Anomaly 0 ms.)
Begin output [0.01798, 0.01798, 0.01766, 0.01768, 0.006652, 0.018349, 0.01738, 0.01768, 0.084447, 0.01972, 0.014447]
```

**Figure 25:** Output of a compiled executable performing inference over the previously extracted raw features

**Figure 25** illustrates the output of the compiled executable that performs inference over the raw features extracted in the previous step. First, it prints the **processed features array** — the combined output of the “Spectral Features” and “Flatten” blocks. It then shows the **confidence** the model assigns to each of the 11 classes for this given sample. The **highest confidence (0.684)** was assigned to class 7, meaning that the model classified the provided sample as **class 7**. This matches the real label of the sample we provided. The build processes, the toolchains utilized, and the core application logic may vary depending on the hardware target. However, the fundamental concept remains consistent: the dependency-free C++ code exported from the Edge Impulse platform’s project can be easily integrated into any firmware project and called through several API calls.

**Automation**

Developing ML applications is an iterative process. As the experiment matures and more data is collected, keeping track of all the steps needed to transform the data into the required format can become cumbersome. Edge Impulse provides the capability to build automated **Data Pipelines** — a predefined set of steps that can be triggered based on events (for example, when new data is added to the S3 storage), on a time interval (e.g., once every week), or manually. One can import datasets from existing cloud storage buckets, automate and schedule the imports, label the new data, retrain the model, automatically schedule a deployment task, and more.
**Figure 26** illustrates an example of applying automated pipelines in an end-to-end architecture that can be used in the development of an ML-enabled industrial appliance.

![Figure 26: An example end-to-end automated ML flow set up in Edge Impulse](image)

Below are the instructions to create two pipelines:

- One for **fetching, transforming, and importing** the new data from an S3 bucket to a project dataset
- And the other to **retrain the model** with the updated dataset and then create a new C++ deployment package

**Automating Data Import and Transformation — Organization Data Pipeline**

In the “Pre-Processing and Transformation” section, a transformation block was introduced that takes the data in the format of the open sensorless AC motor failures dataset and converts it into the JSON format of Edge Impulse. It was used to transform the AC data in the S3 bucket and store it either in an Organization Dataset or directly in the project. Now, it is time to set up a pipeline that will perform the same set of actions, but automatically, as soon as new data appears in your S3 bucket. **Figure 27** illustrates an example of such a pipeline from an architectural point of view.

Step 1 — Copy the Transformation Job as a Pipeline Step

Navigate to the “Data Transformation” tab in your organization, select the **Create Job** pane on top, and fill in the parameters of the job. Select the transformation block that was uploaded to the organization earlier, and give the job a name. Now, instead of running this job directly, click “Copy as pipeline step.” This will copy a JSON-encoded job descriptor to the clipboard.

![Figure 28: Configuring a transformation job and copying it as a pipeline step](image)

Step 2 — Create the Automatic Pipeline

Navigate to the “Data Pipelines” tab on the organization page. Press “+ Add new pipeline” and paste the pipeline step from the previous step between the square brackets.
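For orientation, a pipeline definition is a JSON array of such step descriptors. The field names below are illustrative placeholders only — the descriptor copied from “Copy as pipeline step” is authoritative and may use different fields:

```json
[
  {
    "name": "Transform AC motor .txt data to EI JSON",
    "transformationBlockId": 1234,
    "parameters": {}
  }
]
```

Pasting the copied step between the square brackets of a new pipeline produces exactly this shape.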
Fill in the name and the description of the transform job, select a project that this pipeline will feed the data into, and optionally a second dataset in the Edge Impulse organization where the transformed data is stored (called the “SILVER” dataset in this example). Set the **interval** at which this pipeline will be automatically re-run (in this example, it is set to 2 days). Press “Add pipeline” — after this, the pipeline will launch for the first time, and its progress will appear under “Active pipelines.” The time of the next scheduled run, based on the specified parameters, is also displayed.

Figure 29: Copying the transformation job JSON to a pipeline

Figure 30: Configured pipeline view after a successful run

Automating Model Retraining and Deployment

The pipeline just created ensures that the project dataset is always up to date with the organizational data. The next step is to complement it with another pipeline at the project level. As new data gets added, this pipeline manages the entire impulse lifecycle from start to finish:

- It recreates DSP features for the updated dataset and retrains the model
- It versions the retrained model, keeping track of the model's evolution over time
- It also constructs a new C++ library deployment, ensuring that the updated model is efficiently integrated into the target device of choice

![Figure 31: Example architecture of automated Project train and deploy pipeline](image)

This approach closes the loop and automates the last mile of the end-to-end Edge ML flow — from data ingestion all the way to optimized edge library creation. Follow the steps below to configure this automated data pipeline:

Step 1 — Navigate to Data Source Configuration in the Project

Click “+ Add new data source” and select “Don’t import data.” The reason for this selection is that the data in the project is already updated by the previous pipeline.
![Figure 32: Configuring a data source for a project](image)

Step 2 — Configure Pipeline Steps and Interval

Select the project actions that will be automatically performed every time the pipeline is invoked. Configure the pipeline interval. All the parameters can be changed at a later point. Press “Create pipeline.”

Collaboration

Collaboration and reproducibility are essential pillars of impactful product development and maintenance. **Edge Impulse** offers a unified environment where both embedded and ML teams can work together. It supports the deployment of trained models directly onto resource-constrained embedded devices. To add collaborators, press the icon in the “**Collaborators**” pane on the project dashboard and enter the username or email address of the person to add. This shared environment ensures that the ML models are integrated smoothly into the embedded system, eliminating potential integration challenges and fostering collaboration between the two teams. Additionally, by documenting the origin of the dataset, data processing steps, and transformations within your project’s README, Edge Impulse fosters reproducible engineering practices, allowing your peers to replicate and validate your findings. Press “Edit README” in the “About this project” pane on the project dashboard to create a description.

Summary

Edge Impulse’s components are designed to address the different stages of a typical ML pipeline. The arrangement of these components reflects the common process of data collection, data cleaning/transformation, feature extraction, model training, testing, and deployment. By mirroring the standard stages of the ML pipeline, Edge Impulse ensures users have all the tools they need at each stage. This integrated approach enables anyone, whether an expert or a beginner, to build and deploy ML models seamlessly. The “glue” that connects all these components together in Edge Impulse is its unified software platform.
This allows users to manage each stage of the machine learning workflow, right from data ingestion to model deployment, making it faster and easier than ever to get to market with Edge AI. All the components are organized in a cohesive manner and tightly integrated into the platform’s user interface. For example, data gathered and processed in one stage of the workflow is seamlessly available for the next, and so on. Additionally, Edge Impulse provides APIs and SDKs for integrating its platform with other tools and systems. This enables users to customize their workflows and use external tools when necessary. The real power of Edge Impulse lies in its modular and flexible architecture. Users can modify and iterate over an impulse design, data, and model as often as needed until achieving a satisfactory result.

F.A.Q.

What are the minimum hardware requirements to run the Edge Impulse inferencing library on my embedded device?

The minimum hardware requirements for the embedded device depend on the use case: anything from a Cortex-M0+ for vibration analysis, to a Cortex-M4F for audio, a Cortex-M7 for image classification, or a Cortex-A for object detection in video.

What frameworks does Edge Impulse use to train the machine learning models?

We use a wide variety of tools, depending on the machine learning model. For neural networks, we typically use TensorFlow and Keras. For object detection models we use TensorFlow with Google's Object Detection API, and for 'classic' non-neural network machine learning algorithms, we mainly use sklearn. For neural networks, you can see (and modify) the Keras code by selecting "Switch to expert mode" in the block context menu. Another big part of Edge Impulse is the processing blocks, which can be used for data cleansing or data processing to extract important features before passing the data to a machine learning model.
The source code for these processing blocks can be found on GitHub: edgeimpulse/processing-blocks (and you can build your own processing blocks as well).

What engine does Edge Impulse use to compile the Impulse?

It depends on the hardware. For general-purpose MCUs, we typically use the EON Compiler with TFLite Micro kernels (including hardware optimization, e.g., via CMSIS-NN or ESP-NN). On Linux, if you run the Impulse on the CPU, we use TensorFlow Lite. For accelerators, we use a wide variety of other runtimes, e.g., a hardcoded network in silicon for Syntiant, a custom SNN-based inference engine for BrainChip Akida, DRP-AI for the Renesas RZ/V2L, etc.

Is there a downside to enabling the EON Compiler?

The EON Compiler compiles your neural networks to optimized C++ source code, which is then compiled into your application. This is great if you need the lowest RAM and ROM possible (EON typically uses 30-50% less memory than TensorFlow Lite), but you lose some flexibility to update your neural networks in the field, as the network is now part of your firmware. By disabling EON, we place the full neural network (architecture and weights) into ROM and load it on demand.

Can I use a model that has been trained elsewhere in Edge Impulse?

Yes. The Bring Your Own Model (BYOM) feature was designed for this.

How does the Feature Explorer visualize data that has more than three dimensions?

Edge Impulse uses UMAP (a dimensionality reduction algorithm) to project high-dimensionality input data into a 3-dimensional space. This works even for extremely high-dimensionality data such as images.

What is the typical power consumption of the Impulse running on my device?

The simple answer: to get an indication of time per inference, we show performance metrics in every DSP and ML block in the Studio. Multiply this by the active power consumption of your MCU to get an indication of power cost per inference. The more complicated answer: it depends.
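The simple answer above boils down to energy = power × time. A back-of-the-envelope sketch, where every number is an illustrative assumption rather than a measurement for any particular device:

```python
# Back-of-the-envelope energy-per-inference estimate.
# All numbers below are illustrative assumptions, not measured values.

def energy_per_inference_mj(latency_ms: float, active_current_ma: float,
                            supply_voltage_v: float) -> float:
    """Energy (millijoules) = power (milliwatts) * time (seconds)."""
    power_mw = active_current_ma * supply_voltage_v   # P = I * V
    return power_mw * (latency_ms / 1000.0)           # E = P * t

# Example: a hypothetical Cortex-M4F drawing 10 mA at 3.3 V,
# with a 100 ms total latency reported by the Studio.
energy = energy_per_inference_mj(100.0, 10.0, 3.3)
print(round(energy, 2))  # 3.3 (mJ per inference)
```

Dividing your battery's energy budget by this figure gives a rough upper bound on the number of inferences per charge, before accounting for sleep and sensor power.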
Normal techniques to conserve power still apply to ML, so try to do as little as possible (do you need to classify every second, or can you do it once a minute?), be smart about when to run inference (can an external trigger, like a motion sensor, fire before you run inference on a camera?), and collect data in a lower-power mode (don't run at full speed when sampling low-resolution data, and see if your sensor can use an interrupt to wake your MCU rather than polling). Also see Analyze Power Consumption in Embedded ML Solutions.

What is the .eim model format for Edge Impulse for Linux?

See the extensive documentation page on our documentation portal.

How is the labeling of the data performed?

Using the Edge Impulse Studio data acquisition tools (like the serial daemon or data forwarder), you can collect data samples manually with a predefined label. If you have a dataset that was collected outside of Edge Impulse, you can upload it using the Edge Impulse CLI, the data ingestion API, the web uploader, enterprise data storage bucket tools, or enterprise upload portals. You can then use the Edge Impulse Studio to split your data into labeled chunks, crop your data samples, and more, to create high-quality machine learning datasets.

About the Author

Ivan Turasov is a seasoned Edge ML Solutions Engineer at Edge Impulse, where he works with Edge Impulse customers, helping them build ML-enabled products that go to market while ensuring that best practices of Edge ML and Edge Impulse are applied. He graduated with an MSc in Embedded Systems from Eindhoven University of Technology in the Netherlands.
The Evolution of Eclipse

Tom Mens\textsuperscript{1}, Juan Fernández-Ramil\textsuperscript{2,1} and Sylvain Degrandtsart\textsuperscript{1}

\textsuperscript{1}Institut d’Informatique, Université de Mons-Hainaut, Avenue du Champ de Mars 6, 7000 Mons, Belgium, {tom.mens | j.f.ramil}@umh.ac.be

\textsuperscript{2}Computing Department, The Open University, Walton Hall, Milton Keynes, MK7 6AA, U.K., j.f.ramil@open.ac.uk

Abstract

We present a metrics-based study of the evolution of Eclipse, an open source integrated development environment, based on data from seven major releases, from releases 1.0 to 3.3. We investigated whether three of the laws of software evolution were supported by the data. We found that Eclipse displayed continual change and growth, hence supporting laws 1 and 6. Six size indicators, out of eight, closely followed trend models: four were linear and two superlinear. We found evidence of increasing complexity (law 2) in only two indicators, out of five. At subproject level, size and complexity are not distributed uniformly, and subproject size can be modelled as a negative exponential function of the rank position. We encountered a range of different size and complexity trends across subprojects. Our approach and results can help in evaluating the future evolution of Eclipse, the evolution of other systems, and in performing comparisons.

1. Introduction

Real-world software systems, either proprietary or open source software (OSS), require fixes and enhancements, that is, evolution, since their first release and as long as they are used. The so-called laws of software evolution were proposed by Lehman [7] to provide a better understanding of this phenomenon over a software system’s lifetime, which often spans several years [10]. The laws originated from studies of proprietary software in the 70s [7]. They have been a topic of research ever since.
The laws offer a starting point for a theory of software evolution [8], a basis for justification of software maintenance good practice [9] and have been cited in textbooks (e.g., [13]). Many improvements in software processes and technology have occurred since the 70s, when the laws of software evolution were first proposed. Such advances include object-orientation, iterative and evolutionary processes and OSS. For example, a study of Linux, a popular OSS, by Godfrey and Tu [5] published eight years ago found superlinear growth, in apparent contradiction to some of the laws. It is not sufficiently well known to what extent contemporary OSS evolution follows the laws [4] and further empirical research is needed. In this paper, we present the results of our measurement-based study of the laws of software evolution for the popular Eclipse\textsuperscript{1} open source project. Eclipse is an interesting case because it is a large software system of approximately 2 million lines of code (LOC) at release 3.3 and with a huge user community. At the moment of performing this study, data on its evolution history were available for approximately a six-year time period. The focus of this research was to examine data in order to determine whether Eclipse supports three of the laws of software evolution. The results offer a basis to study future evolutions of Eclipse, and to compare its evolution to other OSS. This article is structured as follows: In Section 2 we briefly present three laws of software evolution for which we study the empirical evidence in Eclipse. Section 3 provides details about the Eclipse system and the data that we have extracted. Section 4 discusses the tools and measurements we used. Section 5 presents the analysis of the results at the global level, considering the Eclipse as a single evolutionary entity. Section 6 covers the analysis of empirical evidence at the level of “namespaces” (subprojects). Section 7 discusses the threats to validity. 
We compare our results with some earlier studies of other OSS in Section 8. Section 9 presents topics of future research in this area.

\textsuperscript{1}http://www.eclipse.org/

2. The laws of software evolution

Eclipse seems to fit well the definition of E-type software [7]. An E-type system solves a problem or addresses an application in a real-world domain, and the laws were proposed as a description of the evolution of this type of software. In this paper, we use measurements to characterise the evolution of Eclipse and explore the empirical support for three of the eight laws. Barry et al. [1] classified the laws into three broad groups: laws 1, 2, 6 and 7 are seen as related to the evolution characteristics of the software; laws 4 and 5 are linked to organisational and economic constraints; laws 3 and 8 are seen as meta-laws. Due to the limited effort available for data extraction and analysis, we initially focused on the first subset of laws (1, 2, 6 and 7), the ones related to characteristics of the evolving software. Later in our study, we excluded law 7 because of challenges in data collection, such as the need for an appropriate measurement of the evolving quality of the system as perceived by the users. A recent statement of the three laws that we studied is given below [9]:

Law 1: Continuing Change. An E-type system must be continually adapted else it becomes progressively less satisfactory in use.

Law 2: Increasing Complexity. As an E-type system is evolved its complexity increases unless work is done to maintain or reduce it.

Law 6: Continuing Growth. The functional capability of E-type systems must be continually increased to maintain user satisfaction over the system lifetime.

As in other OSS, Eclipse can be studied using various data such as the source code of each release, the bytecode for each supported platform for each release, the defect database Bugzilla\(^2\), version repositories, and mailing lists.
In our study, due to effort and time constraints, we focused on the Eclipse source code and the bytecode for Windows. We also considered data from the Bugzilla defect database. Our research question is the following: Having Eclipse’s code repository (source and bytecode) and its Bugzilla reports as data sources, what is the empirical support for laws 1, 2 and 6, by considering Eclipse as a single entity and by looking at its major components? 3. About Eclipse Eclipse is an open source, extensible, integrated development environment (IDE) and also an application framework (i.e., it can be used as a basis for other software systems). Eclipse is written in the popular object-oriented programming language Java. In addition, a small part of the Eclipse implementation requires specific code for each platform (e.g., Windows, Mac OS X, Linux) to improve the interoperability of Eclipse and its performance. We focused on the Eclipse SDK (i.e., Software Development Kit), which includes the Eclipse Platform, Java Development Tools (JDT), and the Plug-in Development Environment (PDE). Based on the data obtained from the Eclipse project website\(^3\) there are about 60 active “committers”. The majority of these contributors seem to be from IBM, the company that initially created the system. At the time of conducting this research, the release history data is available over a period close to six years, from release 1.0 in November 2001 to release 3.3.1 in September 2007. There are several types of releases available\(^4\). In this study we only studied the major releases of Eclipse. They are indicated in boldface in Table 1, together with their release date and the size difference of the downloadable .zip file (in megabytes) with respect to the previous release. We excluded the minor releases from our study since their small size increments suggest only a marginal contribution to the overall Eclipse functionality. 
A similar conclusion was reached when computing the incremental growth in terms of number of .java files, number of .jar files, number of compiled classes and LOC.

Table 1. Eclipse public releases: 3.3.1, 3.3, 3.2.2, 3.2.1, 3.2, 3.1.2, 3.1.1, 3.1, 3.0.2, 3.0.1, 3.0, 2.1.3, 2.1.2, 2.1.1, 2.1, 2.0.2, 2.0.1, 2.0, 1.0.

The Eclipse SDK represents nearly 2 million (more precisely, 1,988,767) LOC at the most recent release (3.3). At release 1.0, Eclipse counted only half a million LOC (to be precise, 506,252), so the size of Eclipse in LOC has almost quadrupled over the considered period. Each release can be downloaded as a single .zip file\(^3\) (e.g., eclipse-SDK-2.1.2-win32.zip) containing code and both user and programmer documentation. For our study we concentrate on the Java source code (in .java files) and the compiled code (in .class files that are bundled in .jar files). For studying the compiled versions of the Eclipse SDK, we used the ones for the Windows 98/ME/2000/XP platform. While the code is an important part of the .zip file, many other files are included for configuration, data storage and documentation.

\(^2\)https://bugs.eclipse.org/bugs/

\(^3\)http://www.eclipse.org/projects/project_summary.php, consulted on 26 May 2008 to obtain an up-to-date list of active committers to the Eclipse Platform, PDE and JDT, respectively.

\(^4\)http://download.eclipse.org/eclipse/downloads/build_types.html
To get an idea, for release 3.0 about 63% of the files were Java code (10,635 .java files out of a total of 16,816 files).

4. Measurements and tools

To quantify changes (law 1), we compared the code at consecutive pairs of releases. We used our own Python scripts to count files and file sizes, and to compute differences (additions, deletions, modifications) between releases using various Unix commands (diff, find, grep, sed). We considered five complementary “types” of complexity (law 2) and looked at six of its indicators. These include measurements of code quality provided by STAN, a commercial static code analysis tool [11], and defect data from Eclipse’s defect reports stored in the Bugzilla bug tracking system. Size (law 6) was assessed through eight size measurements at different levels of abstraction and granularity. Measurements of LOC were extracted by STAN; we derived the other size measurements manually by inspecting the Eclipse downloadable files. We used Microsoft Excel for plotting and trend analysis.

STAN performs structural analysis on compiled Java code, taking a set of .jar files as input. It computes program dependencies, calculates various types of measurements (e.g., size, cyclomatic complexity, coupling), and produces visualisations of dependency graphs at various levels of abstraction. The tool classifies all the measurements into three categories, comparing them to a set of thresholds: green (acceptable), yellow (referred to as “warnings”) and red (referred to as “errors”). These thresholds may be fine-tuned to specific domains or applications; in our study we used the default threshold values. For cyclomatic complexity, these thresholds are ‘> 15’ for yellow and ‘> 20’ for red. One of the complexity indicators that we considered, which we call quality issues, was calculated by adding the number of entities with yellow and red measurements.

5.
Results - global view

This section presents the results of the analysis of data extracted from the Eclipse SDK when looking at Eclipse globally, that is, as a single evolutionary entity. Unless stated otherwise, from now on in this paper, wherever we mention Eclipse, we refer to the Eclipse SDK.

5.1. Growth - global

Since Eclipse follows a well-defined plug-in architecture [3, 12], we started our study by looking at the growth of Eclipse in terms of architectural entities. In particular, we considered three different entities: plug-ins (the components, which are the basic unit of functionality in Eclipse [12]), features (a grouping mechanism for plug-ins), and subprojects. Subprojects are “namespaces” that store code based on a ‘prefix’ naming convention, e.g., the jdt subproject is built up from everything recursively contained in org.eclipse.jdt. Fig. 1 shows the growth trends of these architectural units over releases. We observe that the number of plug-ins has increased from 38 to 149 (292%). The features and subprojects have increased more slowly than plug-ins, in relative terms (only by 38%). Because the notion of feature was introduced in Eclipse at release 2.0, its trend starts at that release and not at 1.0. At a finer granularity (bytes), Fig. 2 shows the growth trend of Eclipse measured by the size of the .zip files that contain the compiled (byte-code) and source code as released. Fig. 1 displayed the release numbers on the x-axis, whilst Fig. 2 shows the release dates in format mm/dd/yy. Since the major releases of Eclipse have been made available at more or less equally spaced time intervals, both release sequence numbers and real time dates show roughly the same information.

\(^3\)http://archive.eclipse.org/eclipse/downloads

Therefore, the remaining plots in this
It is worth noting that the regularity in the timing of the major releases suggests that the Eclipse development team followed a well-planned release process.

![Figure 2. Size of downloadable .zip files (in megabytes) as a function of release dates.](image)

Fig. 3 shows similar growth trends to those of Fig. 2, this time measured as the number of .java files (NOF), number of compiled classes (NOC) and LOC. The growth in estimated LOC was computed by the STAN tool. The average growth has been 260 KLOC per major release. The relative growth in LOC between releases 1.0 (506 KLOC) and 3.3 (1,988 KLOC) is about 293%, very close to the relative growth in number of plug-ins (292%). Visual inspection of Figures 1 to 3 suggests that continuing growth is a dominant characteristic. In order to confirm this, we calculated the incremental growth between releases and checked whether it was positive, zero or negative. A large portion of positive increments would support the hypothesis that there is increasing growth. In total, out of 45 increments, one was negative (in features, from releases 2.1 to 3.0), six were zero (4 in subprojects and 2 in features) and the rest (38 out of 45, or 84%) were positive, providing support that Eclipse’s evolution has followed law 6 of “continuing growth”. A further question that we explored is what type of trend predominates. To answer it, we examined the growth trends of the eight size measures presented in Figures 1 to 3. Using Excel’s ‘trendline’ function, we fitted its five different models (2nd order polynomial, linear, exponential, logarithmic and power models) to each measurement. Second order polynomials have the property that the sign (positive or negative) of the second order term can indicate whether a trend is superlinear or sublinear [6]. This can give us insights about the evolving complexity, as explained in section 5.3. We used the coefficient of determination $R^2$ as the goodness-of-fit criterion.
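This kind of goodness-of-fit screening is easy to reproduce outside Excel. The sketch below is our own code, using an illustrative size series rather than the actual Eclipse data; it fits linear and quadratic models, applies the $R^2 \geq 0.9$ acceptance threshold, and prefers the linear model when its $R^2$ is within 0.01 of the best fit, as done in this study:

```python
import numpy as np

def r_squared(y, y_hat):
    """Coefficient of determination for a fitted series."""
    ss_res = np.sum((y - y_hat) ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    return 1.0 - ss_res / ss_tot

def select_trend(releases, sizes):
    """Fit linear and quadratic trends; apply the R^2 selection rules."""
    x = np.asarray(releases, dtype=float)
    y = np.asarray(sizes, dtype=float)
    fits = {}
    for name, degree in (("linear", 1), ("quadratic", 2)):
        coeffs = np.polyfit(x, y, degree)
        fits[name] = r_squared(y, np.polyval(coeffs, x))
    best = max(fits, key=fits.get)
    if fits[best] < 0.9:                       # not a good fit at all
        return "no trend", fits
    if best != "linear" and fits[best] - fits["linear"] < 0.01:
        best = "linear"                        # departure from linearity marginal
    return best, fits

# Hypothetical KLOC series over seven releases (not the Eclipse figures).
kloc = [506, 766, 1026, 1286, 1546, 1806, 1988]
trend, scores = select_trend(range(1, 8), kloc)
print(trend)  # linear
```

With a clearly curved series, such as a pure quadratic, the same function reports “quadratic” instead, since the $R^2$ gap to the linear model exceeds 0.01.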
We fitted the models using the release sequence numbers on the x-axis. When observing the trend line or curve superimposed on the actual data, for $R^2$ values lower than 0.9 the trendline did not appear to be a good fit. For this reason, we only recognised a trend model as a good fit when $R^2$ was equal to or greater than 0.9. For two competing models that were both a good fit, if the difference in $R^2$ between a linear model and the best model was less than 0.01, we report the trend as linear. This makes sense because, generally, if the $R^2$ difference is so low, the departure from linearity is marginal. Based on this, a trend model was identified in six out of eight indicators. For subprojects and features, the $R^2$ values were lower than 0.9 and, hence, no trend is reported. Within the six identified trends, four (NOF, NOC, LOC and size of the compiled code in MBytes) were linear, with $R^2$ ranging from 0.992 to 0.997. For the other two (plug-ins and downloadable source code in MBytes) the best fits were quadratic polynomials (superlinear), with $R^2$ values of 0.9712 and 0.9944, respectively. For these two indicators, exponential models provided slightly lower, but similarly good fits ($R^2$ values of 0.9706 and 0.9895, respectively). Our trend analysis suggests that Eclipse’s evolution not only conforms to law 6 (“continuing growth”), but also that, according to six out of eight growth indicators, the trends were sufficiently disciplined as to closely follow recognisable trend models.

### 5.2. Change - global

Fig. 4 presents the number of changes (i.e., additions, modifications and deletions) in .java files between four pairs of releases (2.0-2.1, 2.1-3.0, 3.0-3.1 and 3.1-3.2). It is difficult to apply diff when the same filename occurs more than once in a pair of releases. Moreover, we could not use the relative path as part of the filename because files can be moved across folders between releases.
Therefore, for measurement of the number of modified files, we excluded all files with filenames that appeared more than once in the same release. The percentage of excluded files varies between 18% and 30% of all files, so we make an underestimation with a maximum error of 30%. Fig. 4 shows the number of files with modifications to code only, as well as those files where the comments are also modified. Files with modifications to code only (‘code modified files’ in Fig. 4) represent between 75% and 90% of all modified files. In the figure we observe that release 3.0 represents the largest increment (number of additions) so far in the history of Eclipse. This was accompanied by the largest number of deletions, suggesting that 3.0 was also a restructuring release. Subsequently, the number of modified files reaches its maximum in release 3.1, reflecting fixes and other rework necessary after release 3.0. Values of modifications after release 3.1 decreased again.

![Figure 4. Added, changed and deleted files.](image)

As a summary, we conclude that Eclipse also seems to follow law 1 of “continuing change”. The highest number of added files occurred at release 3.0 (see Fig. 4). The number of modified files is always higher than the number of added files, with a peak of modified files at release 3.1.

### 5.3. Complexity - global

Complexity has possibly many dimensions and it is unlikely to find a single generally accepted definition for each of them. In this study, we explored six different ways to assess complexity, which correspond to five different “types” of complexity. Some of the presented types may be overlapping and other types may be defined. The considered types seemed compatible with the data that we were able to extract. Due to lack of space, the statement of each type below is short and we cannot explain their assumptions. There is no meaning attached to the numerical order in the listing:

1.
**Complexity as size:** given any program, a larger program is likely to be more difficult to understand or modify (assuming everything else constant). This type of complexity can be described as related to size through a monotonic function (e.g., \( \text{Complexity} = a \times \text{Size} \), where \( a \) is a positive number). Within this view, the observation of increasing size (e.g., in LOC) will support the hypothesis of increasing complexity of type 1.

2. **Complexity as the inverse of productivity:** as the software gets more complex, the implementation of any given enhancement is likely to be more difficult. If effort is constant, growth rate will decrease. One hypothetical example of this would be a software system that follows the relationship \( \Delta \text{Size} = b / \Delta \text{Complexity} \), where \( b \) is a positive number related to the level of effort. Under this view, under constant effort, a sublinearly increasing or stagnating \( (\Delta \text{Size} = 0) \) size will support the hypothesis of increasing complexity of type 2.

3. **Complexity as the number of possible interconnections:** increasing complexity of this type is likely to lead to an increasing impact of any additions on existing code. Such impact can be measured by the ratio between the number of additions and modifications. If type 3 complexity is increasing, this ratio will increase, that is, adding every new entity will trigger the change of an increasing number of existing entities [7].

4. **Complexity as likelihood to introduce defects:** increasing complexity of this type will manifest itself as an increasing number of defects introduced during its evolution. The observation of an increasing number of defects found, if other factors such as programming and testing effort remain constant, will suggest increasing complexity of type 4.

5.
**Complexity as code quality:** an increasing number of quality issues (or their density) identified by static program analysis will suggest increasing complexity of type 5. Quality issues are identified when a set of complexity-related measurements, including cyclomatic complexity, exceed some threshold.

As seen in section 5.1, overall, the size of Eclipse has increased over the six years considered. Hence, we can conclude that Eclipse’s type 1 complexity has been increasing. With regards to complexity of type 2, either a sublinearly growing size or a stagnated size would suggest that it is increasing. As also seen in section 5.1, six out of eight measurements display either linear or superlinear growth, with the other two not displaying a consistent trend. Therefore, there is insufficient evidence that complexity of type 2 has been increasing in Eclipse. In order to assess complexity of type 3, one possible measurement is the portion of system handled (PSH) [7]. This metric represents the ratio between the number of modified files and the total number of files (size). Fig. 5 presents this data for Eclipse. PSH decreases in 3 out of the 4 release pairs for which modifications were measured. Its overall trend is predominantly decreasing and for this reason PSH provides no consistent support for increasing complexity of type 3. To explore the evidence for increasing complexity of type 4, we extracted the number of defects reported in the Bugzilla defect repository for each of the Eclipse releases. The overall trend is given in Fig. 6. Out of seven releases, only 2 (releases 2.0 and 3.0) show an increase in the number of defects with respect to the previous release. There is no consistent increasing trend in the number of defects and, hence, no indication of increasing complexity of type 4.
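The PSH indicator itself is a one-line ratio per release pair. A minimal sketch, where the file counts are illustrative assumptions, not the measured Eclipse values:

```python
# Portion of system handled (PSH): modified files / total files,
# computed per release pair. All counts below are hypothetical.

def psh(modified_files: int, total_files: int) -> float:
    return modified_files / total_files

pairs = {
    "2.0-2.1": (4200, 9000),
    "2.1-3.0": (5200, 12000),
    "3.0-3.1": (6100, 14000),
    "3.1-3.2": (5000, 15000),
}
for pair, (modified, total) in pairs.items():
    print(pair, round(psh(modified, total), 3))
```

A predominantly decreasing PSH series, as observed for Eclipse, means each release touches a shrinking fraction of the system, which argues against growing type 3 complexity.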
In summary, only two (complexity as total size and as number of quality issues) out of five complexity indicators provided evidence of increasing complexity in Eclipse.

6. Results - subproject view

In this section we analyse the evolution of Eclipse at the level of subprojects. Fig. 8 shows the relative contribution of each subproject to the total size of the system over time. By and large, subprojects jdt and ui make up the largest part of Eclipse, accounting for about half of its total size. The figure also reveals that the relative size of the different Eclipse subprojects does not change much over different releases (the lines are roughly parallel). Another observation that can be made in Fig. 8 is that the size distribution of subprojects is not uniform: some subprojects are much larger than others. For example, in all studied releases the seven largest subprojects (32% of all the subprojects) account for more than 80% of the classes. Fig. 9 displays the subproject sizes (release 3.3), ranked in descending order, using the logarithm of the size in NOC for the y-axis. With regard to law 6 of "continuing growth", the size of a few subprojects dominates the size trends that we presented in section 5.1 for Eclipse as a single entity. It is interesting to notice that the data in Fig. 9 closely follows a negative exponential model ($R^2 = 0.962$), which corresponds to the linear trendline added to the figure. For the other releases, we fitted negative exponential models, resulting in $R^2$ values of 0.968, 0.845, 0.856, 0.888, 0.911 and 0.928 for releases 1.0 to 3.2, respectively. $R^2$ was higher than 0.9, indicating a reasonably good fit, in three of these cases. It remains an open question to explain why Eclipse's subproject sizes behaved as described and, in particular, why a negative exponential model is a good fit in 4 of the 7 studied releases.

6.1. Growth - subprojects

As can be seen from Fig.
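The negative exponential fit described above amounts to a linear regression of the logarithm of subproject size against rank (the trendline on Fig. 9's log-scale axis). A minimal sketch, with invented size figures and no claim about the paper's actual fitting tool:

```python
import math

def fit_negative_exponential(sizes):
    """Fit size ~ A * exp(-k * rank) by least squares on log(size).
    Returns (A, k, r_squared); `sizes` must be rank-ordered, descending."""
    xs = list(range(len(sizes)))
    ys = [math.log(s) for s in sizes]
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sxx = sum((x - mean_x) ** 2 for x in xs)
    slope = sxy / sxx
    intercept = mean_y - slope * mean_x
    # R^2 computed in log space, matching a linear trendline on a
    # logarithmic y-axis as in Fig. 9.
    ss_res = sum((y - (intercept + slope * x)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - mean_y) ** 2 for y in ys)
    r2 = 1 - ss_res / ss_tot
    return math.exp(intercept), -slope, r2

# Hypothetical subproject sizes (NOC), ranked in descending order.
sizes = [4200, 2600, 1700, 1100, 700, 460, 300, 200]
A, k, r2 = fit_negative_exponential(sizes)
print(f"A={A:.0f}, k={k:.3f}, R^2={r2:.3f}")
```

An $R^2$ near 1 from this routine would correspond to the "reasonably good fit" threshold of 0.9 used in the paper.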
1, the number of subprojects increased by approximately 38%, from 16 subprojects at release 1.0 to 22 at release 3.3. Changes at subproject level are observed in releases 2.0, 3.0 and 3.3 only. In release 2.0, two subprojects were deleted (scripting and webdav), two were newly introduced (platform and tomcat), and one was renamed and reworked significantly (vcm became team). In release 3.0, four new subprojects were introduced (ltk, osgi, search2, text), providing additional evidence that this release was a major restructuring. Finally, release 3.3 added two new subprojects (equinox and jsch). Fig. 10 shows the size of the 15 largest subprojects over releases, measured in LOC. We calculated the relative growth in LOC from release 1.0 to release 3.3. The value was positive in 20 instances, providing evidence for law 6 of "continuing growth" at subproject level. The relative growth could not be calculated for two subprojects that were introduced at release 3.3 and have a single size measurement. Three other subprojects were present only at release 1.0 and then either removed or became another subproject. We fitted five different trend models (linear, quadratic, exponential, power and logarithmic) to the growth data from 20 subprojects. Each of the other 5 subprojects had one data point only and was excluded from this analysis. We followed the same rules as we did for fitting models to the global growth (section 5.1). We found that, within these 20 subprojects, the growth of 15 subprojects can be modelled with a good fit, with $R^2$ values in the range 0.923 to 0.998. Unfortunately we were not able to extract data on the number of modifications for each subproject; this is an item for further work. However, additions to existing code often lead to modifications (e.g., in order to link the new functionality to the existing code in some suitable way).
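The relative-growth bookkeeping described above can be sketched as follows. The subproject names and LOC figures are invented placeholders, not the measured Eclipse values:

```python
# Relative LOC growth from the first to the last release in which each
# subproject is present; subprojects with a single measurement are skipped.

def relative_growth(series):
    """Return {name: (last - first) / first} for subprojects with at least
    two size measurements; `series` maps name -> list of LOC per release."""
    out = {}
    for name, sizes in series.items():
        if len(sizes) < 2:
            continue  # e.g., introduced in the last studied release
        first, last = sizes[0], sizes[-1]
        out[name] = (last - first) / first
    return out

loc = {  # hypothetical LOC per studied release
    "jdt":  [300_000, 420_000, 510_000],
    "ui":   [250_000, 330_000, 400_000],
    "jsch": [8_000],                      # single measurement: excluded
}
growth = relative_growth(loc)
positive = [name for name, g in growth.items() if g > 0]
print(sorted(positive))  # subprojects whose growth supports law 6
```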
For this reason, the evidence in support of law 6 of "continuing growth" can also be seen as providing indirect support for law 1 of "continuing change".

6.2. Complexity - subprojects

Due to the limited effort available for data extraction, we could only examine complexity types 1, 2, 4 and 5 (see section 5.3) at the subproject level. The evidence of increasing size in at least 20 out of 25 subprojects (positive relative growth), given in section 6.1, indicates that type 1 complexity has increased in all 20 subprojects that evolved (i.e., were present during more than one release). There is evidence that complexity of type 2 has increased for 4 subprojects only (the ones with a sublinear growth trend model). To find out whether defect reports are an indicator of subproject complexity (type 4), we extracted data similar to that of Fig. 6 for each subproject. We encountered an additional difficulty: the defect reports produced by Bugzilla cannot be mapped in a straightforward, one-to-one way to our notion of subprojects. In Bugzilla, some of the subprojects are classified as products (in particular equinox, jdt and pde), while most of the other subprojects are classified as components of the Platform product. Although this may affect our results, Fig. 11 seems to indicate the same kind of behaviour for each of the subprojects as we encountered for Eclipse as a whole. Although clearly some subprojects have had more defects than others, we again observe a peak in releases 2.0 and 3.0. For the same reasons given when discussing Fig. 6 in section 5.3, no evidence can be found in Fig. 11 that complexity of type 4 has been increasing. In order to evaluate complexity of type 5, we computed the equivalent of Fig. 7 at subproject level, by counting the number of issues reported by STAN per release for each subproject. This is shown in Fig. 12. For 11 subprojects, the number of quality issues is always increasing over releases.
Five subprojects have only one release interval for which the number of quality issues is constant or decreasing, 3 of which occur between releases 3.2 and 3.3, the most recent pair studied. We fitted models as we did in section 6.1. Increasing trends were identified for 14 subprojects, with $R^2$ values in the range 0.928 to 0.996 (6 were superlinear, 4 linear, and 4 sublinear). In 6 subprojects no trends were identified according to our criterion ($R^2$ was lower than 0.9). The remaining 5 subprojects had only one data point, so no trend could be found. In Fig. 13 we show the distribution of STAN issues across subprojects. As was the case for the size distribution (cf. Fig. 8 and Fig. 9), a negative exponential model best explains the distribution (when compared to linear, quadratic polynomial, power, and logarithmic models). In order to check whether subproject size, class size and the number of quality issues were related across subprojects, we did the following: (i) first, we computed the ratio of LOC to number of classes for each subproject, assuming that a higher average number of LOC per class indicates a higher complexity (consistent with the type 1 view); (ii) second, we computed the ratio of STAN issues to number of classes for each subproject (an indicator related to complexity of type 5). In both cases, we counted in how many releases the ratio was above average. We excluded all subprojects that were available in one release only. The results are summarised in Table 2. The values in the table are fractions $A/B$, where $B$ is the number of releases in which the subproject is present, and $A$ the number of times the value is above average. For example, ant has a value of 2/7 for "issues/NOC", indicating that the ratio is above average in 2 out of 7 releases. Observe that 2 out of 3 of the biggest subprojects (ui and pde) do not appear on this list because they are never above average.
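The above-average bookkeeping behind Table 2 can be sketched like this. Subproject names and per-release ratios below are invented for illustration:

```python
from fractions import Fraction

def above_average_fractions(ratios_per_release):
    """For each release (a dict subproject -> ratio, e.g. issues/NOC),
    count how often each subproject's ratio exceeds that release's average.
    Returns {name: Fraction(times_above_average, releases_present)},
    excluding subprojects present in only one release."""
    above = {}
    present = {}
    for release in ratios_per_release:
        avg = sum(release.values()) / len(release)
        for name, ratio in release.items():
            present[name] = present.get(name, 0) + 1
            if ratio > avg:
                above[name] = above.get(name, 0) + 1
    return {name: Fraction(above.get(name, 0), n)
            for name, n in present.items() if n > 1}

releases = [  # hypothetical issues/NOC ratios for three releases
    {"swt": 0.9, "core": 0.7, "ant": 0.2, "ui": 0.3},
    {"swt": 1.0, "core": 0.6, "ant": 0.4, "ui": 0.2},
    {"swt": 0.8, "core": 0.9, "ant": 0.1, "ui": 0.3},
]
for name, frac in sorted(above_average_fractions(releases).items()):
    print(name, frac)
```

With these invented ratios, swt and core come out above average in every release, in the spirit of the "3/3"-style fractions reported in Table 2.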
In contrast, three of the smaller subprojects (swt, core and jdi) appear to have a significantly higher relative complexity when looking at Table 2. In future work we intend to investigate further why this is the case.

7. Threats to validity

This section lists specific threats to the validity of our results. These are in addition to the more general threats to validity that one may encounter in similar studies [4]. Eclipse is a large system and there are different ways of measuring it. For practical reasons, we focused on the Eclipse SDK for Windows only. This explains why there are differences in measurement values (e.g., in the number of plug-ins) with respect to other studies (e.g., [14], which focused purely on Eclipse's architecture). Law 2 states that "complexity increases unless work is done to maintain or reduce it". This means that, to assess it rigorously, one should also measure the amount of anti-regressive (i.e., complexity control) work [7] (e.g., refactorings which have led to a decrease in complexity). Unfortunately, we could not do this due to the sheer size of Eclipse and our effort limitations. In future work, refactoring identification tools (e.g., [15]) could be helpful. The five types of complexity (section 5.3) involve some assumptions that we were not able to check due to lack of data or the effort needed to extract it. For example, a link between increasing complexity and a sublinearly increasing size trend (type 2) requires that the system has been evolved at a constant level of effort. Changes in effort level could trigger changes in growth and growth rate, even stronger than those potentially related to changes in complexity. Similarly, the use of defect data as an indicator of complexity (type 4) assumes that the testing effort is constant, since more testing may lead to more defects found, despite the lack of any significant changes in complexity.
When using STAN to compute metrics for the whole Eclipse SDK, computer memory requirements forced us to restrict ourselves to the class-level metrics only, thereby excluding all data that could be gathered at the level of methods and fields. In practice, this means that we were not able to compute and study three well-known coupling and cohesion metrics: coupling between object classes (CBO), response for a class (RFC) and lack of cohesion in methods (LCOM) [2].

8. Related work

A partial survey of empirical studies of OSS related to the laws is reported in [4]. We briefly discuss below related work that is particularly relevant to our research. Godfrey and Tu [5] studied the growth of Linux and found superlinearity in its growth (size in LOC). In our study, we also found superlinear growth in Eclipse for size in plug-ins and in MBytes (downloadable source code). As seen in section 5.3, superlinearity does not support an increase in type 3 complexity. However, complexity has other dimensions (cf. section 5.3) and, in general, superlinear growth can be seen as an indicator of increasing type 1 complexity. In particular, for Linux, the frequent addition of device drivers by a large community of contributors can explain the observed increase in growth rate. For the evaluation of complexity, device drivers should be considered "external" to the studied system. Herraiz et al. [6] studied the evolution of 13 OSS projects, finding superlinear growth in 6 of them, linear growth in 4 and sublinear growth in the remaining 3 systems. That study looked only at size (in number of files and LOC) at the global level, finding similar results using either of the two measures. In our study we looked at a single system, but used a larger number of metrics and considered evolution at both the global and subproject level. Xing and Stroulia [15] investigated the structural evolution of Eclipse to find out which of the changes were due to refactorings.
Their focus was different from ours, oriented towards detailed change information at the level of classes, interfaces, methods and fields. They only looked at the jdt subproject, which accounts for about one third of the total size of Eclipse (see Fig. 8). In addition, only the differences between 3 pairs of Eclipse releases were investigated (namely 2.0-2.1, 2.1.3-3.0, and 3.0.2-3.1). Wermelinger et al. [14] studied the evolution of Eclipse to find out to what extent it complies with architectural design principles that have been argued to impact software maintainability. To this end, they analysed the Eclipse architecture at the level of plug-ins. Our goal was not to study the architectural evolution of Eclipse, but to assess, more generally, whether Eclipse conforms to three of the laws of software evolution. However, our approach complements the research of these authors in several ways. First, we relied on different data sources (i.e., source code, compiled code and bug reports) for our analysis. Second, our analysis was performed at a different level of granularity (mainly subprojects, classes and LOC).

9. Conclusions and further work

In this paper, we studied data from the popular open source Eclipse IDE, reflecting about six years of its evolution. We examined empirical evidence for three of the laws of software evolution, both at the global and "namespace" (subproject) level. We looked at the behaviour of about 17 indicators for Eclipse as a single entity and around 7 indicators at subproject level. At the global level, Eclipse's data supports laws 1 and 6; law 2 is only partially supported. At subproject level, we observed that only a few of the 25 subprojects concentrate most of the code. A negative exponential model seems to capture well the relationship between size and size ranking. We identified a variety of size and complexity behaviours at subproject level.
Our approach and results can help in the study of Eclipse's further evolution and in making comparisons to other IDEs (e.g., NetBeans) and other OSS in general. The study we report in this paper can be extended in different ways, some of which have already been mentioned. One natural extension of this work is to evaluate whether Eclipse's evolution is consistent with the remaining five laws (laws 3, 4, 5, 7 and 8). For this it will be necessary to extract and analyse data from other data sources such as versioning systems (CVS and Subversion repositories), change request reports and community mailing lists. Another question for further research is how the Eclipse developer community evolves and how it relates to the technical evolution characteristics of Eclipse itself. Currently, there is a lack of a generally accepted set of indicators, measurements and data analysis techniques to evaluate the laws of software evolution. In order to achieve this, further work is needed, including comparative analysis of different possible indicators and approaches. Since Eclipse follows a plug-in architecture, it would be helpful to study the evolution of the "external" Eclipse plug-ins (in terms of lifetime, quality, popularity, effort and so on) and how this relates to the evolution of Eclipse itself. In particular, it would be interesting to assess the evolutionary impact of a plug-in architecture on the quality and evolvability of the "ecosystem" made up of the core Eclipse and the many external plug-ins. Finally, further work will be required to establish whether our present findings and approach can be generalised, by comparing the evolution of Eclipse to other systems that have similar characteristics (e.g., other OSS, other IDEs, other systems that follow a plug-in architecture).

Acknowledgements. Drs M. Wermelinger and Y. Yu shared with us their insights about Eclipse. Israel Herraiz brought to our attention the topic of asymmetrical distributions in software size.
Yann-Gaël Guéhéneuc provided helpful comments on an early draft of this paper. We thank C. Beck for clarifications about STAN. Support from the F.R.S.-F.N.R.S. through postdoctoral scholarship 2.4519.05 to the research stay of one of the authors (JFR) at UMH is gratefully acknowledged. References
An Anatomy of Security Conversations in Stack Overflow

© 2019 IEEE. Accepted Manuscript.

Tamara Lopez*, Thein Tun*, Arosha Bandara*, Mark Levine†, Bashar Nuseibeh*‡ and Helen Sharp*
*School of Computing & Communications, The Open University, Milton Keynes, UK
†Department of Psychology, University of Exeter, Exeter, UK
‡Lero - The Irish Software Research Centre, University of Limerick, Limerick, Ireland
Email: *firstname.lastname@open.ac.uk, †firstname.lastname@exeter.ac.uk, ‡firstname.lastname@lero.ie

Abstract—As software-intensive digital systems become an integral part of modern life, ensuring that these systems are developed to satisfy security and privacy requirements is an increasingly important societal concern. This paper examines how secure coding practice is supported on Stack Overflow. Although there are indications that online environments are not robust or secure, they are used by large numbers of developers. Findings demonstrate that developers use conversation within the site to actively connect with and tend to security problems, fostering knowledge, exchanging information and providing assistance to one another.

Index Terms—secure software development, collaborative environments, empirical studies

I. INTRODUCTION

The pervasive adoption of digital technologies across many aspects of daily life means that software-intensive systems are an integral part of how we live. This has extended the social obligation of governments to provide security for their citizens to include cybersecurity [1], making this a key area of concern for modern society.
There is a growing list of cybersecurity tools, guidance, training materials and case studies, yet breaches continue to occur. Indeed many recent breaches, such as those experienced by Equifax [2] and the Illinois State Board of Elections [3], exploit known vulnerabilities in software systems. So what is going on? Is the problem that developers don't know enough, or that they don't care, or that programming languages lack suitable security features? Different theories exist, but in other contexts developers do ask each other for help and learn from each other, such that knowledge grows within the community. Security is, in part, a social phenomenon. Workers bring to the desk a degree of awareness about security formed on the job and in wider engagement with the world [4]. They exhibit a sense of responsibility toward security and their organisations [5]. The ideal within organisations is to achieve a "security culture", in which behaving securely is an implicit part of behaviour [4]. A range of voices and skills contribute to this process: individuals with different levels of commitment to being secure [6], including those that have basic security awareness and those who are fully committed, security "champions" [7]. These views on security lead to questions about what security is within software development, and in the context of this study, within Stack Overflow. Do developers who participate on the site view security as a duty, or as something different? Are values associated with secure coding practice? Is security something to comply with, or to champion? This study looks at how developers talk to each other about security in Stack Overflow, and hence how understanding of security and secure practices is developed and disseminated among practitioners in this online environment.
It is part of a larger program of research that is investigating ways to initiate and sustain secure software culture using established frameworks of personal motivation and team culture [8], [9]. Within the program, this is the second study examining Stack Overflow. The first study examined how developers talk to one another in a set of comment streams for questions given the “security” tag in Stack Overflow. The prior analysis suggested that talk about security within Stack Overflow includes information about technical solutions to programming problems, but also statements about personal values and attitudes like responsibility, trust, and fear [10]. This report examines interactions within Stack Overflow accepted answer comment streams for the same set of data. Taking an ethnographic approach [11], the study asks: How do developers on Stack Overflow engage with one another when dealing with issues related to security? II. BACKGROUND Stack Overflow is a question and answer site in which developers can ask questions about programming problems they are solving, and get answers. One of several Q&A sites within the Stack Exchange family, the site was founded in 2008 by Jeff Atwood, who compared the site to other websites that invite public participation, noting that it is a resource “by programmers, for programmers.” [12]. A social learning environment [13], the site is part of a new wave of social media that have given rise to the social programmer [14]. Recent studies within software engineering have examined individual Stack Overflow channels, examining how knowledge is shared and formed within the R channel [15], and finding evidence for differences in use between this environment and other communication channels such as mailing lists [16]. In a qualitative analysis, Nasehi et al. asked what makes a good question [17]. 
Among on-line sources, Stack Overflow is reported to be the most popular source for learning to code, even among developers who have computer science degrees1. In 2017, 90% of Stack Overflow survey respondents reported finding an answer on the site to solve a coding problem2. In the survey from 2018, almost 60% of respondents identify as back-end developers and 81% of the professional developers have coding as a hobby. Also, 87,450 respondents out of 98,855 are professionals. Roughly two thirds of survey respondents reported that they visit the site at least once a day. However, the number of active participants is much smaller: slightly more than half report participating in streams less than once per month or not at all. Characterised within software engineering research as "one-day flies", possible reasons for the lack of ongoing activity in this user group may be the quality of their original post, negative feedback they received in response to it, or efforts to "game" their reputation on the site [18]. However, it is also possible that the database has grown so large that many users are able to find answers to their questions on the site without posting, suggesting that the user community of Stack Overflow contains a number of "legitimate peripheral", rather than active, participants [19]. A final explanation for peripheral participation may lie in the nature of the tasks that developers need to solve. In a study examining why developers have trouble using cryptography APIs, participants reported that they don't need cryptography very often in their daily work. They are, instead, typical application developers who only sometimes need cryptography [20, p.938]. Gamification features encourage developers to participate, promising status and recognition within the on-line community, two known motivators of developers in workplace environments [8] and online [21].
Links between helping behaviour and reputation among developers have been established in office-based software development environments [22] and in early investigations examining connections between willingness to contribute in on-line environments and social capital [21]. A number of other workplace motivators that might bring developers to Stack Overflow or drive them to engage have been identified, including a need for social connection, peer interaction, and identification with the task [23]. The Stack Overflow developer survey from 2016 supports these findings. In 2016, 42,134 responses were given to a question about motivation3. Developers indicated that they used the site to get help for tasks on the job (76.0%), but also because they love to learn (61.9%), to give help to others (46.1%) and to communicate with others “like me” (17.9%). Developers generally regard Stack Overflow as useful with answers that are of high quality [24]. However, within usable security research, it has been shown that the code samples taken from posts relating to security may not be as robust or correct as other information sources like books and vendor supplied documentation [25]. 1) Asking and Answering Questions: The Stack Overflow community regulates activity on the site through extensive guideline documents and discussion about expected behaviour within pages dedicated to questions and answers about Stack Overflow operations. A developer who has a question or an answer must read a list of advice before submitting a post. Users are encouraged to improve their posts before submitting, to search within archives before posting, and to be specific and provide details and context that will make answering easier. When asking a question, a developer must accept this advice by clicking a tick-box before seeing a page with a form. 
The form page offers additional advice in a “How to Ask” sidebar that notes that “we prefer questions that can be answered, not just discussed.” Developers who want to learn more about asking questions can click a link that leads to a longer page with information4. The site also sets guidelines for how participants should behave, urging writers to ask something that is “relevant” to the larger community and asking developers to be open to suggestions or answers that are different than they anticipated5. 2) The Role of Comments: Anyone can ask or answer a question, but to add a comment on a different user’s post, a developer must have 50 reputation points. Comments are limited to 600 characters. They are described as “second-class citizens” to the question and associated answer posts that are the main sources of information on the site6. Comments are intended to be temporary, conceived of as “post-it notes” on the question or answer they support, that can be deleted soon after posting, or by moderators within the site7. In practice, many comments persist for years after they are given. Their management within the site is difficult to contain and regulate. The community debates privileges that are or should be associated with comments, why and how they should be edited, with posts dedicated within the help site to the “bad” habits of people who delete comments8, or who answer questions within comments9. Stack Overflow posts develop over time, and comment streams are a part of this process. Though most edits take place soon after the posts are created, a link has been established between ongoing edits to posts and commenting activity, and between commenting activity and post edits [26]. This relationship between editing and commenting activity suggests that something about the interaction they comprise is valuable. 
Interactions among Stack Overflow users that take place within comment streams and between posts and commenting streams are the area of investigation for this study.

Footnotes:
1 https://research.hackerrank.com/developer-skills/2018/
2 https://insights.stackoverflow.com/survey/
3 https://insights.stackoverflow.com/survey/2016#community
4 https://stackoverflow.com/help/how-to-aske
5 https://stackoverflow.com/questions/ask/advice
6 https://meta.stackexchange.com/tags/comments/info
7 https://meta.stackexchange.com/questions/19756/
8 https://meta.stackexchange.com/questions/19756/
9 https://meta.stackexchange.com/questions/4217/

III. Method

The ethnographic method is used to study people's actions and accounts of actions [11]. The method allows researchers to develop understanding about what practitioners working in socio-technical environments do and why they do it. The analytic stance allows researchers to consider experience from the perspective of the insider, in this case individual users of Stack Overflow. Ethnographic research can be participatory or non-participatory. Non-participatory researchers observe people in settings but do not take part in activities [11]. This approach is unobtrusive, making it possible to see actions unfold as they do under normal circumstances. Stack Overflow is a naturalistic environment, a place that developers regularly use or consult in daily practice. It is also an environment in which talk is unscheduled [27], that is, not held within formal processes or to meet project constraints. Unscheduled talk is integral to software development. Conversations between developers include stories about past experiences, but also provide narratives in the midst of practice [28] that workers use to develop confidence, and to learn [29]. Through talk, developers generate understanding of what software is and needs to be. This kind of "code talk" is often serendipitous, but lends structure to decisions about programming that will be undertaken at the desk [27].
The key in this study has been to identify features of talk about security that figure into “common-sense knowledge” [29]. To do this, principles of computer mediated discourse analysis [30] were used to isolate and catalogue features that characterise Stack Overflow as a communication medium in which messages are posted, and to examine in more detail the social or situational factors that shape interactions about security. This study intends to strengthen investigations into the social and human aspects of software engineering, asking: **How do developers on Stack Overflow engage with one another when dealing with issues related to security?** This question relates to two research aims that are addressed in this study by examining the nature of interactions between developers. 1) To understand more about the security practices of developers who are not security specialists. 2) To understand what security “is” within the broader Stack Overflow community. IV. Data Selection Stack Overflow encourages participation through features that reward developers with points and badges when their posts are “voted up.” Having a higher reputation grants access to different opportunities for contribution. A search of questions within the meta help site and queries in the data explorer [10] made it apparent that reputation and status are important to developers on the site. For example, the meta help site for Stack Overflow consistently lists “How does reputation work” as one of the most frequently asked questions. The data explorer shows numerous queries that users have posted to find how their reputation compares to others, or how far they are from achieving a higher status. Threads were selected that are perceived within the Stack Overflow community to be valuable, as indicated through scoring features. 
Data associated with the twenty highest scored questions given the tag “security” were extracted from the hosted version of the Stack Exchange data explorer data dump of 14 January 2018. As reported in the prior study [10], data were selected using the following criteria: 1) Evident need. Top-scored questions and accepted answers were chosen to form the set. Questions indicate evidence of a need to write secure code, but with gaps in knowledge or understanding. 2) Non-specialists. To meet the guiding aim within the overarching project, data were drawn from the general Stack Overflow site rather than the specialised Information Security Stack Exchange site. 3) Stable data. Highest scored postings correspond to the list of Askers given for All Time. These posts are conducive to analysis, as they are less active than recent top rated posts. V. Analysis In the prior study, the set of 20 questions and 137 comments made about those questions were catalogued and given codes reflecting three broad dimensions: security advice and assessment, values and attitudes, and community involvement. Within the current analysis, the set of 20 accepted answers and 364 comments associated with the answers were catalogued to identify in more detail features of participation, and to isolate interactions relating to security. The comments were examined in two phases, described in the sections that follow. A. Phase I Analysis began with cataloguing to mark features of the messaging environment [31]. Commenting in Stack Overflow is asynchronous and so details of the timing of messages in relation to one another were noted, as were indications that messages were deleted, patterns of interaction within and between question and answer streams, naming and addressing techniques and quoting. This analysis also established a broad purpose for the comment, using codes developed within the preliminary study. 
In addition, the profile pages within Stack Overflow and Stack Exchange for each developer who answered a question and those for a subset of commenters were consulted. A comparison of activity within comment streams for questions and answers revealed three distinct characteristics. In contrast with findings given in the first study [10], interactions that occur within the answer comment streams were found to contain less evidence of proclamations or principles about what should be done in relation to security, and less amplification of risks and fears around security. They include more detailed information about how specific features of languages or tools work. In addition, a higher proportion of commenters within the answer stream address their comments to specific users. Within the answer stream there are also fewer indications within comments of tone or register [31] that are critical, sarcastic, or ironic. Finally, looking at commenting participation across both sets of comments, surprisingly few people were found to be active across both the question and answer streams (see Figure 1). The most likely person to comment in both streams is the Question Asker (see Figure 2). B. Phase II Analysis in phase II was restricted to examination of answer commenting streams. This was done in two parts. First, interactions were identified and catalogued. Next, the internal structure of messages was examined. 1) Interactions: To identify interactions, each comment was examined to identify to which stream the comment was directed and to whom it was addressed. Pairs and exchanges were identified using evidence that users negotiated turn-taking and maintained cross-turn coherence [30] within individual posts.
Coherent interactions were indicated when: 1) Participants consistently addressed or quoted each other in their comments 2) Comments persisted: there was no evidence of deletion, and the users who created the comments remain members of Stack Overflow, and 3) Comments were adjacent within streams, posted at close intervals in time to one another, or used public names or quoting after time passed to unambiguously indicate a relationship to a prior comment. Interactions of four kinds were identified within the answer comment streams. 1) Pairs. Most interactions form pairs between two individual users rather than extended exchanges with one or more users. 2) Three-part exchange. People who left two comments frequently participated in a single three-part exchange with one other person. Exchanges include an initiating comment, a response, and a follow-up comment that confirms understanding, provides thanks, apologises for or retracts a statement, or otherwise closes the interaction in some way. 3) Multi-part exchange. Often between two people, characteristic exchanges of this type include challenges or a series of questions and responses. 4) Broadcasts. Within broadcasts, multiple developers chime in on a single topic. In both instances within this set, broadcasts are used to situate the security problem within time, indicating how companies handled license key generation in the past (Q8) or noting browser updates over time (Q19). 2) Structure and Purpose: Assigning a single code to indicate purpose or intent to comments is, in many cases, not possible. Within a single comment, developers often convey more than one piece of information. They might offer a suggestion for an alternative solution, while at the same time indicating that they are not confident, and need help. Many comments are similar, and reflect patterns of moves or schema identified in other studies examining asynchronous communication.
For example, messages sent to academic mailing lists have been found to commonly follow three moves: a reference is made to an earlier message, a view is expressed, and an appeal is made to other participants [32]. Messages in the set examined here also often have a similar structure. Moves were examined to understand the language developers use. TABLE I
<table>
<thead>
<tr> <th>Code</th> <th>Asked</th> <th>Answered</th> <th>Accepted</th> <th>Question Comments</th> <th>Answer Comments</th> <th>Tags</th> </tr>
</thead>
<tbody>
<tr> <td>A2</td> <td>16.1.12</td> <td>16.1.12</td> <td>16.1.12</td> <td>12</td> <td>26</td> <td>java; string; passwords; char</td> </tr>
<tr> <td>A3</td> <td>26.7.11</td> <td>26.7.11</td> <td>27.7.11</td> <td>0</td> <td>19</td> <td>hash; internals; bcrypt</td> </tr>
<tr> <td>A4</td> <td>2.7.11</td> <td>2.7.11</td> <td>2.7.11</td> <td>1</td> <td>22</td> <td>authorization; authentication</td> </tr>
<tr> <td>A5</td> <td>21.4.11</td> <td>21.4.11</td> <td>31.7.13</td> <td>17</td> <td>14</td> <td>php; mysql; sql-injection</td> </tr>
<tr> <td>A6</td> <td>9.2.11</td> <td>9.2.11</td> <td>10.2.11</td> <td>6</td> <td>13</td> <td>encryption; hash; cryptography</td> </tr>
<tr> <td>A7</td> <td>15.8.10</td> <td>26.8.11</td> <td>17.9.12</td> <td>0</td> <td>12</td> <td>oauth; access-token; refresh-token</td> </tr>
<tr> <td>A8</td> <td>8.6.10</td> <td>16.6.10</td> <td>5.11.10</td> <td>9</td> <td>17</td> <td>cryptography</td> </tr>
<tr> <td>A9</td> <td>19.4.10</td> <td>19.4.10</td> <td>19.4.10</td> <td>5</td> <td>50</td> <td>javascript; json; ajax</td> </tr>
<tr> <td>A10</td> <td>17.2.10</td> <td>28.2.10</td> <td>1.3.10</td> <td>18</td> <td>25</td> <td>password-encryption; password-storage</td> </tr>
<tr> <td>A11</td> <td>4.2.09</td> <td>4.2.09</td> <td>10.9.15</td> <td>0</td> <td>49</td> <td>windows</td> </tr>
<tr> <td>A12</td> <td>30.12.08</td> <td>30.12.08</td> <td>30.12.08</td> <td>7</td> <td>49</td> <td>php; passwords; hash; protection</td> </tr>
<tr> <td>A13</td> <td>1.12.08</td> <td>1.12.08</td> <td>15.3.15</td> <td>9</td> <td>50</td> <td>validation; sql-injection</td> </tr>
<tr> <td>A14</td> <td>9.10.08</td> <td>9.10.08</td> <td>6.6.09</td> <td>1</td> <td>4</td> <td>post; encryption; https; get</td> </tr>
<tr> <td>A15</td> <td>25.9.08</td> <td>25.9.08</td> <td>1</td> <td>17</td> <td>17</td> <td>php; pdo; sql-injection</td> </tr>
<tr> <td>A16</td> <td>24.9.08</td> <td>24.9.08</td> <td>24.9.08</td> <td>9</td> <td>27</td> <td>php; xss; sql-injection; user-input</td> </tr>
<tr> <td>A17</td> <td>18.9.08</td> <td>18.9.08</td> <td>19.9.08</td> <td>3</td> <td>17</td> <td>php; database</td> </tr>
<tr> <td>A18</td> <td>17.9.08</td> <td>17.9.08</td> <td>17.9.08</td> <td>5</td> <td>15</td> <td>javascript; performance; eval</td> </tr>
<tr> <td>A19</td> <td>28.8.08</td> <td>28.8.08</td> <td>28.8.08</td> <td>6</td> <td>15</td> <td>browser; autocomplete; passwords</td> </tr>
<tr> <td>A20</td> <td>11.8.08</td> <td>11.8.08</td> <td>11.8.08</td> <td>1</td> <td>17</td> <td>wcf; rest; authorization; rest-security</td> </tr>
</tbody>
</table>
The questions were asked between August 2008 (Q20) and December 2012 (Q1). Seventeen of the questions remained active in 2017 and 2018. All questions except three (Questions 3, 7 and 11) had at least one comment given about the question. Many of the answers were accepted within one month of being given; however, some were not accepted until years after being asked. This discrepancy in dates may reflect that answers can lose or gain accepted status within the community over time. The set includes issues that are several years old. VI. FINDINGS This section provides a structured look at how the community of Stack Overflow operates in answer comment threads that have a security focus, and includes a qualitative examination of how developers within the threads describe security to one another, how they display understanding about security, and the way they apply secure practices to programming tasks. A. Posts The list of 20 accepted answers is summarised in Table I.
The table indicates dates for question and answer posts, the date the answer was accepted, and the number of comments for the question and answer streams. The tags are those associated with the question; security was removed. The analysis identified words and punctuation that signal tonal features of messages as well as indications of information trading about security techniques, scenarios, circumstances and principles. Within the paired interactions, the kinds of things developers asked were found to have commonalities with other studies [17], [33]. In general, users asked: - how security concerns relate to individual circumstances - for more information, including different sources - for technical help However, because this analysis focuses on both sides of the interaction, it was also possible to establish how developers respond to questions. Responses fall into the following broad categories: - explanations of how technologies work - establishing security facts - confirming that understanding is correct - assessing how alternative language features, tools or frameworks apply to the security issue under discussion B. Participants As previously reported, twenty different users asked questions. Six of the question askers participated in the question comment stream; a few askers also participated in the answer comment stream. With one exception, these developers do not engage in discussion about the answer, but use comments to give thanks or feedback about the quality of the answer, or to provide detail about technologies or techniques that are in use. The accepted answers were likewise provided by twenty distinct users of Stack Overflow. The answers are highly rated within the site and three have received a bounty, a reputation award given to answers by the askers. Fifteen of these users participate within the comment streams for their answer, but only five commented six or more times. 
Surprisingly, none of the users who submitted answers comment within the question stream for the question they answered, though a few comment in streams for other questions (see Table II). Though half of the answer providers are members of the Information Security community, only six have been recently active. Their activity within posts tagged with security varies; four of the answerers appear in the Stack Overflow list of top twenty security question answerers of all time, and several of the users frequently participate in threads tagged with security. Taken together, activity within this group suggests that, as with the askers of questions, the developers giving answers are primarily non-specialists who exhibit a range of levels of activity within the security channel and the Information Security Stack Exchange site. A slightly greater sense of security-related activity can be seen by looking at an overview of information for answer providers drawn from the wider site. Only one answer provider, EpicRainbow, identifies within their profile description as having an interest or expertise in security. However, for half of the answer providers, the top three highly voted tags associated with the answerers suggest that other Stack Overflow users recognise and value the participation these developers make in posts that include security as a tag. C. Answer Comment Streams Within the answer streams, 250 Stack Overflow users made 364 comments. The majority of users (197) left a single comment; 32 left two comments, 10 left three comments, and 11 left more than three comments. Comment streams for questions and answers are distinct. A high proportion of commenters within the answer stream address their comments to specific users, either by referencing the user's public name, through direct quoting, or by referencing concepts given within a prior answer. However, many comments are directed toward the writer of the answer post.
In these cases, direct addressing is not used, but the comment may include quotes of the answer, or clear references to concepts within the answer. Within the answer stream there are also fewer indications within comments of tone or register [31] that are critical, sarcastic, or ironic. D. A Worked Example Following is a representative set of comments given by three users within the answer stream for Question 5. The commenters are Nemo, Smee and JohnnyGianni. Each of these writers left only one comment in this answer stream. Nemo has participated the least in the security channel, with participation in only 11 question or answer posts. Though Smee is active in the security channel, having participated in 135 posts, there is no indication given within profile information of interest or expertise in security. By contrast, JohnnyGianni is less active in the security channel (41 posts), but more active in the Information Security site and makes reference to security experience within the profile description. Each comment is used to illustrate aspects of interaction across the larger set of comments in the answer streams. In these extracts, different moves [32] are segmented (e.g., S1, S2). Information is given in brackets ([ ]) following each segment to indicate the code assigned during analysis to mark the purpose of the segment within the comment. Almost a month passes between the first comment (A5.C3) and the second. There is a seven-month gap in time between the second comment (A5.C4) and the response (A5.C5).
Answer 5, Comment 3: Nemo
S1 vintage [direct address A5]
S2 `$iId = mysql_real_escape_string((int)"1; DROP table");` [technique]
S3 or `$dirty = "1; DROP table"; $iId = mysql_real_escape_string((int)$dirty);` [technique]
S4 would be a better example than what you have [view]
S5 I think [judgment]
S6 Nemo 09/09/2011 05:09 [A5.C3]

Often paired interactions in the corpus are initiated in reference to information given in the answer post, as Answer 5, Comment 3 above demonstrates. Nemo is critical of the accepted answer given by Manfred for Question 5 (S5) but only provides an alternative solution within a comment, not within an answer post. This type of answering is a recognised behaviour within the community. The comment given in Answer 5, Comment 4 by Smee is initiated in response to a comment made earlier in the comment stream for Answer 5. Because the original commenter (Nemo) does not respond, the comments A5.C4 and A5.C5 have been treated in analysis as a paired interaction.

Answer 5, Comment 4: Smee
S7 But this [ref A5.C3]
S8 wouldn't be a real problem, [view]
S9 because `mysql_query()` doesn't execute multiple statements, [proof]
S10 no? [appeal]

Smee challenges the alternative example suggested by Nemo, but indicates with the phrasing (S10) and use of a question mark that he is not certain about the proof that is given. The appeal he makes ("no?") invites a response.

Answer 5, Comment 5: JohnnyGianni
S12 Smee [direct address]
S13 Although the usual example is `DROP TABLE` [ref A5.C3]
S14 in practice the attacker is more likely [scenario]
S15 to `SELECT passwd FROM users`, [technique]
S16 the second query is usually executed by use of a `UNION` clause. [technique]
S17 JohnnyGianni 21/05/2012 09:47 [A5.C5]

Nemo does not reply to Smee. The comment made by JohnnyGianni, given several months later, contains information about how SQL can be applied in a particular kind of security attack.
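The technical point at issue in this exchange can be illustrated with a minimal, hypothetical sketch. The snippet below uses Python's sqlite3 module rather than the thread's PHP/MySQL, and the table and column names are invented for illustration: even when an API executes only one statement per call (the protection Smee relies on), a UNION clause inside that single statement can still read a second table, while a bound parameter cannot.

```python
import sqlite3

# Illustrative stand-in for the thread's PHP/MySQL example; schema is invented.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (id INTEGER, name TEXT)")
conn.execute("CREATE TABLE users (login TEXT, passwd TEXT)")
conn.execute("INSERT INTO items VALUES (1, 'widget')")
conn.execute("INSERT INTO users VALUES ('admin', 's3cret')")

# A stacked query ("1; DROP TABLE ...") would fail here, as Smee suggests:
# sqlite3's execute() refuses to run more than one statement per call.
# But a UNION needs only one statement to read a second table.
malicious = "1 UNION SELECT login, passwd FROM users"

# Unsafe: the input is interpolated into the SQL text and parsed as SQL.
leaked = conn.execute(
    f"SELECT id, name FROM items WHERE id = {malicious}").fetchall()
# 'leaked' now includes the ('admin', 's3cret') row from the users table.

# Safer: a bound parameter is treated as a value, never parsed as SQL.
safe = conn.execute(
    "SELECT id, name FROM items WHERE id = ?", (malicious,)).fetchall()
# 'safe' matches no rows; the whole string was compared against id.
```

This mirrors the correction in A5.C5: blocking stacked statements narrows the attack surface but does not close the injection channel, whereas parameterised queries do.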
The comment also makes an assessment of the quality of sources of security information that are available. The comment notes that how attacks are "usually" described is different from the techniques used by attackers with SQL in practice. Finally, information is included about the structured query language that is phrased in neutral terms. The last line (Segment 16) might be associated with attacking activity, but can also be read as a correction or lesson for Smee about how the structured query language works.
TABLE II ACCEPTED ANSWER AUTHORS. ASTERISKS (*) INDICATE PARTICIPATION IN A COMMENT STREAM FOR A DIFFERENT QUESTION AND RECENT ACTIVITY IN THE INFO SECURITY SITE.
<table>
<thead>
<tr> <th>Code</th> <th>Pseudonym</th> <th>Answered</th> <th>Q Comment</th> <th>A Comment</th> <th>Info Sec</th> <th>Posts w Security</th> <th>Top Tags for User (by Vote)</th> </tr>
</thead>
<tbody>
<tr> <td>A1</td> <td>ExperiencedPigeon72</td> <td>13.12.12</td> <td>n</td> <td>y</td> <td>y</td> <td>2</td> <td>android; security; reverse-engineering</td> </tr>
<tr> <td>A2</td> <td>hercules</td> <td>16.1.12</td> <td>n</td> <td>y</td> <td>n</td> <td>51</td> <td>c#; java; .net</td> </tr>
<tr> <td>A3</td> <td>FortuneRat</td> <td>26.7.11</td> <td>n</td> <td>y</td> <td>y*</td> <td>146</td> <td>java; security; encryption</td> </tr>
<tr> <td>A4</td> <td>recipegod</td> <td>2.7.11</td> <td>n</td> <td>y</td> <td>y*</td> <td>19</td> <td>c++; c++11; c</td> </tr>
<tr> <td>A5</td> <td>vintage</td> <td>21.4.11</td> <td>n</td> <td>n</td> <td>n</td> <td>7</td> <td>php; mysql; sql</td> </tr>
<tr> <td>A6</td> <td>EpicRainbow</td> <td>9.2.11</td> <td>n</td> <td>y</td> <td>y</td> <td>143</td> <td>php; security; mysql</td> </tr>
<tr> <td>A7</td> <td>Techin</td> <td>26.8.11</td> <td>n</td> <td>n</td> <td>y*</td> <td>2</td> <td>security; refresh-token; access-token</td> </tr>
<tr> <td>A8</td> <td>HeroJan</td> <td>16.6.10</td> <td>n*</td> <td>y</td> <td>y*</td> <td>17</td> <td>algorithm; c#; sql</td> </tr>
<tr> <td>A9</td> <td>newton</td> <td>19.4.10</td> <td>n</td> <td>n</td> <td>n</td> <td>2</td> <td>javascript; ajax; security</td> </tr>
<tr> <td>A10</td> <td>Syntax</td> <td>28.2.10</td> <td>n</td> <td>y</td> <td>y*</td> <td>18</td> <td>c++; c; c#</td> </tr>
<tr> <td>A11</td> <td>rabbitsfoot</td> <td>4.2.09</td> <td>n</td> <td>y</td> <td>n</td> <td>4</td> <td>windows; security; c#</td> </tr>
<tr> <td>A12</td> <td>Anthropic</td> <td>30.12.08</td> <td>n</td> <td>y</td> <td>n</td> <td>5</td> <td>php; hash; security</td> </tr>
<tr> <td>A13</td> <td>mutator</td> <td>1.12.08</td> <td>n</td> <td>y</td> <td>n</td> <td>14</td> <td>c#; .net; wpf</td> </tr>
<tr> <td>A14</td> <td>Lemongrass</td> <td>9.10.08</td> <td>n</td> <td>n</td> <td>n</td> <td>18</td> <td>javascript; function; syntax</td> </tr>
<tr> <td>A15</td> <td>ColMustard</td> <td>25.9.08</td> <td>n*</td> <td>y</td> <td>y*</td> <td>47</td> <td>c#; .net; sql</td> </tr>
<tr> <td>A16</td> <td>darth</td> <td>24.9.08</td> <td>n</td> <td>y</td> <td>n</td> <td>15</td> <td>php; security; sql-injection</td> </tr>
<tr> <td>A17</td> <td>whatever</td> <td>18.9.08</td> <td>n</td> <td>y</td> <td>n</td> <td>2</td> <td>algorithm; language-agnostic; mergesort</td> </tr>
<tr> <td>A18</td> <td>Einstein</td> <td>17.9.08</td> <td>n</td> <td>y</td> <td>n</td> <td>1</td> <td>javascript; hex; tostring</td> </tr>
<tr> <td>A19</td> <td>codfish109</td> <td>28.8.08</td> <td>n</td> <td>n</td> <td>y</td> <td>4</td> <td>c#; .net; datetime</td> </tr>
<tr> <td>A20</td> <td>candyfunctions</td> <td>11.8.08</td> <td>n</td> <td>y</td> <td>y</td> <td>37</td> <td>git; python; c++</td> </tr>
</tbody>
</table>
E. Developing Awareness and Knowledge Interactions within comment streams for answers in Stack Overflow support the development of security awareness and knowledge in three ways: 1) **Provide focused assistance.** Interactions provide developers with information, clarification or corrections and confirm understanding.
Often this kind of support is freely given without indications of judgment or criticism. 2) **Associate technology facts with security problems.** This linking is often material, for example in associating small details about how a language works with an equally small feature of security. Smee is correct, the function doesn't execute multiple statements, but JohnnyGianni explains that there are other ways to use the query language (S16) that will give similar results. 3) **Situate advice in the security landscape.** Many responses situate the advice given within the larger sphere of security discourse. This is often done with subtle language cues, as in JohnnyGianni's indication to Smee that the way an attack is usually conveyed does not match what is done in practice. At other times, the comment more directly situates technical information within the broader security landscape, for example by explaining attack scenarios. F. Characteristics of Engagement The worked example shows three comments made for a single answer. In this example, the three commenters left only one comment each in this answer stream. While this example is representative of many of the interactions in the set, there are four other characteristics of participation that should be noted. 1) **Security is complex.** Developers indicate that they recognise that security is a complex concern, and one for which information is vague, contradictory, or sparse. As one developer put it, "I'm still learning here. …every time I read something that makes sense, I soon notice 5 other posts that contradict it. that round-and-round gets dizzying quickly :)" (A12.C11) Participants in the examined threads comment and agree on this point. However, they also counter it when they can, by suggesting sources that might be useful. As Anthropic (Question 12) offered in reply to the commenter above, "Absolutely! I've just shared what I've found.
I found a number of things from Shneier on Security and a very long (convoluted) discussion on a news site (don't remember which now)." (A12.C12) 2) **Support.** There is evidence of support for answers given in the form of verbal comments that indicate things like "Great answer", "Thanks for the concise answer", or "I never would have thought of that." In the case of the worked answer, EpicRainbow commented elsewhere in the stream to support the answer given by Manfred for Question 5, having perceived that down-voting activity by the community in this case was unfair: "To the people downvoting this answer: this answer is completely correct. This is far more likely to be the reason your use of mysql_real_escape_string is going to be compromised than my answer below. This belongs as the accepted answer (but both can live together)... (A5.C8) 3) **Derision.** The acceptance of answers is, at times, contentious. Developers note that answers are not correct, address the wrong topic, or are out of date. There are several instances of a user challenging a detail within the accepted answer, and then using the opportunity to draw attention to the answer he or she has written. These reveal workings of Stack Overflow as a community, and demonstrate that users participate for many reasons. In the prior example EpicRainbow was supportive; however, the same user was dismissive of the answer given to Question 15, commenting "See my answer below for a demonstration and explanation of an attack..." (A15.C5). This kind of community-level activity has an impact within the site. The answer analysed for Question 15 has, in the months since data capture in January 2018, lost accepted answer designation (Table I). The comment stream for the answer suggests that the answer was not accepted by the community almost from the moment it was posted, with comments like EpicRainbow's that use a negative or teasing tone or iconography. 4) **Passing time.** Answers are changed based on comments made in the answer stream.
Generally, users that answer questions note this in the body of the answer, using text like "As noted in the comments" or more directly recognising the contributions of particular users: "Edited as per Joe's astute comment". The list includes issues that are several years old, making it possible to explore features of community development across a longer span of time. For example, reference is made in one comment stream to the "brand new" Information Security channel. The reason for a bump in activity for another thread is noted to be an early tweet made by one of Stack Overflow's founders. The threads also give a sense of the changing relevance and importance of particular issues at different points in time. Commenters use streams to note when information is out of date or to broadcast up-to-date information. VII. DISCUSSION Stack Overflow exhibits many attributes of a community of practice [19]. Many of the features of interaction identified in this analysis relate to or reflect the Stack Overflow community as a whole. Activity within Stack Overflow centres around the domain of programming, and the collective need developers have to learn. It is fair to say that over the course of a decade, the collective process of asking and answering questions about programming has bound an international community of developers together, and that interactions within the site have an effect on software development practice in offices, schools and other environments around the world.

Fig. 3. This image represents commenting activity within answer streams for EpicRainbow, the answer provider for Question 6. EpicRainbow is the green dot in the middle. Within these streams, ER made six comments on four answers depicted in orange, red, green and blue.

Where, then, does security sit within Stack Overflow? It is an active topic within the site, but it is difficult to make a case from these findings that the participants in this set form a community around the practice of security. A.
Connecting with Security The developers on Stack Overflow have a collective need to learn about security, which must be reconciled with and applied to specific programming tasks. The participation patterns within and between comment streams suggest that security is supported by a network of practice built through personal interactions. It is through these connections that developers guide one another, sharing information and giving help about security [13]. Activity is not driven by answers given by a few participants who are security specialists or who frequently respond to posts across different answer streams. Though there are a few answer providers and commenters like EpicRainbow who contribute more frequently across the set (see Figure 3), or Anthropic who clearly exhibits advanced security understanding about cryptography in the post for Answer 12 (see also Table II), there is not the sense that these users alone hold the security channel together. There is significant evidence within the set of ongoing editorial activity. In some circumstances, different users take over editing and updating of answers. This analysis, in line with other studies [26], indicates that curatorial edits to answer posts are, at times, followed by increased commenting activity. However, the value and vitality of the posts does not come out of community-level activity to improve grammar or to keep links up-to-date. Instead, the significance lies in the impact that interactions within comment streams have on answer posts. Answers are updated to reflect changes in tools or techniques that can be used to address a security problem, but also changes in thinking about what is significant to represent for a given solution. B. Tending to Security Security has been described as a secondary concern to developers, one that is prioritised alongside other tasks developers need to complete [34].
The threads in this analysis, oriented as they often are to language features or use of APIs, support this claim, with some caveats. This analysis demonstrates that the network of developers connecting within Stack Overflow tend to the problem of security within the context of the technical solutions that are given by answer providers. Tending is easiest to see within the commenting patterns of question answerers. The most frequent commenters in the set are users who answered questions (see names in bold within Table II). Their reliable presence makes a difference to the coherence of the thread. It is easier to follow the development of the issue over time when it is anchored by back and forth between an answer provider and different commenters. These developers may also be known and trusted within the network or larger community of Stack Overflow, factors that have been associated with security tool adoption [35]. Tending is also apparent in the links that are established between the security concerns in a question, and the task at hand. Arguments draw in concepts and points made by other commenters, reference other streams, and refer to sources outside of Stack Overflow. These sources include blog posts and news items, but also draw heavily on examples of how security is handled in other technologies and languages. On-line sources of guidance about secure software development have been found to contain gaps in coverage, and developers need to rely on diverse sources of information [36]. Findings in this study suggest that developers are aware of this, but also comfortable drawing upon various sources to build understanding. Research has suggested that “challenge” talk between developers, rather than formal processes or artefacts, is the best way to develop techniques for security among developers [37]. Developers in this set do challenge each other, but do not, in the main, identify themselves in comments as upholding security or protecting code from attackers. 
Things that an attacker might do with code or in languages are conveyed as part of programming, described in terms of techniques that developers might also use in code and with languages. Given the opportunity, it has been shown that developers turn to Stack Overflow to find solutions to security problems; however, the code samples taken from security posts may not be as robust or correct as those from other information sources, such as books and vendor-supplied documentation [34]. The analysis performed here cannot comment on the quality of the information supplied; however, the threads make it clear that developers do not blindly accept the information they are given. Instead, the evidence shows that developers ask for more information, and ask related questions. It is also clear that developers correct each other, explaining how technologies work, but also illuminating security implications in specific situations. Finally, developers respond to one another over long stretches of time. Points made by commenters result in edits to the answers for many years. The breadth of queries and comments over time suggests that developers continue to require and lend support to one another in understanding the significance of security in relation to particular tasks, technologies and tools.

VIII. LIMITATIONS

There is an inherent bias in approaching security by looking at discussion in a programming environment. The mandate of the site is to help developers solve programming problems, and so discussion naturally centres within the answer streams around technical aspects of software development. There are also limitations in the sampling process used to gather data and in the quality of the data set used in analysis. The top twenty list has shifted by one since data were collected. Users can change their identity, which makes it hard to link comments to one another. Users can also leave Stack Overflow.
In these cases, the comments still appear on the website, but must be explicitly requested in queries for data. Finally, there are gaps in the record, with clear evidence that comments have been deleted. There are also a small number of answers posted in 2008 or 2009 for which comments are unavailable before dates in 2012. To counter these limitations in the data, analysis has focused on isolating interactions within threads rather than treating the threads as conversations in their entirety, and on careful examination of addressing and quoting techniques, together with cross-examination of streams to establish relationships between users and identities.

IX. CONCLUSIONS

Secure coding practice is supported on Stack Overflow through a network of interactions. Users guide and engage with one another through conversations that:
- Provide focused, individualized assistance.
- Associate technology facts with security problems.
- Situate advice in the security landscape.
- Broadcast details or facts that orient the security issue in time.

Developers actively foster security knowledge on the site by writing and developing answer posts, and by dropping in on comment streams to share information and receive help from one another. The developers who take part in posts see security as something interesting to pursue and to tinker with. As a forum for exchanging information, Stack Overflow is a relevant resource, allowing developers to thoughtfully connect with and tend to security problems.

ACKNOWLEDGMENT

Supported by the National Cyber Security Centre (NCSC). Nuseibeh thanks SFI, EPSRC and ERC for financial support.

REFERENCES
A modular foreign function interface

Jeremy Yallop, David Sheets and Anil Madhavapeddy
University of Cambridge Computer Laboratory
first.last@cl.cam.ac.uk

Abstract

Foreign function interfaces (FFIs) between high-level languages and system libraries typically intertwine the actions of describing the interface of a system library and selecting a binding strategy for linking to it. This tight coupling makes it difficult for programmers to switch between different binding strategies, and discourages the development of new approaches to binding, since more exotic approaches are unlikely to attract sufficient users to justify the cost of development. We present Cmeleon, a replacement for the standard OCaml FFI that exposes typed constructors corresponding to the operations of the type algebra of C, and binding strategies that interpret this type structure as separate program stages. Cmeleon parameterises external calls across binding strategies, isolating interface descriptions from choices relating to call construction (code generation vs dynamic call frames), concurrency style (blocking, cooperatively or preemptively threaded), and separation (in-process, address space or a network connection). This flexibility enables significant code reuse of bindings in many different contexts, from rapid interactive development in a REPL to production deployments with generated code and privilege separation. Cmeleon has been used for the past two years to bind a broad variety of real-world C libraries, and entirely supplants the need for the low-level C FFI for the vast majority of applications.

Categories and Subject Descriptors D.3.2 [Language Classifications]: Applicative (functional) languages

Keywords functional programming, foreign function interfaces, staged programming

1.
Introduction

Practical implementations of high-level languages must support interoperability with low-level code, and there is a multitude of approaches available even for the seemingly simple task of gluing together a single pair of languages. This diversity is largely driven by the many contexts in which these high-level programming languages may be used – in a Unix or Windows system with shared libraries, as statically linked and cross-compiled embedded systems, for interactive prototyping in IDEs, or even phone applications. Every mainstream programming language exposes a foreign function interface (FFI) to C that permits calls from the language to the underlying system. Unfortunately, safe use of these FFIs requires the programmer to carefully respect many invariants across invocations, or else risk silent memory corruption. FFIs often feature hundreds of API calls and usage rules that make this difficult to get right manually [17]. A static analysis of Python bindings revealed over 150 errors in a representative set of modules [18], with similar results for OCaml [12] and Java [16]. For instance, consider the relatively simple case of calling the gettimeofday(3) library function from OCaml to retrieve the time. An implementation using the OCaml FFI follows:

```c
#include <caml/mlvalues.h>
#include <caml/memory.h>
#include <caml/unixsupport.h>
#include <sys/time.h>

CAMLprim value caml_gettimeofday (value u)
{
  CAMLparam1 (u);
  CAMLlocal1 (res);
  struct timeval tv;
  if (gettimeofday (&tv, NULL) == -1)
    uerror ("gettimeofday", Nothing);
  res = Val_long (tv.tv_sec);
  CAMLreturn (res);
}
```

This snippet reveals numerous FFI calls that take care of converting values between OCaml and C value representations (`Val_long`) or ensure that the garbage collector (GC) does not move OCaml values around during the execution of the foreign call (`CAMLparam1`, `CAMLlocal1`, `CAMLreturn`). This FFI style – while prevalent in popular languages – should be discouraged except for experts.
**Dynamic binding** Recognising this, a common alternative approach available in many language implementations, such as Python, Ruby, OpenJDK, and the Glasgow Haskell Compiler, is to describe the C functions from within the high-level language and use the libffi library to call foreign functions at runtime by constructing stack frames for the library ABI dynamically. Here is a typical example, which uses Python's ctypes library to bind and call the gettimeofday function:

```python
import ctypes

class timeval(ctypes.Structure):
    _fields_ = [("tv_sec", ctypes.c_long), ("tv_usec", ctypes.c_long)]

libc = ctypes.CDLL("libc.dylib", use_errno=True)
tv = timeval()
libc.gettimeofday(ctypes.pointer(tv), None)
```

The dynamic approach is especially convenient for interactive development, but the difficulty of determining the types of C functions at runtime and the lack of access to compile-time features such as enum constants and macro definitions make it less suitable for use in production systems. There is also a steep performance penalty versus handwritten C bindings, since call frames have to be constructed dynamically (§3). A programmer who wishes to call a C library from their high-level language must currently weigh up the benefits of each approach and commit to one system. A shift in requirements later—for example, relinquishing interactivity for performance—typically requires a rewrite of the bindings.

**Abstracting binding strategies** This paper presents Cmeleon, a replacement for the standard OCaml FFI that separates the activity of describing foreign types and functions from the decision of which binding approach to use. Here is a binding to gettimeofday using Cmeleon:

```ocaml
module Bindings(F : FOREIGN) = struct
  open F
  let gettimeofday =
    foreign "gettimeofday"
      (ptr timeval @→ ptr timezone @→ returning int)
end
```

We call the reader’s attention to two salient features of this code, leaving the details for later in the paper.
First, the binding to gettimeofday is described using high-level functions for constructing representations of C function types (@→ and returning) and for resolving external names at particular types (foreign). Second, the binding is parameterised by the interpretation of these functions, i.e. by the module F of type FOREIGN. This separation between description and interpretation is key to our approach, and will allow us to reuse this single foreign interface description with a wide variety of binding strategies, such as static stub generation, dynamic call construction, and several more exotic strategies including inverted (C-to-OCaml) and cross-process calls. **The design of Cmeleon** The remainder of this paper presents the design of Cmeleon, which comprehensively glues together OCaml and C code, using OCaml’s module system to abstract over the details of how exactly the gluing takes place. Cmeleon decomposes foreign function bindings into reusable constituent parts that can be assembled in a variety of ways to balance interactivity, performance and flexibility without the need to rewrite the code that describes a particular foreign library. Cmeleon supports the complete spectrum of C types and is structured as: (i) a common library for describing type structure, with constructors corresponding to the various operations in the type algebra of the foreign language; (ii) various ways to interpret type structure as program stages.
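This description/interpretation split can be illustrated in miniature with a freestanding OCaml sketch. The signature and modules below are illustrative stand-ins, not Cmeleon's actual API: a single description functor, written once against an abstract signature, is reused here with a pretty-printing interpretation that renders the C prototype as a string.

```ocaml
(* A toy version of the description/interpretation split.
   All names here are hypothetical, not Cmeleon's real API. *)
module type MINI = sig
  type 'a typ
  type 'a fn
  type 'a res
  val int : int typ
  val ptr : 'a typ -> unit typ          (* toy model: forgets the pointee *)
  val returning : 'a typ -> 'a fn
  val ( @-> ) : 'a typ -> 'b fn -> ('a -> 'b) fn
  val foreign : string -> 'a fn -> 'a res
end

(* One description, written once against the abstract signature. *)
module Desc (F : MINI) = struct
  open F
  let example = foreign "example" (int @-> ptr int @-> returning int)
end

(* One possible interpretation: render the C prototype as a string. *)
module Print : MINI with type 'a res = string = struct
  type 'a typ = string
  type 'a fn = string list * string     (* argument types, return type *)
  type 'a res = string
  let int = "int"
  let ptr t = t ^ "*"
  let returning t = ([], t)
  let ( @-> ) t (args, ret) = (t :: args, ret)
  let foreign name (args, ret) =
    ret ^ " " ^ name ^ "(" ^ String.concat ", " args ^ ")"
end

let () = print_endline (let module M = Desc (Print) in M.example)
(* prints: int example(int, int*) *)
```

A second interpretation of the same `Desc` functor could instead construct callable functions, which is the essence of what the paper develops next.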
Cmeleon provides numerous binding strategies that can interpret the type structure, for:
- choosing between interactive development in a REPL (§3.2), and subsequently statically evaluating them into stub code optimised for deployment (§3.4)
- interfacing OCaml code to C function calls, or inverting the FFI to permit OCaml libraries to easily expose a C ABI using the same core type definitions (§4.1)
- selecting concurrency models for function calls to foreign libraries, such as cooperatively batching requests to one library and launching preemptive threads for those that do not support asynchronous interfaces (§4.2)
- enforcing separate address spaces between foreign libraries and the language runtime for privilege separation [23], or using unconventional linking strategies such as unikernels [19] or hardware isolation features [2] (§4.3)

The paper is structured as follows. We use inductive data types to represent the types of a foreign language in the host language, starting with object types (§2) and moving on to functions (§3). We then explain how advanced interpretations such as asynchronous FFIs and an inverted FFI from OCaml to C operate in the same modular framework (§4).

```c
struct timeval {
  unsigned long tv_sec;
  unsigned long tv_usec;
};
```

Figure 1: The timeval struct in C

```ocaml
module TvTypes(T : TYPE) = struct
  open T
  let timeval = structure "timeval"
  let sec = field timeval "tv_sec" ulong
  let usec = field timeval "tv_usec" ulong
  let () = seal timeval
end
```

Figure 2: The timeval struct in Cmeleon

Describing foreign bindings within a high-level language by programming against a typed abstract interface that can be instantiated with different binding strategies offers flexibility, extensibility and type safety that are not possible with an external tool. The ability to move seamlessly and safely between (e.g.)
dynamic, staged and out-of-process bindings—and even reuse the same binding description for inverted calls—without rewriting the binding description has been invaluable in our own uses of Cmeleon in a variety of OCaml projects, and has seen rapid adoption in the wider OCaml community in a number of real-world commercial and free software projects (§5). We conclude by discussing the prior influences on our work (§6) and the broader implications of our approach (§7).

2. Describing C types and values

The main purpose of Cmeleon is binding and calling C functions. However, function types are built from value types (called “object types” in C), and calling C functions involves passing and retrieving C values, so we first describe how Cmeleon represents C values and how it determines their layout.

2.1 Describing C types

We start by describing the representation of C types, which appear as first-class values in Cmeleon. The binding to gettimeofday in the introduction involves a type timeval. Figures 1 and 2 show the C definition of timeval and the corresponding Cmeleon definition. The binding to the gettimeofday function was parameterised by the definitions of the function-binding operations @→, returning and foreign. Similarly, the definition of timeval in Cmeleon is parameterised by the definitions of the type-building operations structure, field and seal. We shall see shortly how this parameterisation supports different strategies for determining object layout, just as the parameterisation in the gettimeofday binding supports different strategies for binding functions. Excluding the parameterisation, the C and Cmeleon definitions correspond line for line. The first line of the Cmeleon code creates a C type which manifests as the type timeval in OCaml.
Besides ty and the type-building operations already named, there are operations for representing the primitive types void, char and int, an operation ptr for constructing pointer types, and an additional function view which acts as a kind of map over type representations.

```ocaml
module type TYPE = sig
  type α ty
  val void : unit ty
  val char : char ty
  val int : int ty
  (* ... etc *)
  val ptr : α ty → α ptr ty
  val structure : string → σ structure ty
  val seal : σ structure ty → unit
  val field : σ structure ty → string → α ty → (α, σ) field
  val view : read:(α → β) → write:(β → α) → α ty → β ty
end
```

Figure 3: A signature for constructing C object types

In each case the parameter of the result type ty indicates the OCaml type used to read and write values of the underlying C type. For example, values of the C type char appear in OCaml as values of the char type, so the char operation has type char ty. Similarly, C values of type void ** appear in OCaml as values of type (unit ptr) ptr, and so building the corresponding type representation by applying ptr twice to void produces something of type ((unit ptr) ptr) ty. The full open-source implementation also supports the other C primitives and types – arrays, unions, and additional arithmetic types – but we omit the details for brevity. We defer discussion of function pointers to §3.3.

**A universal view of types** The ty constructor may be viewed as a type of codes for C type representations, and the operations of the TYPE interface as code constructors corresponding to each element of the C type algebra. Codes are inductively defined: just as the C type constructor for pointers builds types from types, our ptr builds type representations from type representations. Viewed this way, our approach is reminiscent of the idea of a universe [3, 22] from the dependently-typed programming community, where codes are used to delineate some subset of types of interest – in this case those OCaml types which represent C object types.
In a dependently-typed language the codes of a universe come equipped with an interpretation function which maps codes to types, but since OCaml does not support type-level functions we instead index codes by the result of the interpretation – a trick well-known to the generic programming community [8, 28].

2.2 C types, concretely

We have shown how parameterising by the TYPE signature gives us access to the operations we need to construct type representations. In order for us to use those representations to build functions and access C values we need a more concrete representation of types. We now describe a concrete implementation of codes, which will enable us to define functions which work on all values of a C object type. Our concrete representation uses Generalised Algebraic Data Types (GADTs) [9] to precisely capture the relationship between the representation of C types and the OCaml types we use to access C objects. The types of the operations in the TYPE interface of Figure 3 ensure that those operations are used correctly; the constraints expressed by GADTs give us additional confidence that they are also implemented correctly. We define our C type representation as one of a mutually-recursive group of four definitions, for representing C types, pointer values, type isomorphisms and structure values. Values of the ctyp type represent the C types void, char, or int, pointers to C types, structure types, or type isomorphisms called views, which allow us to give alternative interpretations to a particular representation. For example, we might view a char * as either an OCaml string or as a byte buffer; the underlying C type is the same, but we access objects of the type in different ways.

```ocaml
type _ ctyp =
  | Void : unit ctyp
  | Char : char ctyp
  | Int : int ctyp
  | Pointer : α ctyp → α ptr ctyp
  | Struct : struct_type → α structure ctyp
  | View : (α, β) view → α ctyp
```

A ptr value stores a typed C pointer object.
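The way a GADT index ties each code to the OCaml type used to access it can be seen in a self-contained fragment. The type below is a deliberately miniature stand-in (note the distinct name mini_ctyp, the use of ref as a toy model of pointers, and illustrative 64-bit ABI sizes), not Cmeleon's actual definition:

```ocaml
(* Miniature GADT-indexed representation of a few C types.
   Sizes are illustrative (a typical 64-bit ABI), not authoritative. *)
type _ mini_ctyp =
  | Void : unit mini_ctyp
  | Char : char mini_ctyp
  | Int : int mini_ctyp
  | Pointer : 'a mini_ctyp -> 'a ref mini_ctyp  (* toy stand-in for C pointers *)

(* Recursion over the codes; the index guarantees that, e.g., a value
   read at a code of type int mini_ctyp is an OCaml int. *)
let rec sizeof : type a. a mini_ctyp -> int = function
  | Void -> 1        (* as in GCC's sizeof(void) extension *)
  | Char -> 1
  | Int -> 4
  | Pointer _ -> 8

let () = Printf.printf "sizeof(int**) = %d\n" (sizeof (Pointer (Pointer Int)))
(* prints: sizeof(int**) = 8 *)
```

The `type a.` annotation requests the polymorphic recursion that GADT pattern matching needs: each branch refines `a` to a different OCaml type.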
The reftyp, addr and managed fields respectively store the type of the pointed-to object, the raw C address, and (optionally) an OCaml object to which we can attach finalisers for releasing resources managed by Cmeleon.

```ocaml
and α ptr = {
  reftyp : α ctyp;
  addr : address;
  managed : Obj.t option;
}
```

The address type is an alias for the OCaml type nativeint that denotes an integer suitably sized for representing machine addresses:

```ocaml
type address = nativeint
```

A view has two fields containing functions for converting back and forth between the external type and the underlying representation, plus a third field to hold the viewed type.

```ocaml
and (α, β) view = { read : β → α; write : α → β; ty : β ctyp }
```

All structure values managed by Cmeleon are heap-allocated and represented directly as pointers.

```ocaml
and α structure = { structure : α structure ptr }
```

The structure type is parameterised by a type that is instantiated differently for each separate structure type; it distinguishes incompatible structures in a similar fashion to a struct tag in C. Instantiating the parameter appropriately is left to the user, and is typically accomplished by an ascription, as in the following example:

```ocaml
# let t : [`t] structure typ = structure "t";;
val t : [`t] structure typ = struct t
```

There are two further types associated with structures. The first type, struct_type, holds information associated with a C struct type: its tag (e.g. timeval), a flag indicating whether it is complete or incomplete and, if it is complete, its size and alignment requirements.
Structures in C can be initially declared as incomplete and completed later, as captured by the mutable fields in our OCaml representation of struct types:

```ocaml
type struct_type = {
  tag : string;
  mutable complete : bool;
  mutable size : int;
  mutable align : int;
}
```

The second type, field, holds the type, name and offset associated with a struct field:

```ocaml
type (α, σ) field = {
  ftype : α ctyp;
  fname : string;
  foffset : int;
}
```

The two type parameters of field represent the type of the field and the type of the enclosing structure. The second type parameter is phantom—it does not appear in the definition; only the type of the field operation in the TYPE interface (Figure 3) ensures that a field is associated with the structure type used to create it.

**Operations on types** Cmeleon provides a number of operations on C types, some of which can be defined in pure OCaml. For example, assuming we have information about the storage requirements of primitive types, it is straightforward to define functions that determine the size and alignment of arbitrary types, or that pretty-print types and values.

```ocaml
val sizeof : α ctype → int
val alignment : α ctype → int
val string_of_typ : α ctype → string
val string_of : α ctype → α → string
```

Here are sizeof, alignment, string_of_typ and string_of in action at the OCaml top level:

```ocaml
# sizeof int;;
- : int = 4
# alignment (ptr int);;
- : int = 8
# string_of_typ (ptr (ptr int));;
- : string = "int**"
# string_of (ptr int) (allocate int 10);;
- : string = "0x1732cd0"
```

Some other Cmeleon functions on ctype involve new primitives. For example, allocate is a typed analogue of malloc, which allocates and initialises a value of a specified type:

```ocaml
val allocate : α ctype → α → α ptr
```

The implementation of allocate uses a low-level primitive, raw_allocate, which returns an untyped buffer.
The memory associated with a value of the managed_buffer type is freed automatically when the value becomes unreachable from OCaml code:

```ocaml
type managed_buffer
val raw_allocate : int → managed_buffer
```

**Operations on values** Besides these (and other) operations on types, Cmeleon provides a number of operations on C values; its interface supports reading and writing memory at the full range of C object types. The dereferencing operator !@ retrieves the value that a pointer refers to:

```ocaml
val (!@) : α ptr → α
```

For example, dereferencing a pointer built with allocate retrieves the stored value:

```ocaml
# let p = allocate bool false;;
val p : bool ptr = ...
# !@p;;
- : bool = false
```

The getf and setf operations serve a similar function for structure values:

```ocaml
val getf : σ structure → (α, σ) field → α
val setf : σ structure → (α, σ) field → α → unit
```

If we have a value tv representing a struct timeval then we can use getf and setf along with the field values sec and usec to read and write its fields:

```ocaml
# setf tv usec (ULong.of_int 10);;
- : unit = ()
# tv;;
- : [`tv] structure = { tv_sec = 0, tv_usec = 10 }
```

2.3 Determining structure layout

We have presented an abstract interface for building C type descriptions (§2.1) and a concrete representation of C types and values (§2.2). What do we gain from separating the abstract interface from the concrete representation rather than programming directly with the latter? In fact, the mapping from type descriptions to type representations is not entirely trivial. The key difficulty is determining the layout of structure fields, which involves several considerations. First, the C standard allows compilers to insert padding bytes between structure fields in order to improve performance. Second, the types and members of structs in C APIs sometimes vary across platforms and between library versions.
Third, many compilers can be configured to use alternative algorithms for struct layout: GCC’s __attribute__((packed)), which requests that structs be laid out compactly in memory, is a typical example. It is, of course, crucial for a program that accesses C structs to have a view of their layout that matches the actual layout used by the C library. In Cmeleon structs are described using the operations of the TYPE interface. There are several implementations of TYPE, each of which interprets the operations in the interface as functions which determine memory layout details for structs, and then builds a concrete type representation.

2.3.1 Computing structure layout

As we have said, the C standard allows implementations to insert padding when laying out struct members. In practice this typically means that each field begins on an alignment boundary for the field type, that the end of the struct is padded up to the next alignment boundary, and that the alignment for the struct is taken to be the most stringent alignment requirement of the fields. Our first implementation of the TYPE signature implements the operations which build struct types as functions which follow these rules. The next_offset function computes the next alignment boundary:

```ocaml
let next_offset offset alignment =
  match offset mod alignment with
  | 0 → offset
  | overhang → offset - overhang + alignment
```

The structure function builds an incomplete empty struct with no alignment requirements:

```ocaml
let structure tag =
  { tag; size = 0; align = 0; complete = false }
```

The field function computes the alignment and offset for the field and updates the struct alignment and size. However, attempting to mirror every detail of the way that the C compiler lays out structs quickly becomes unmanageable.
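The padding rules just described can be exercised in a freestanding fragment. Here field sizes and alignments are supplied directly as integer pairs rather than derived from type representations, and the ABI figures are illustrative:

```ocaml
(* A sketch of the standard struct layout rules: each field starts on
   an alignment boundary for its type, and the struct's own alignment
   is the most stringent alignment among its fields. *)
let next_offset offset alignment =
  match offset mod alignment with
  | 0 -> offset
  | overhang -> offset - overhang + alignment

(* Given (size, alignment) pairs for each field, return the field
   offsets together with the padded total size of the struct. *)
let layout fields =
  let offsets, last, align =
    List.fold_left
      (fun (offs, off, align) (size, falign) ->
        let off = next_offset off falign in
        (off :: offs, off + size, max align falign))
      ([], 0, 1) fields
  in
  (List.rev offsets, next_offset last align)

(* struct { char c; int i; char d; } on a typical ABI: *)
let () =
  let offs, size = layout [ (1, 1); (4, 4); (1, 1) ] in
  Printf.printf "offsets = %s; size = %d\n"
    (String.concat ", " (List.map string_of_int offs)) size
(* prints: offsets = 0, 4, 8; size = 12 *)
```

Three bytes of padding appear before the int field, and three more at the end so that arrays of the struct keep every element aligned.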
Instead of attempting to replicate the C compiler’s structure layout algorithm, we go to the compiler itself: a generated C program retrieves the layout details, and the retrieved information is incorporated directly into the program to build struct representations that are guaranteed to conform to the layout used by the C compiler, even if the order, alignment or number of fields in the OCaml description of timeval differ from the details declared in the C library. Retrieving the layout information from a generated C program rather than attempting to compute it ourselves ensures that the layout used in the program matches the layout used by the C compiler. The full Cmeleon library uses the same approach to offer additional operations for retrieving other static data, such as the values of enum constants or macros.

**An alternative approach to retrieving structure layout** The workflow shown in Figure 4 is not suitable for every situation. In particular, when cross-compiling it may be impractical to run generated C code in the execution environment during the build process. To support this case, we plan an alternative workflow in which the generated C code is linked directly into the program rather than executed to produce an ML module.

3. Describing functions

We now turn to the question of binding foreign functions, where a broadly similar approach allows us to separate binding descriptions from binding strategies.

```ocaml
module type FUNCTION = sig
  type _ fn
  val returning : α ctype → α fn
  val ( @→ ) : α ctype → β fn → (α → β) fn
end

module type FOREIGN = sig
  include FUNCTION
  type _ res
  val foreign : string → (α → β) fn → (α → β) res
end

module Bindings(F : FOREIGN) = struct
  open F
  let gettimeofday =
    foreign "gettimeofday"
      (ptr timeval @→ ptr timezone @→ returning int)
end
```

Figure 5: Module types for C function types and foreign bindings, followed by a foreign binding that uses them.
We first introduce codes for function types (§3.1), as we did for object types, and once again index the codes by their interpretation into the OCaml type space. Abstracting over the object type language allowed us to apply different strategies for determining object layout; a similar approach allows us to interpret our function descriptions in a variety of situations, starting with an interpreter for foreign calls (§3.2), which we extend to cover the situation of calling back into OCaml from C (§3.3). We then stage our interpreter to produce a heterogeneous code generator (§3.4) and use functors and GADTs to support linking the generated code into our program without compromising type safety. Finally, we bolster our claim to generality by exhibiting a number of alternative interpretations, each starting from the same binding descriptions. The FUNCTION signature (Figure 5) introduces codes for building function types. In keeping with the prevailing style in OCaml, we use currying to represent C functions of multiple arguments. However, returning and @→ carefully distinguish object types and function types; a C function that takes one argument and returns a function pointer that accepts another argument is quite different from a function of two arguments, and our coding represents them differently. It will be useful to have a concrete representation that implements FUNCTION. Translating the types of returning and @→ into constructor types gives us an inductive data type cfn which can be used to implement the abstract interface FUNCTION:

```ocaml
type _ cfn =
  | Returning : α ctype → α cfn
  | Function : α ctype → β cfn → (α → β) cfn
```

For object types the question of representation arose twice: we need to represent both the types and the values of C. For functions the situation is simpler.
The only operation we need to perform on a C function is invocation, so we can represent C functions directly as OCaml functions.

3.2 Interpreting calls

Now that we can represent C function types, the next step is to add an interpretation function for binding to C functions. Figure 5 shows the FOREIGN interface which we used in the introduction to build the binding to gettimeofday. The FOREIGN signature extends FUNCTION with an operation foreign for constructing a C function binding from a name and a representation of its type. The return type of foreign uses the abstract parameterised type res (short for result); we are initially interested in situations where α fn becomes α cfn and α res becomes α, so the type of foreign is:

val foreign : string → (α → β) cfn → (α → β)

That is, foreign turns a C function type description and a name into an OCaml function. In order to build a function of this type we will implement foreign as an interpreter that resolves names and synthesises call frames dynamically. Dynamic name resolution is implemented by the POSIX function dlsym. Call frame synthesis uses the libffi library to handle the low-level details, and we build a typed interface on top of its primitive operations. Call synthesis using the libffi library involves two basic data types. The first, ffi_type, represents C types; we introduce a corresponding OCaml type and expose inhabitants for various primitive types:

type ffi_type
val int_ffi_type : ffi_type
val char_ffi_type : ffi_type
val pointer_ffi_type : ffi_type

The second libffi type, ffi_cif, describes a call frame structure.
We again introduce a corresponding OCaml type callspec and expose primitives for creating a new callspec, for adding arguments to the callspec, and for “sealing” the callspec to mark it as completed and specify the return type:

type callspec
val alloc_callspec : unit → callspec
val add_argument : callspec → ffi_type → int
val prepare_call : callspec → ffi_type → unit

(The return type of add_argument represents an offset which can be used for writing each argument into the appropriate place in a buffer when performing a call.) Finally, we need an operation for actually invoking functions. The call function takes a function address, a completed callspec, and two callbacks which write arguments to and read the return value from buffers.

val call : address → callspec → (address → unit) → (address → α) → α

Building a typed interface to these libffi primitives – that is, using them to implement foreign – is straightforward. Each call to foreign uses alloc_callspec to create a fresh callspec; each argument in the function representation results in a call to add_argument with the appropriate ffi_type value. The Returning constructor results in a call to prepare_call; when the arguments of the function are supplied, the call function is called to invoke the resolved C function. There is no compilation stage: the user can call foreign interactively (Figure 8a).
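The traversal that foreign performs over a description can be sketched with strings standing in for libffi's ffi_type values, so the order of add_argument calls is directly observable. build_callspec and ffi_name are illustrative names for this mock, not Cmeleon's API:

```ocaml
(* Mock of foreign's interpretation loop: accumulate one "argument" per
   Function constructor, then record the return type at Returning, just
   as the real implementation calls add_argument and prepare_call. *)
type _ ctype = Int : int ctype | Char : char ctype

type _ cfn =
  | Returning : 'a ctype -> 'a cfn
  | Function : 'a ctype -> 'b cfn -> ('a -> 'b) cfn

let ffi_name : type a. a ctype -> string = function
  | Int -> "int"
  | Char -> "char"

let rec build_callspec : type a. string list -> a cfn -> string list * string =
  fun acc -> function
    | Returning t -> (List.rev acc, ffi_name t)
    | Function (t, rest) -> build_callspec (ffi_name t :: acc) rest

let () =
  let args, ret =
    build_callspec [] (Function (Int, Function (Char, Returning Int))) in
  assert (args = [ "int"; "char" ] && ret = "int")
```

The real interpreter performs the same structural recursion, but each step drives a libffi primitive instead of building a list.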
Here is a simple example, using the isdigit function, which returns non-zero when the argument represents a digit character:

# let isdigit = foreign "isdigit" (int @→ returning int);;
val isdigit : int → int = <fun>
# isdigit (Char.code '3');;
- : int = 2048
# isdigit (Char.code 'x');;
- : int = 0

typedef int (*compar_t)(void *, void *);
void qsort(void *, size_t, size_t, compar_t);

Figure 6: The qsort function

let compar_t = funptr (ptr void @→ ptr void @→ returning int)

module Bindings(F : FOREIGN) = struct
  open F
  let qsort = foreign "qsort"
    (ptr void @→ size_t @→ size_t @→ compar_t @→ returning void)
end

Figure 7: Using funptr to bind to qsort

3.3 Interpreting callbacks from C to OCaml

The interpreter of §3.2 turns a function name and a function type description into a callable function in two stages: first, it resolves the name into a C function address; next, it builds a call frame from the address and the function type description. In circumstances where we have an address rather than a name available for the function this second stage is useful independently, and so Cmeleon supports it as a separate operation:

val function_of_pointer : (α → β) cfn → unit ptr → (α → β)

Conversions in the other direction are also useful: to pass an OCaml function to C, we must convert it to an address:

val pointer_of_function : (α → β) cfn → (α → β) → unit ptr

The implementation of pointer_of_function is based on the callspec interface that we used to build the call interpreter.
We need just one extra primitive operation, which accepts a callspec and an OCaml function, then uses libffi to dynamically construct and return a “trampoline” function which calls back into OCaml:

val make_function_pointer : callspec → (α → β) → address

Rather than expose the conversions between functions and pointers directly to the user, we build a view that converts between addresses and pointers automatically:

let funptr fn = view (ptr void)
  ~read:(function_of_pointer fn)
  ~write:(pointer_of_function fn)

val funptr : (α → β) cfn → (α → β) ctype

funptr builds object type representations from function type representations, just as function pointers build object types from function types in C. Figure 7 shows funptr in action, describing the callback function for qsort (Figure 6). We can pass OCaml functions to the resulting qsort binding directly:

qsort arr nmemb sz
  (fun l r → compare !@(from_voidp int l) !@(from_voidp int r))

(The from_voidp function converts from a void * value to another object pointer type.) This scheme naturally supports even higher-order functions: function pointers which accept function pointers as arguments, and so on, allowing callbacks into OCaml to call back into C. However, such situations appear rare in practice.

3.4 Staging the call interpreter

Interpreting function type descriptions as calls is convenient for interactive development, but has a number of drawbacks. First, the implementation suffers from significant interpretive overhead, which we quantify in §5. Second, there is no check that the values we pass between OCaml and C have appropriate types. Our implementation resolves symbols to function addresses at runtime, so there is no checking of calls against the declared types of the functions that are invoked. Finally, we cannot make use of the many conveniences provided by the C language and typical toolchains.
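The view mechanism that funptr relies on is just a pair of inverse conversions layered over an underlying representation. A minimal standalone sketch (the record shape here is illustrative; Cmeleon's view takes the conversions as the labelled arguments ~read and ~write shown above):

```ocaml
(* A view pairs a read conversion (low-level -> high-level) with a write
   conversion (high-level -> low-level). *)
type ('high, 'low) view = { read : 'low -> 'high; write : 'high -> 'low }

(* Example: present a C-style int (0 / non-zero) as an OCaml bool. *)
let bool_view =
  { read = (fun i -> i <> 0);
    write = (fun b -> if b then 1 else 0) }

let () =
  assert (bool_view.read (bool_view.write true));
  assert (not (bool_view.read 0))
```

funptr is the same idea with function_of_pointer and pointer_of_function as the two conversions, so function values cross the boundary as ordinary object-typed pointers.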
When compiling a function call, a C compiler performs various promotions and conversions, which are not available in our simple reimplementation of the call logic. By sidestepping the usual symbol resolution process we also lose the ability to use tools like nm and objdump to determine how functions are used. The second of these problems is reminiscent of the difficulties with the function that computes structure layout (§2.3.1), which also suggests the cure. Instead of basing our implementation of foreign on an interpretation of the type provided by the user, we will use the type description to generate both C code which can be checked against the API and OCaml code which we will link into the program. The details of the workflow are a little different for binding functions than for retrieving details about object layout: we are dealing with link-time function addresses rather than compile-time struct offsets, so we cannot inline the results into the program. These differences aside, the broad pattern is similar. We first instantiate the Bindings functor (Figure 5) with implementations of FOREIGN that generate code, then link the code into the program with a further instantiation of Bindings (Figure 8b). Let us trace through the details of the staging. The Bindings functor in Figure 5 contains a binding to the gettimeofday function. The first instantiations of Bindings generate C and OCaml code. The generated C code (the gettimeofday_C implementation) converts OCaml representations of values to C representations, calls gettimeofday and transmits the return value representation back from C to OCaml.¹ If the user-specified type of gettimeofday is incompatible with the type declared in the C API then the C compiler will complain when building the generated source.
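The essence of this staging pattern is that the same Bindings functor is applied twice: once to an implementation of FOREIGN that generates code, and once to an implementation that links the generated code in. A hedged, heavily simplified sketch, in which the "generator" merely records the bound names and the fn/ctype machinery is reduced to stubs (all names here are illustrative):

```ocaml
(* Cut-down FOREIGN: only int-returning, zero-argument descriptions. *)
module type FOREIGN = sig
  type _ fn
  type _ res
  val returning : unit -> int fn
  val foreign : string -> 'a fn -> 'a res
end

(* The user's bindings, written once against the abstract signature. *)
module Bindings (F : FOREIGN) = struct
  let () = ignore (F.foreign "gettimeofday" (F.returning ()))
end

(* A generator interpretation: res is unit, and foreign records each
   bound name (standing in for emitting C and OCaml stubs). *)
module Generate = struct
  type _ fn = Fn
  type _ res = unit
  let names : string list ref = ref []
  let returning () = Fn
  let foreign name _ = names := name :: !names
end

(* Applying the functor drives the "generation" pass. *)
module _ = Bindings (Generate)

let () = assert (!Generate.names = [ "gettimeofday" ])
```

A second, linking implementation of FOREIGN would then be applied to the very same Bindings functor, which is what guarantees that the generated stubs and the calling code agree on the types.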
value cmeleon_gettimeofday(value a, value b)
{
  struct timeval *c = ADDR_OF_PTR(a);
  struct timezone *d = ADDR_OF_PTR(b);
  int e = gettimeofday(c, d);
  return Val_int(e);
}

¹ There are no calls to protect local variables from the GC because Cmeleon was able to statically determine that the GC cannot run during the execution of this function.

The generated OCaml module matches the FOREIGN signature. The central feature is a generated foreign function which scrutinises the type representation passed as argument and extracts raw addresses to pass to C:

```ocaml
let foreign : type a. string → a cfn → a =
  fun name t → match name, t with
  | "gettimeofday", Function (Pointer _, Function (Pointer _, Returning Int)) →
    (fun x1 x2 → gettimeofday_C x1.addr x2.addr)
```

Readers familiar with GADTs will recall the type refinement that takes place during the pattern match. Although the result type a is initially abstract, matching on the type representation reveals information about the type, so that the right-hand side of the first case is expected to be a function of type σ ptr → τ ptr → int for some types σ and τ. More precisely, the generated OCaml module has type:

```
FOREIGN with type α fn = α cfn and type α res = α
```

and so passing it as argument to the Bindings functor builds a module containing a callable gettimeofday function.

4. Advanced interpretations

We now briefly consider several more exotic interpretations of FOREIGN. We start with an inversion of the model to support exporting a C ABI from an OCaml interface description (§4.1), then describe how to support a cooperative asynchronous monadic interface (§4.2), and how to separate the address spaces of the OCaml and C runtimes behind a multi-process interface (§4.3).
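The refinement at work in the generated foreign can be shown in a self-contained example of our own: matching on a representation GADT reveals the hidden type equality, so each branch may return a differently-typed value while the function as a whole remains polymorphic.

```ocaml
(* A minimal GADT of type representations, analogous to the cfn codes. *)
type _ rep = RInt : int rep | RBool : bool rep

(* zero is polymorphic in a, yet each branch returns a concrete type:
   the match refines a to int or bool respectively. *)
let zero : type a. a rep -> a = function
  | RInt -> 0        (* in this branch, a = int *)
  | RBool -> false   (* in this branch, a = bool *)

let () =
  assert (zero RInt = 0);
  assert (zero RBool = false)
```

The generated foreign does exactly this at a larger scale: matching on the cfn representation refines the abstract result type a to the concrete function type of the stub being returned.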
4.1 An inversion: exporting C ABIs from OCaml code

Now that we’ve seen how to invert the call interpreter to support callbacks (§3.3) and how to stage the call interpreter to improve safety and speed (§3.4), the question naturally arises: Is there a use for an inverted, staged interpreter? It turns out that there is. The main use of Cmeleon is making C libraries available to OCaml programs. However, as the discoveries of disastrous bugs in widely-used C libraries continue to accumulate, the need for safer implementations of those libraries written in high-level languages such as OCaml becomes increasingly pressing. As we shall see, Cmeleon supports exposing OCaml code to C via an interpretation of FOREIGN that interprets the parameter of the res type as a value to consume rather than a value to produce. Specialising the res type of the FOREIGN signature (Figure 5) with a type that consumes α values gives the following type for foreign:

```ocaml
val foreign : string → (α → β) fn → ((α → β) → unit)
```

That is, a function which takes a name and a function description and consumes a function. This is just what we need in order to turn the tables: rather than a function which resolves and binds foreign functions, we now have a function which exports functions under specified names. Continuing our running example, suppose that we want to export a function whose interface matches gettimeofday. Just as before, we can reuse the binding from Figure 5, but this time we will instantiate res to produce a function exporter. As with the structure layout retriever (§2.3.2) and the staged call interpreter (§3.4) we will apply the functor multiple times – first to generate a C header and a corresponding implementation which forwards calls to OCaml callbacks, and then to produce an exporter which connects the C implementation with our OCaml functions.
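A hedged sketch of res-as-consumer: instead of resolving a name to a function, foreign registers the supplied OCaml function under that name. The string-keyed registry here is a stand-in for the generated callback array, and the monomorphic int -> int type sidesteps the GADT machinery of the real code; dispatch is a hypothetical entry point that a generated C stub would call.

```ocaml
(* A toy registry mapping exported names to OCaml functions. *)
let registry : (string, int -> int) Hashtbl.t = Hashtbl.create 8

(* foreign consumes a function: string -> 'descr -> ((int -> int) -> unit) *)
let foreign name _description = fun f -> Hashtbl.replace registry name f

(* The generated C stub would look up and invoke the registered function. *)
let dispatch name arg = (Hashtbl.find registry name) arg

let () =
  foreign "succ" `descr (fun x -> x + 1);
  assert (dispatch "succ" 41 = 42)
```

The point of the inversion is that the same binding description drives both directions: one interpretation produces callable functions, the other consumes functions to export.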
We saw in §2.2 that Cmeleon includes a pretty-printer that formats C type representations using the C declaration syntax. Applying the pretty-printer to the gettimeofday binding produces a declaration suitable for a header:

```c
int gettimeofday(struct timeval *, struct timezone *);
```

The generation of the corresponding C implementation proceeds similarly to the staged call interpreter, except that the roles of OCaml and C are reversed: the generated code converts arguments from C to OCaml representations, calls back into OCaml and converts the result back into a C value before returning it. The addresses of the OCaml functions exposed to C are stored in an array in the generated C code. The size of the array is determined by the number of calls to foreign in the functor – one, in this case. Back on the OCaml side we generate code to populate the array when the OCaml module is loaded, and index it by an enumeration data type callback whose type parameter specifies the types of the functions that we will store:

```ocaml
type _ callback = Gettimeofday : (address → address → int) callback
```

The generated foreign function pattern matches on the type to produce a function consumer, which passes the consumed function to register_callback:

```ocaml
let foreign : type a. string → a cfn → (a → unit) =
  fun name t → match name, t with
  | "gettimeofday", Function (Pointer _, Function (Pointer _, Returning Int)) →
    (fun f → register_callback Gettimeofday
      (fun x1 x2 →
        f {reftyp = timeval; addr = x1; managed = None}
          {reftyp = timezone; addr = x2; managed = None}))
```

4.3 A multi-process interpretation

Our final interpretation separates the address spaces of OCaml and C, forwarding each call to a separate process which contains the C library. Once again, this cross-process approach is straightforward to build from existing components.
Our data representation is based on C structs: for each foreign function Cmeleon outputs a struct with fields for function identifier, arguments and return value. The struct is built using the type representation constructors introduced earlier (§2.1) and printed using the generic Cmeleon pretty printer. These structs are then read and written by the generated C code in the two processes. Figure 9 shows the generation of components: besides the C and ML code generated for the staged interpreter, the cross-process interpretation also generates C code that runs in the remote process and a header file to ensure that the two communicants have a consistent view of the frame structs. Perhaps unsurprisingly, these cross-process calls involve some overhead, which we quantify in §5; nevertheless, having the option is very useful in circumstances where it is essential to prevent memory corruption [13].

5. Evaluation

We evaluate Cmeleon both quantitatively, via benchmarks of the various backends (§5.1), and qualitatively, through our experiences using it over the last two years, both ourselves (§5.2) and in the open-source community (§5.3).

5.1 Call Latency

To evaluate the overhead of Cmeleon, we wrote bindings for ten simple machine integer functions of arity 0 to 9 which return their last argument. Then, we interpreted these bindings both dynamically with libffi (Figure 10a) and statically through staged compilation (Figure 10b). We wrote two other modules satisfying the same signature with implementations using the traditional manual OCaml binding technique of manipulating OCaml values in C with preprocessor macros. The manual variation followed exactly the FFI directions in Chapter 19 of the OCaml 4.02.1 manual. The expert variation took advantage of various omissions, shortcuts, and undocumented annotations which preserve memory management invariants and are known to be safe but difficult to use correctly.
The libffi-interpreted bindings have a large overhead due to writing an ABI-compliant stack frame. Type traversal and directed frame construction for the bound symbol result in a call latency linear in the function’s arity. The static bindings are between 10 and 65 times faster than the dynamic bindings. Figure 10a also shows bindings staged to perform interprocess communication (IPC) via semaphores and shared memory in order to isolate the bound library’s heap from the main program (§4.3). As expected, the IPC introduces a call latency of several microseconds. Each test except staged IPC generation ran for 10s on an Intel Core i7-3520M CPU running at 2.9 GHz under Linux 3.14-2 x86_64. Staged IPC generation ran for 45s per test case to collect sufficient samples for a narrow distribution. All tests had a coefficient of determination, R², in excess of 0.98 and 95% confidence intervals of less than ±2%.

5.2 Binding Development

Before developing Cmeleon, we found writing OCaml foreign function bindings tedious and punctuated by the frustration and confusion of subtly violated representation invariants [25]. Since then, the modular style of Cmeleon, in which bindings may be staged or written by hand against the same descriptions, has supported the variety of interpretations described in this paper.

Security-critical bindings Recently, SSL library bindings have garnered considerable interest in the systems community. OCaml developers have used manual bindings to OpenSSL from the Liquidsoap project for many years. The ocaml-ssl library was subsequently wrapped in the Lwt cooperative threading monad to be used asynchronously. The binding and the asynchronous wrapper have both been subject to ongoing issues in language runtime handling arising from the manually written C FFI bindings. Recently, the async_ssl library has used Cmeleon to quickly and correctly bind to OpenSSL and directly map that binding into the Async cooperative threading monad.
This library is currently in use commercially at Jane Street Capital. One of the key motivations behind the development of the out-of-process interpretation in Cmeleon (§4.3) has been a lack of confidence in OpenSSL’s memory safety, which in turn compromises OCaml code that calls into it.

Function call interposition Of the five commercial Cmeleon users, Cryptosense SA is likely the most demanding in their combination of interpretations. Cryptosense uses staged bindings, both inverted and forward, with dynamic callbacks to interpose tracing for the PKCS#11 C API for testing the safety of applications that use hardware security modules (HSMs), smartcards, or other cryptography providers. By using Cmeleon, Cryptosense writes their Cryptosense App Tracer product in type-safe OCaml while operating in a C linking environment [6]. High-level, type-safe code can now be used to build very low-level function call interposition that would otherwise be error-prone and difficult to debug.

Binary protocol implementation The profuse FUSE protocol library uses Cmeleon solely for its ability to represent the types of binary protocols and perform C structure layout queries. A previous library, ocamlfuse, used manual bindings to libfuse, the FUSE library for userspace file systems. Profuse improves on ocamlfuse by directly communicating with the OS kernel via a UNIX domain socket. This gives profuse the flexibility to stack FUSE file systems and manage asynchrony without incurring the overhead of full message parsing and libfuse-managed asynchrony. This use of Cmeleon’s type representation and layout query features is only possible due to the modular embedding of the C type system. A DSL-based generator would be much harder to repurpose.

Unikernel compilation Unikernels are a technique to compile specialised applications that run directly on a hypervisor instead of requiring an intervening guest operating system [19, 20].
The MirageOS unikernel system is written in OCaml, and Cmeleon is used to provide a safe mechanism to link and cross-compile C code into a single-address-space Xen virtual machine image. For example, the nocrypto C library provides the cryptographic primitives used by the clean-slate TLS stack that is otherwise written in OCaml. The type safety of a Xen unikernel critically depends on the C trusted computing base being bug-free, and Cmeleon eliminates the need for a significant amount of manually written C FFI code that translates between OCaml and C representations. The bindings currently use the staged C stub generation, but because they are parameterised over interpretations, it would also be easy to add support for inter-virtual-machine function calls, in a similar fashion to inter-process calls within Unix.

6. Influences and related work

We have noted various related work during the exposition. Here we list some additional work which has directly influenced the design of Cmeleon. The decision to represent foreign types as first-class values in Cmeleon was inspired by several existing FFIs, including Python’s ctypes, Common Lisp’s CFFI and Standard ML’s NLFFI [5], each of which also takes this approach. Cmeleon follows NLFFI’s approach in indexing representations of C types and values by host language types in order to ensure internal consistency (although OCaml’s GADTs, unavailable to the author of NLFFI, make it possible to avoid most of the unsafe aspects of the implementation of that library). However, Cmeleon departs from these libraries in abstracting the declaration of C types from the mechanism used to retrieve information about those types, using OCaml’s higher-order module system to perform the abstraction and subsequent selection. Central to Cmeleon is the use of functors to abstract over interpretations of the TYPE and FOREIGN signatures.
Carette et al. [7] use functors in a similar way, first abstracting over the interpretation of an embedded object language (lambda calculus), then developing a variety of increasingly exotic interpretations which perform partial evaluation, CPS translation and staging of terms. We suggested (§2.1) an analogy between our ty type together with its constructors and the use of universes in the dependently-typed programming community. Altenkirch and McBride [1] use universes directly to represent the types of one programming language (Haskell) within another (OLEG) and then to implement generic functions over the corresponding values. As we have observed (§2.1), mapping codes to types and their interpretations by abstracting over a parameterised type constructor is a well-known technique in the (non-dependently-typed) generic programming community. Hinze [14] describes a library for generic programming in Haskell with a type class that corresponds quite closely to the TYPE signature of §2, except that the types described are Haskell’s, not the types of a foreign language. There is a close connection between Haskell’s type classes and ML’s modules, and so Karvonen’s implementation of Hinze’s approach in ML [15] corresponds even more directly to this aspect of Cmeleon’s design.

7. Discussion and Conclusions

The unification of staging with the OCaml foreign function interface has been remarkably successful, with many formerly unstable library bindings now simpler, more reliable and more flexible when ported to Cmeleon. The internal complexity of the implementation was well-protected by OCaml’s type system (notably GADTs), and the use of OCaml’s functors to encode program stages scaled extremely well.
We have found the higher-order and first-class aspects of OCaml’s module system particularly valuable; although we have not shown the actual applications of the various interpretations, a typical application involves passing a functor containing bindings to a Cmeleon function as a first-class module (package) [11]. Although the representation of C types as first-class values has been used in previous work (e.g. [5]), the organisation of binding strategies into a cohesive system of staged functors is novel in Cmeleon, and we hope to see it built into other high-level languages in the future by using the abstraction facilities available there. While OCaml’s advanced module system has proved invaluable in the design and implementation of Cmeleon, it is likely that the essential elements of the library can be replicated without too much difficulty in languages with support for higher-kinded polymorphism, such as Haskell and Scala, or in untyped languages such as Python and Ruby. A little further afield, Java’s Project Panama is a proposed replacement for the much-maligned Java JNI, and could use many of the binding strategies described in this paper. One possible approach is to directly port Cmeleon to Java via the OCamlJava [10] backend. Conversely, many of the software fault isolation strategies proposed in the literature for improving the JNI could also be implemented as Cmeleon stages [26] to improve the performance of our address space separation. Wedge also offers primitives for privilege separation that could be provided by Cmeleon [4], with the additional benefit of not requiring further porting of the bindings. Cmeleon bindings built by users also benefit from the entire range of binding strategies that we have implemented, most notably the ability to hold suspect foreign libraries in a separate address space.
Cmeleon guides binding authors to be explicit about memory ownership for this reason, and we plan to extend the typing of the multiprocess interface to effectively expose a capability system with (what amounts to) typed process identifiers. If the user does not require the multiprocess model, then the staging optimises away any overheads. Industrial users of Cmeleon have commented that it entirely supplants the need for them to write manual C bindings, even for high-performance use cases such as cryptography or financial trading strategies, and our experimental results in this paper confirm this. Cmeleon bindings are written at a fairly high level of abstraction. However, there is still sufficient overhead involved in writing out all the definitions necessary for binding to a large API that automating the construction of binding descriptions is an attractive prospect. We have experimented with using the CIL C parser [21] to import C header files directly into Cmeleon and with interrogating DWARF debug information to extract types from compiled objects. Making these work robustly and expose clean OCaml interfaces is the topic of future work, perhaps based on similar work in this space [24]. Cmeleon is building up momentum in the open-source community, and has been ported beyond Linux to OpenBSD, FreeBSD, Mac OS X, Windows and the Android and iPhone mobile environments. The existing binding strategies are being extended into more exotic environments, such as remoting library calls across virtual machine boundaries for use with unikernels. Type definitions are being written over the base C types to cover language runtimes such as Python, leading to the prospect of safe, well-typed FFIs directly between two host languages without writing a single line of C code.

Acknowledgements

We thank our colleagues Leo White, Thomas Gazagnaire, Stephen Kell, Mark Florisson, Stephen Dolan and Alan Mycroft for valuable feedback on this paper.
Jeremiah Dimino, Mark Shinwell and Yaron Minsky (Jane Street), Thomas Braibant and Graham Steel (Cryptosense), Dave Scott, Rob Hoes and Mike McClurg (Citrix), Török Edwin (Skylable), Peter Zotov, David Kaloper, Hannes Mehnert and Daniel Bünzli contributed code and design feedback by adopting the library in their projects. A. Hauptmann and Thomas Leonard ported the library to Windows and Xen. A full list of contributors and the Cmeleon source code are available at https://github.com/ocamlabs/ocaml-c-types. The research leading to these results received funding from the European Union’s Seventh Framework Programme FP7/2007–2013 under the User Centric Networking project (grant agreement no. 611001) and was supported by Horizon Digital Economy Research, RCUK grant EP/G065802/1.

References
On the “Naturalness” of Buggy Code Ray, Baishakhi; Hellendoorn, Vincent; Godhane, Saheel; Tu, Zhaopeng; Bacchelli, Alberto; Devanbu, Premkumar DOI 10.1145/2884781.2884848 Publication date 2016 Document Version Peer reviewed version Published in Citation (APA) Important note To cite this publication, please use the final published version (if applicable). Please check the document version above. On the “Naturalness” of Buggy Code Baishakhi Ray†§, Vincent Hellendoorn†, Alberto Bacchelli†, Saheel Godhane†, Premkumar Devanbu† †University of Virginia rayb@virginia.edu §University of California, Davis {vjhellendoorn,srgodhane,ptdevanbu}@ucdavis.edu tuzhaopeng@gmail.com ‡University of Technology A.Bacchelli@tudelft.nl ABSTRACT Real software, the kind working programmers produce by the kLOC to solve real-world problems, tends to be “natural”, like speech or natural language; it tends to be highly repetitive and predictable. Researchers have captured this naturalness of software through statistical models and used them to good effect in suggestion engines, porting tools, coding standards checkers, and idiom miners. This suggests that code that appears improbable, or surprising, to a good statistical language model is “unnatural” in some sense, and thus possibly suspicious. In this paper, we investigate this hypothesis. We consider a large corpus of bug fix commits (ca. 7,139), from 10 different Java projects, and focus on its language statistics, evaluating the naturalness of buggy code and the corresponding fixes. We find that code with bugs tends to be more entropic (i.e. unnatural), becoming less so as bugs are fixed. Ordering files for inspection by their average entropy yields cost-effectiveness scores comparable to popular defect prediction methods. 
At a finer granularity, focusing on highly entropic lines is similar in cost-effectiveness to some well-known static bug finders (PMD, FindBugs), and ordering warnings from these bug finders using an entropy measure improves the cost-effectiveness of inspecting code implicated in warnings. This suggests that entropy may be a valid, simple way to complement the effectiveness of PMD or FindBugs, and that search-based bug-fixing methods may benefit from using entropy both for fault-localization and searching for fixes.

1. INTRODUCTION
Our work begins with the observation by Hindle et al. [22] that “natural” code in repositories is highly repetitive, and that this repetition can be usefully captured by language models originally developed in the field of statistical natural language processing (NLP). Following this work, language models have been used to good effect in code suggestion [22, 48, 53, 15], cross-language porting [38, 37, 39, 24], coding standards [2], idiom mining [3], and code de-obfuscation [47]. Since language models are useful in these tasks, they are capturing some property of how code is supposed to be. This raises an interesting question: What does it mean when a code fragment is considered improbable by these models? Language models assign higher naturalness to code (tokens, syntactic forms, etc.) frequently encountered during training, and lower naturalness to code rarely or never seen. In fact, prior work [7] showed that syntactically incorrect code is flagged as improbable by language models. However, by restricting ourselves to code that occurs in repositories, we still encounter unnatural, yet syntactically correct code; why? We hypothesize that unnatural code is more likely to be wrong; thus, language models can actually help zero in on potentially defective code. This notion appears plausible: highly experienced programmers can often intuitively zero in on “funny-looking” code when trying to diagnose a failure.
If statistical language models could capture this capability, then they could be a useful adjunct in a variety of settings: they could improve defect prediction; help provide an improved priority ordering for static analysis warnings; improve the performance of fault-localization algorithms; or even recommend “more natural” code to replace buggy code. To investigate this phenomenon, we consider a large corpus of 7,139 bug fix commits from 10 different projects and focus on its language statistics, evaluating the naturalness of defective code and whether fixes increase naturalness. Language models can rate probabilities of linguistic events at any granularity, even at the level of characters. We focus on line-level defect analysis, giving far finer granularity of prediction than typical statistical defect prediction methods, which most often operate at the granularity of files or modules. In fact, this approach is more commensurate with static analysis or static bug-finding tools, which also indicate potential bugs at the line level. For this reason, we also investigate our language model approach in contrast and in conjunction with two well-known static bug finders (namely, PMD [10] and FindBugs [14]). Overall, our results corroborate our initial hypothesis that code with bugs tends to be more unnatural. In particular, the main findings of this paper are:

1. Buggy code is rated as significantly more “unnatural” (improbable) by language models.
2. This unnaturalness drops significantly when buggy code is replaced by fix code.
3. Furthermore, we find that the above effects are substantially stronger when:
   - the buggy code fragment is shorter (fewer lines), and
   - the bug is “short-lived”, viz. more quickly fixed.
4. Using cost-sensitive measures, inspecting “unnatural” code indicated by language models works quite well: performance is comparable to that of the static bug finders FindBugs and PMD.
5.
Ordering warnings produced by the FindBugs and PMD tools by the “unnaturalness” of the associated code significantly improves the performance of these tools.

Our experiments are mostly done with Java projects, but we have strong empirical evidence indicating that the first two findings above generalize to C as well; we hope to confirm the rest in future work.

2. BACKGROUND
Our main goal is evaluating the degree to which defective code appears “unnatural” to language models and the extent to which this can enable programmers to zero in on bugs during inspections. Furthermore, if language models can help pinpoint buggy lines, we want to identify how their performance and applicability relate to commonly used fault-detection methods. To this end, we explore the application of language models, first to file-level defect detection (comparing with statistical defect prediction methods) and then to line-level defect prediction, comparing their performance with popular Static Bug Finders (SBF). In this section, we present the relevant technical background and the main research questions.

2.1 Language Modeling
Language models assign a probability to every sequence of words. Given a code token sequence \( S = t_1 t_2 \ldots t_N \), a language model estimates the probability of this sequence occurring as a product of a series of conditional probabilities for each token’s occurrence:
\[ P(S) = P(t_1) \cdot \prod_{i=2}^{N} P(t_i|t_1, \ldots, t_{i-1}) \]
\( P(t_i|t_1, \ldots, t_{i-1}) \) denotes the chance that the token \( t_i \) follows the previous tokens, the prefix \( h = t_1, \ldots, t_{i-1} \). These probabilities are impractical to estimate, due to the huge number of possible prefixes. A common fix is the ngram language model, which uses the Markov assumption to condition on just the preceding \( n-1 \) tokens.
\[ P_{ngram}(t_i|h) = P(t_i|t_{i-n+1}, \ldots, t_{i-1}) \]
This we estimate from the training corpus as the fraction of times that \( t_i \) follows the prefix \( t_{i-n+1}, \ldots, t_{i-1} \). The direction is reversible: we can also compute the probability of each token given its suffix (the subsequent tokens). We compute entropies based on both prefix and suffix token sequences to better identify the buggy lines (Section 3.3). Ngram language models can effectively capture the regularities in source code and have been applied to code suggestion tasks [22, 2]. Tu et al. [53] improved such language models by exploiting the fact that software tends to be repetitive in a local context. They introduced a cache language model (\$gram) that deploys an additional cache (a list of ngrams extracted from the local context) to capture local regularities. In the cache model, the ngrams extracted from each file under test form its local context. We use the state-of-the-art \$gram model to judge the “unnaturalness” (measured as cross-entropy) of lines of code.

2.2 Line-level defect detection: SBF
Static Bug-Finders (SBF) use syntactic and semantic properties of source code to locate common errors, such as null pointer dereferencing and buffer overflows. They rely on methods ranging from informal heuristic pattern-matching to formal algorithms with proven properties; they typically report warnings at build time. Most of the pattern-matching tools [10, 14, 13, 9] require users to specify the buggy templates. Others [51, 32] can automatically infer rules by mining existing software; they raise warnings if violations of the rules occur. Most (e.g., PMD and FindBugs) are unsound, yet fast and widely used, compared to more formal approaches. Generally, SBF produce false positives and false negatives, which reduce their cost-effectiveness [23, 26].
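To make the entropy scoring of Section 2.1 concrete, here is a minimal sketch: a maximum-likelihood bigram model with add-one smoothing, scoring each token in bits. The real \$gram model adds a local cache, higher-order ngrams, and backoff, all omitted here; the function names are illustrative, not those of the actual tool.

```python
import math
from collections import Counter

def train_bigram(tokens):
    """Count unigram and bigram frequencies over a training token stream."""
    unigrams = Counter(tokens)
    bigrams = Counter(zip(tokens, tokens[1:]))
    return unigrams, bigrams

def token_entropy(prev, tok, unigrams, bigrams, vocab_size):
    """Cross-entropy (bits) of `tok` given `prev`, with add-one smoothing
    so unseen continuations get a finite but high entropy."""
    p = (bigrams[(prev, tok)] + 1) / (unigrams[prev] + vocab_size)
    return -math.log2(p)

def line_entropy(line_tokens, context, unigrams, bigrams, vocab_size):
    """Average per-token entropy of a line, conditioning each token on its
    predecessor (the paper also uses suffix context, omitted here)."""
    toks = [context] + line_tokens
    ents = [token_entropy(toks[i], toks[i + 1], unigrams, bigrams, vocab_size)
            for i in range(len(line_tokens))]
    return sum(ents) / len(ents)
```

Trained on repetitive code tokens, a frequently seen continuation scores a fraction of a bit, while a rare one scores several bits, which is the "unnaturalness" signal the paper exploits.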
Both SBF and our model (fairly imperfectly) indicate potential defect locations; our goal is to compare these approaches and see whether they can be combined.

2.3 Evaluating Defect Predictions
We take the simplified view that SBF and \$gram are comparable, in that they both select suspicious lines of code for manual review. We therefore refer to language model based bug prediction as NBF (“Natural Bug Finder”). With either SBF or NBF, human code review effort, spent on lines identified as bug-prone, will hopefully find some defects. Comparing the two approaches requires a performance measure. We adopt a cost-based measure that has become standard: AUCEC (Area Under the Cost-Effectiveness Curve) [4]. Like ROC, AUCEC is a non-parametric measure that does not depend on the defects’ distribution. AUCEC assumes that cost is inspection effort and payoff is the number of bugs found. Given a model (SBF or NBF) that predicts buggy lines, we rank all the lines in decreasing order of their defect-proneness score. Thus, the best possible model would place the buggiest lines at the top of the list. This approach helps reviewers to inspect a smaller portion of code (i.e. cost), while finding a disproportionately larger fraction of defects (i.e. payoff). We normalize both cost and payoff to 100% and visualize the improvement that a prediction model provides over a random guess using a ‘lift-chart’ [55]. In this chart, cost (on the x-axis) is the percentage of the code-base inspected at prediction time, and payoff (on the y-axis) is the portion of the known bugs already in the code (discovered by data gathering) that are covered by warned lines. AUCEC is the area under this curve. Under a uniform bug distribution across SLOC, inspecting x% of lines of code at random will, in expectation, also yield x% of the bugs, i.e., random selection produces a diagonal line on the lift chart.
The corresponding AUCEC when inspecting 5% of lines at random is 0.00125.\textsuperscript{1} Inspecting 100% of SLOC in a project is probably unrealistic. Prior research has assumed that 5% (sometimes 20%) of the code could realistically be inspected under deadline [44]. Additionally, Rahman et al. compare SBF with DP (a file-level statistical defect predictor) by letting the number of warnings from SBF set the inspection budget (denoted AUCEC\(_L\)) [43]. They assign DP the same budget and compare the resulting AUCEC scores. We extend this approach to our comparison of SBF and NBF. To understand how NBF’s payoff varies with cost, we first measure its performance for both 5% and 20% inspection budgets. We then compare AUCECs of NBF and SBF at both the 5% budget and under the AUCEC\(_L\) budget. Finally, we investigate defect prediction performance under several credit criteria. A prediction model is awarded credit, ranging from 0 to 1, for each eventually-buggy line it flags as suspicious. Previous work by Rahman et al. compared SBF and DP models using two types of credit: full (or optimistic) and partial (or scaled) credit [43], which we adapt to line-level defect prediction. The former metric awards a model one credit point for each bug iff at least one line of the bug was marked buggy by the model. Thus, it assumes that a programmer will spot a bug as soon as one of its lines is identified as such. Partial credit is more conservative: for each bug, credit is awarded in proportion to the fraction of the bug’s defective lines that the model marked. Partial credit thus assumes that the probability of a developer finding a bug is proportional to the portion of the bug that is marked by the model.

\textsuperscript{1}Calculated as 0.5 * 0.05 * 0.05. This could be normalized differently, but we use this measurement consistently, so our comparisons work.
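The AUCEC bookkeeping just described can be sketched in a few lines. This is an illustrative simplification (trapezoid rule over a ranked list of lines, counting each buggy line individually rather than applying the per-bug credit schemes), not the paper's evaluation code; it does reproduce the 0.00125 figure for random inspection of 5% of lines.

```python
def aucec(ranked_is_buggy, budget=0.05):
    """Area under the cost-effectiveness curve when inspecting the top
    `budget` fraction of ranked lines.  x-axis: fraction of SLOC
    inspected; y-axis: fraction of all buggy lines found.
    `ranked_is_buggy` is 1/0 per line, best-scored lines first."""
    n = len(ranked_is_buggy)
    total_bugs = sum(ranked_is_buggy)
    k = int(n * budget)
    area, found, prev_y = 0.0, 0, 0.0
    for i in range(k):
        found += ranked_is_buggy[i]
        y = found / total_bugs
        area += (prev_y + y) / 2 * (1 / n)  # trapezoid slice of width 1/n
        prev_y = y
    return area
```

With 10% of lines buggy and bugs spread uniformly, the area at a 5% budget comes out near the diagonal's 0.00125; a perfect ranking (all bugs first) yields 0.0125 on the same data.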
It should be noted that AUCEC is non-parametric under partial credit, but not under full credit, where it depends on the defect distribution; however, we obtain the same overall result under both regimes.

2.4 Research Questions
Our central question is whether "unnaturalness" (measured as entropy, or improbability) indicates poor code quality. The abundant history of changes (including bug fixes) in OSS projects allows the use of standard methods [50] to find code that was implicated in bug fixes ("buggy code").

RQ1. Are buggy lines less "natural" than non-buggy lines?
RQ2. Are buggy lines less "natural" than bug-fix lines?
RQ3. Is "naturalness" a good way to direct inspection effort?
RQ4. How do SBF and NBF compare in terms of ability to direct inspection effort?
RQ5. Is "naturalness" a useful way to focus the inspection effort on warnings produced by SBF?

Finally, if SBF provides a warning on a line and it appears unnatural to a language model, we may expect that this line is even more likely a mistake. We therefore investigate whether naturalness is a good ordering for warnings provided by static bug-finders.

3. METHODOLOGY
We now describe the projects we studied, and how we gathered and analyzed the data.

3.1 Study Subject
We studied 10 OSS Java projects, as shown in Table 1; five of these projects are from Github, while the others are from the Apache Software Foundation. We chose projects from different domains to measure NBF’s performance in various types of systems. All projects are under active development. We analyzed NBF’s performance in two settings: Phase-I considers NBF’s ability to find bugs, based on continuous usage during active development (see Section 3.2).
We chose to analyze each project over the one-year period of its history that contained the most bug-fixes; here, we considered both development-time and post-release bugs. Then, for the chosen one-year duration, we extracted snapshots at 1-month intervals. A snapshot captures the state of the project at a given point in time. Thus, for each project we studied 12 snapshots, in total analyzing 120 snapshots across 10 projects, including 113,762 distinct file versions, and 35.3 million total non-commented source code lines (NCSL). Overall, we studied 7,139 distinct bug-fix commits comprising 2.2 million total buggy lines. Subsequently, we confirmed our results across the entire history of each studied project, using snapshots at 6-month intervals (157 snapshots, 63,301 commits, 23,664 bug-fixes). Due to page limitations, we present results from our study of 1-month snapshots only. Next, for Phase-II (see Section 3.2), we focused only on post-release bugs to evaluate NBF’s performance as a release-time bug prediction tool. We used the data set from Rahman et al.
[43], in which snapshots of the five Apache projects were taken at selected project releases. The project snapshot sizes vary between 68 and 630K NCSL. The bugs were extracted from Apache’s JIRA issue tracking system; the bug count per release, across all the projects, varies from 24 to 194 (see Table 2). We further used warnings produced by two static bug finding tools, FindBugs [5] and PMD [10], as collected by Rahman et al. PMD operates on source code and produces line-level warnings; FindBugs operates on Java bytecode and reports warnings at line, method, and class level.

<table>
<thead>
<tr> <th>Ecosystem</th> <th>Project</th> <th>Description</th> <th>Study Period</th> <th>#Files</th> <th>NCSL</th> <th>#Unique bug-fixes</th> </tr>
</thead>
<tbody>
<tr> <td>Github</td> <td>Atmosphere</td> <td>Web socket framework</td> <td>Oct-11 to Oct-12</td> <td>4,073</td> <td>427,901</td> <td>664</td> </tr>
<tr> <td></td> <td>Elasticsearch</td> <td>Distributed search engine</td> <td>Jul-14 to Jul-15</td> <td>30,977</td> <td>5,962,716</td> <td>498</td> </tr>
<tr> <td></td> <td>Facebook-android-sdk</td> <td>Android SDK for Facebook</td> <td>Dec-11 to Dec-12</td> <td>1,792</td> <td>172,695</td> <td>77</td> </tr>
<tr> <td></td> <td>Netty</td> <td>Network application framework</td> <td>Aug-12 to Aug-13</td> <td>8,618</td> <td>1,078,493</td> <td>530</td> </tr>
<tr> <td></td> <td>Presto</td> <td>SQL query engine</td> <td>Jul-14 to Jul-15</td> <td>10,769</td> <td>2,869,799</td> <td>346</td> </tr>
<tr> <td>Apache</td> <td>Derby</td> <td>Relational database</td> <td>Jul-06 to Jul-07</td> <td>7,332</td> <td>6,053,966</td> <td>1,352</td> </tr>
<tr> <td></td> <td>Lucene</td> <td>Text search engine library</td> <td>Jan-12 to Jan-13</td> <td>29,870</td> <td>7,172,714</td> <td>1,639</td> </tr>
<tr> <td></td> <td>OpenJPA</td> <td>Java Persistence API</td> <td>Jul-09 to Jul-10</td> <td>3,849</td> <td>4,869,620</td> <td>567</td> </tr>
<tr> <td></td> <td>Qpid</td> <td>Messaging system</td> <td>Apr-14 to Apr-15</td> <td>7,350</td> <td>4,665,159</td> <td>277</td> </tr>
<tr> <td></td> <td>Wicket</td> <td>Web framework</td> <td>Jul-10 to Jul-11</td> <td>9,132</td> <td>2,070,365</td> <td>1,189</td> </tr>
<tr> <td>Overall</td> <td></td> <td></td> <td>Sep-01 to Jul-14</td> <td>113,762</td> <td>35,343,428</td> <td>7,139</td> </tr>
</tbody>
</table>
Table 1: Summary data per project used in Phase-I

3.2 Data Collection
Phase-I. Here we describe how we identified the buggy lines in a snapshot corresponding to the bugs that developers fixed in an ongoing development process.

Estimating bug-fixing commits. Development-time bug fixes are often not recorded in an issue database. Thus, to estimate bug fixing activities during an ongoing development process, we analyzed the commit message associated with each commit over the entire project evolution and looked for error-related keywords. First, we converted each commit message to a bag-of-words and then stemmed the bag-of-words using standard natural language processing (NLP) techniques. Then, similar to Mockus et al. [33], we marked a commit as a bug-fix if the corresponding stemmed bag-of-words contains at least one of the error-related keywords: ‘error’, ‘bug’, ‘fix’, ‘issue’, ‘mistake’, ‘incorrect’, ‘fault’, ‘defect’, ‘flaw’, and ‘type’. This method was adapted from our previous work [46]. For the Apache projects, as well as Atmosphere and Netty, we further improved the classification with information available from the JIRA issue database. To evaluate the accuracy of the above classification, we manually verified the result for 300 commits (30 from each project, chosen randomly).
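The keyword-based commit classification described above can be sketched as follows. The toy suffix-stripping stemmer here stands in for the standard NLP stemming the authors used (e.g., Porter stemming), so its exact hits and misses differ from theirs; the function names are illustrative.

```python
import re

BUG_KEYWORDS = {'error', 'bug', 'fix', 'issue', 'mistake',
                'incorrect', 'fault', 'defect', 'flaw', 'type'}

def crude_stem(word):
    """Toy stemmer: strips a few common suffixes so that e.g.
    'fixed'/'fixes' reduce to 'fix' and 'bugs' to 'bug'."""
    for suffix in ('ing', 'ed', 'es', 's'):
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[:-len(suffix)]
    return word

def is_bug_fix(commit_message):
    """Mark a commit as a bug-fix if its stemmed bag-of-words contains
    any of the error-related keywords."""
    words = re.findall(r'[a-z]+', commit_message.lower())
    return any(crude_stem(w) in BUG_KEYWORDS for w in words)
```

As in the paper's protocol, such a classifier is deliberately simple and is validated afterwards by manually inspecting a sample of the flagged commits.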
Here, we only evaluated whether the author of a presumed bug-fix commit really marked their commit as a bug-fix. Out of these 300 commits, 288 were classified correctly (96%), 3 commits (1%) were described as a potential “issue” (thus may have developed into a bug later) and 9 commits (3%) were classified incorrectly—5 false negatives and 4 false positives. Thus, our approach achieves 96% accuracy (95% conf.int.: 93.8% to 98.2%). Selecting snapshots. To evaluate NBF in continuous active development, ideally we need to study all the commits made to a project for its full history. But using git blame to get line-level bug data at this scale is not feasible. Thus, we chose 1-year evaluation periods for each project. Since our focus is studying bugs, we considered the one-year period of each project that contained the most bug fixes. Within these periods, we looked at monthly snapshots thereby simulating near-continuous usage of our tool. For instance, snapshots were taken at 1-month intervals between 2006-07-06 and 2007-07-06 for project Derby (see Table 1). Identifying buggy lines in a snapshot. This part consists of three steps: (1) identifying lines related to a bug, (2) identifying commits that introduced the bug, and (3) mapping the buggy lines to the snapshots of interest. In step (1), we assumed that all the lines deleted (or changed) in a bug-fix commit were buggy lines. To find these lines, we looked at the versions of a file before and after a bug-fix. We used git diff to identify the deleted/changed lines in the old version and marked them as ‘buggy’; the added/changed lines in the new version are marked as ‘fixed’. Next, in step (2), we used git blame to locate the commits that had introduced these buggy lines in the system. The first two steps are analogous to the SZZ algorithm [50]. Once we know where the buggy lines originated, we used git-blame with ‘--reverse’ option to locate these lines in the snapshots of interest. 
This step ‘maps’ the buggy lines to specific snapshots. Figure 1 explains the procedure, where commit c4 is a bug-fix commit. The corresponding buggy lines (marked red in the old version) are found to originate from two earlier commits, c1 and c2. We then map the buggy lines from c1 to both S1 and S2, whereas the buggy lines from c2 are mapped only to S2. Note that we considered all the bugs that appeared at any time in the entire evolution and mapped them back to the snapshots of interest. However, we lose the buggy lines that were fixed before, or arose after, our study period, as we cannot map these to any snapshots of interest. We also miss some transient bugs that appeared and were fixed within a snapshot interval (thus lasting less than a month). At the end of this step, we know exactly which lines in each of our snapshots are buggy (and were fixed in some future commit) and which ones are benign, modulo time-window censoring effects.

Phase-II. In Phase-II, we studied post-release bugs for the Apache projects, using the dataset of Rahman et al. [43]. Rahman et al. selected a number of release versions of each Apache project and, for each release, identified post-release bug-fix commits from the JIRA issue tracking system. They then identified buggy and non-buggy lines for each release version similarly to steps 2 and 3 of the previous section.

3.3 Entropy Measurement
Choice of language model. We measured entropy using Tu et al.’s cache-based language model (\$gram) tool [53], as described in Section 2.1. We computed the entropy over each lexical token in all the Java files of all the snapshots. For a given file in a snapshot, the tool estimates a language model on all the other files of the same snapshot. It then builds a cache by running the language model on the given file, computing the entropy of each token based on both prolog (preceding tokens) and epilog (succeeding tokens).
Finally, based on the training set and the locally built cache, the tool computes the entropy of each token of the file; line and file entropies are computed by averaging over all the tokens belonging to a line, and over all the lines of a file, respectively. To generate entropies of the fixed lines, we leveraged the dataset gathered for the entire evolution period at 6-month intervals, as mentioned in Section 3.1. This was necessary because a bug may get fixed after our studied period. For each bug-fix commit, we trained the model on the immediately preceding snapshot and tested it on the new file version corresponding to the bug-fix.

Determining parameters for the cache language model. Several factors of locality can affect the performance of the cache language model: cache context, cache scope, cache size, and cache order. In this work, we built the cache on the entire file under investigation, so we only needed to tune cache order (i.e., the maximum and minimum order of ngrams stored in the cache). In general, longer ngrams are more reliable but quite rare, thus we backed off to shorter matching prefixes (or suffixes) [25] when needed. We followed Tu et al. [53] in setting the maximum order of cache ngrams to 10. To determine the minimum back-off order, we performed experiments on the Elasticsearch and Netty projects, looking for optimal performance measured in terms of the entropy difference between buggy and non-buggy lines. The maximum difference was found at a minimum back-off order of 4 with no change in the back-off weight. Thus, we set the minimum back-off order to 4, and the back-off weight to 1.0.

Adjusting entropy scores. Language models can work for defect prediction at line granularity only if bug-prone lines are more entropic than benign ones: a non-buggy but high-entropy line is a false positive and worsens the language model’s performance at the prediction task.
For example, lines with previously unseen identifiers, such as package, class and method declarations, have substantially higher entropy scores on average. Vice versa, for-loop statements and catch clauses (being often repetitive) have much lower entropy scores. Such inter-type entropy differences do not necessarily reflect true bug-proneness. In fact, for-statements, though less entropic, are often more bug-prone than the more entropic import-declarations. This observation led us to use abstract-syntax-based line-types and to compute a syntax-sensitive entropy score. First, we used Eclipse’s JDT\(^2\) to parse an Abstract Syntax Tree (AST) of all files under consideration. Any line in a Java file is always contained in either a single AST node (e.g., the CompilationUnit root node) or several AST nodes in hierarchical order, e.g., a nested line within an if-statement, method-declaration, and class-declaration. For each line, its syntax-type is the grammatical entity associated with the lowest AST node encompassing the full line. Examples include statements (e.g., if, for, while, return), declarations (e.g., variable, structure, method, import) and other AST nodes that tend to span one line, such as switch cases and annotations. We then computed how much a line’s entropy deviates from the mean entropy of its line-type using a normalized Z-score:
\[ z_{\text{line,type}} = \frac{\text{entropy} - \mu_{\text{type}}}{\text{std}_{\text{type}}} \]
where \(\mu_{\text{type}}\) denotes the mean \$gram entropy of all the lines of a given type, and \(\text{std}_{\text{type}}\) denotes their standard deviation. This gives us a syntax-sensitive entropy model, \$gram+type. The above normalization essentially measures the extent to which a line is “unnatural” w.r.t. other lines of the same type. In addition, since not all line-types are equally buggy, we computed the relative bug-prominence of a type based on the fraction of bugs and total lines (LOC) it had in all previous snapshots.
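The per-type z-score normalization just described, combined with the bug-weight scaling defined next, can be sketched as follows; this is a simplified illustration under our own naming, not the paper's tool.

```python
from collections import defaultdict
from statistics import mean, stdev

def gram_wtype_scores(lines, bug_history):
    """lines: list of (line_type, entropy) pairs for the snapshot under
    test.  bug_history: {line_type: (buggy_line_count, total_loc)} from
    all previous snapshots.  Returns one $gram+wType-style score per
    line: the per-type z-score of its entropy, scaled by the type's
    relative bug-prominence."""
    by_type = defaultdict(list)
    for t, e in lines:
        by_type[t].append(e)
    mu = {t: mean(v) for t, v in by_type.items()}
    sd = {t: stdev(v) if len(v) > 1 else 1.0 for t, v in by_type.items()}

    # Relative bug-prominence: each type's bug rate, normalized over types.
    rate = {t: b / loc for t, (b, loc) in bug_history.items()}
    total = sum(rate.values())
    w = {t: r / total for t, r in rate.items()}

    return [w.get(t, 0.0) * (e - mu[t]) / sd[t] for t, e in lines]
```

With this scaling, a moderately entropic line of a historically buggy type (e.g., a for-statement) can outrank a highly entropic line of a rarely buggy type (e.g., an import-declaration), which is exactly the adjustment the normalization is after.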
Here we used the previous snapshots as a training set and computed the bug-weight of a line-type as:
\[ w_{\text{type}} = \frac{b_{\text{type}} / LOC_{\text{type}}}{\sum_{\text{type}} b_{\text{type}} / LOC_{\text{type}}} \]
where the bugs and LOCs per type were counted over all previous snapshots. We then scaled the Z-score of each line by its weight \(w\) to obtain our final model, which we name \$gram+wType. The Phase-I and Phase-II data sets and the entropy generation tool are available at http://odd-code.github.io.

4. EVALUATION
This section discusses the answers to the research questions introduced in Section 2.4. The first two RQs are primarily based on the Phase-I data set. RQ3 uses Phase-II data to evaluate NBF’s capability as a file-level defect predictor. Line-level defect prediction is evaluated using both data sets (RQ3, RQ4, and RQ5). We begin with the question that is at the core of this paper:

\(^2\)Java development tools, http://www.eclipse.org/jdt/

<table> <thead> <tr> <th>Bug-fix threshold</th> <th>buggy vs. non-buggy (RQ1)</th> <th>buggy vs.
fixed (RQ2)</th> <th>Unique bugs detected (%)</th> </tr> </thead> <tbody>
<tr> <td>1</td> <td>1.95 to 2.00 (0.61)</td> <td>1.58 to 1.67 (0.60)</td> <td>14.92</td> </tr>
<tr> <td>2</td> <td>1.68 to 1.72 (0.53)</td> <td>1.43 to 1.51 (0.52)</td> <td>23.93</td> </tr>
<tr> <td>4</td> <td>1.37 to 1.40 (0.43)</td> <td>1.16 to 1.23 (0.41)</td> <td>34.65</td> </tr>
<tr> <td>7</td> <td>1.15 to 1.17 (0.36)</td> <td>0.98 to 1.04 (0.34)</td> <td>43.94</td> </tr>
<tr> <td>15</td> <td>0.91 to 0.93 (0.29)</td> <td>0.75 to 0.81 (0.26)</td> <td>55.91</td> </tr>
<tr> <td>20</td> <td>0.81 to 0.83 (0.25)</td> <td>0.62 to 0.68 (0.21)</td> <td>60.16</td> </tr>
<tr> <td>50</td> <td>0.58 to 0.59 (0.18)</td> <td>0.39 to 0.44 (0.13)</td> <td>71.68</td> </tr>
<tr> <td>100</td> <td>0.55 to 0.57 (0.12)</td> <td>0.36 to 0.41 (0.12)</td> <td>80.96</td> </tr>
<tr> <td>Overall</td> <td>0.86 to 0.87 (0.26)</td> <td>0.69 to 0.74 (0.19)</td> <td>100.00</td> </tr>
</tbody> </table>
Table 3: Buggy lines have higher entropy than non-buggy lines; buggy lines also have higher entropy than fixed lines; the entropy difference decreases as the bug-fix threshold increases. Each entry gives the 95% confidence interval of the entropy difference (in bits) followed, in parentheses, by Cohen’s d effect size. Entropy differences are measured with a t-test at the 95% confidence level and are statistically significant (p-value < 0.05). Cohen’s d effect size: 0.2 = ‘small’, 0.5 = ‘medium’, 0.8 = ‘large’.

RQ1. Are buggy lines less “natural” than non-buggy lines?
To evaluate this question, we compare the entropies of buggy and non-buggy lines for all the studied projects. A Wilcoxon non-parametric test confirms that buggy lines are indeed more entropic than non-buggy lines with statistical significance \((p < 2.2\times10^{-16})\). The average entropy of buggy lines is 6.21, while that of non-buggy lines is 5.34. However, the Cohen’s d effect size between the two is 0.26 (see the last row of Table 3), which is considered small. One explanation for the small effect size across bugs of all sizes is the impact of tangled bug fixes: lines that are changed in a bug-fix commit but are not directly related to the bug.
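The Cohen's d effect sizes reported throughout this section are the standardized mean difference with a pooled standard deviation; a minimal sketch:

```python
from statistics import mean, variance

def cohens_d(a, b):
    """Cohen's d with pooled standard deviation: the difference of group
    means, expressed in units of the groups' common spread."""
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * variance(a) + (nb - 1) * variance(b)) / (na + nb - 2)
    return (mean(a) - mean(b)) / pooled_var ** 0.5
```

By the conventional thresholds quoted in the table caption, d around 0.2 is a small effect, 0.5 medium, and 0.8 large.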
Herzig et al. [21] showed that around 17% of all source files are incorrectly associated with bugs due to such tangled changes. The impact of tangled changes on bug entropy is more visible for larger bug-fix commits. Some of the lines changed in a larger bug fix may not be directly associated with the erroneous lines and thus may not be “unnatural”. In contrast, for smaller bug-fix commits, say 1 or 2 lines of fix, the fixed lines (i.e., the lines deleted from the older version) are most likely to be buggy.

[Figure 2: (a) line entropy of buggy, fixed, and non-fixed lines at bug-fix threshold 7; (b) line entropy by bug duration group (low, medium, high).]

To understand the effect of tangled changes on the naturalness of buggy code, we compute the entropy difference between buggy and non-buggy lines at various bug-fix size thresholds. We define the bug-fix threshold as the number of lines in a file that are deleted from the older version in a bug-fix commit. Table 3 shows the result. Both the entropy difference and the effect size decrease as the bug-fix threshold increases. For example, for a threshold size of 1, entropies of buggy lines are on average 1.95 to 2.00 bits higher than those of non-buggy lines (95% confidence interval). The Cohen’s d effect size between the two lies between medium and large (0.61); 14.92% of the bug-fixes lie below this threshold. For a threshold size of 7, buggy lines have 1.15 to 1.17 bits higher entropy than their non-buggy counterparts; in this case, we see a small to medium effect (effect size = 0.36).

75% of all changes (both bug-fixes and feature implementations) in our data set contain no more than 7 lines of deletion.
This is also shown in Figure 2(a). At bug-fix threshold 100, we see only a 0.55 to 0.57 bit entropy difference, with a small effect size (0.17). These results indicate that the lines that are only indirectly associated with the real buggy lines in tangled bug-fix commits may have lower entropy, and thus diminish the overall entropy of buggy lines. We further observe that bugs that stay longer in a repository tend to have lower entropy than short-lived bugs. The bug duration of a buggy line is measured as the number of months until the buggy line is fixed, starting from the day of its introduction (bug_duration = bug-fix date minus bug-introduction date). The following table summarizes bug duration in our data set (in months):

<table> <thead> <tr> <th>Min.</th> <th>1st Qu.</th> <th>Median</th> <th>Mean</th> <th>3rd Qu.</th> <th>Max.</th> </tr> </thead> <tbody> <tr> <td>0.03</td> <td>8.33</td> <td>23.40</td> <td>29.03</td> <td>39.83</td> <td>125.6</td> </tr> </tbody> </table>

Based on bug duration, we divide all the buggy lines into three groups: low, medium, and high. The bugs in the low group “survived” for less than 9 months (1st quartile); bugs in the medium group survived from 9 to 24 months (1st quartile to median); the remaining bugs are in the high group. Figure 2(b) shows their entropy variation. The low group has significantly higher entropy than the medium and high groups, with Cohen’s d effect sizes of 0.62 and 0.75 respectively (medium to large). The difference is also confirmed by a Wilcoxon non-parametric test with statistical significance. The medium group is also slightly more entropic than the high group with statistical significance, although the effect size is very small (0.10). These results indicate that bugs that are fixed more quickly are more “unnatural” than longer-lived bugs.
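The duration-based grouping can be sketched as follows; the default cut-offs are the rounded quartiles reported above, and for other data the actual quartiles should be passed in.

```python
def duration_groups(durations, q1=9, median=24):
    """Split buggy-line durations (in months) into low/medium/high
    groups at the 1st-quartile and median cut-offs."""
    groups = {'low': [], 'medium': [], 'high': []}
    for d in durations:
        if d < q1:
            groups['low'].append(d)
        elif d < median:
            groups['medium'].append(d)
        else:
            groups['high'].append(d)
    return groups
```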
We hope to explore the reasons in future work: perhaps highly entropic bugs are easier to locate, diagnose, and fix, and thus get speedily resolved; or perhaps (more intriguingly) highly entropic bugs are more strongly associated with failures that are more likely to be quickly encountered by users. In summary, we have the overall result:

Result 1: Buggy lines, on average, have higher entropies, i.e., are “less natural”, than non-buggy lines.

A natural question is whether the entropy of the lines in a bug drops once the bug is fixed. This leads us to the following question:

RQ2. Are buggy lines less “natural” than bug-fix lines?

To answer RQ2, we collected the bug-fix commit patches of all the bugs that exist in any snapshot under study. In a bug-fix commit, the lines deleted from the original version are considered buggy lines and the lines added in the fixed version are considered fixed lines. We collected all such buggy and fixed lines for all the projects, as described in Section 3.2. Establishing a one-to-one correspondence between a buggy and a fixed line is hard, because buggy lines are often fixed by a different number of new lines. Hence, we compare the mean entropies between buggy and fixed lines across all the patches. A Wilcoxon non-parametric test confirms that the entropy of buggy lines, in general, drops after the bug-fixes with statistical significance (see Figure 2(a)). Similar to RQ1, tangled changes may also impact the entropies of bugs and their fixes. To measure the impact, we further compare the entropy differences between buggy and fixed lines at various bug-fix thresholds. Table 3 shows the result. Both the entropy difference and the effect size decrease as the bug-fix threshold increases.
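Extracting buggy and fixed lines from a bug-fix commit reduces to splitting its unified diff. A minimal sketch (function name and patch layout are our own, not the study’s extraction pipeline):

```python
def split_patch(patch_lines):
    # Lines deleted from the old version ('-') are treated as buggy;
    # lines added in the fixed version ('+') are treated as fixed.
    buggy, fixed = [], []
    for line in patch_lines:
        if line.startswith("---") or line.startswith("+++"):
            continue  # file-name headers, not content
        if line.startswith("-"):
            buggy.append(line[1:])
        elif line.startswith("+"):
            fixed.append(line[1:])
    return buggy, fixed
```

Mean entropies of the two resulting sets can then be compared per patch, sidestepping the lack of a one-to-one buggy-to-fixed line mapping.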
For example, at a bug-fix threshold of one line, entropy drops, upon fixing, by 1.58 to 1.67 bits on average (95% confidence interval, with statistical significance). Cohen’s d effect size is medium to large (0.60). However, with the threshold at 30, the mean entropy difference between the two is slightly more than half a bit, with a small effect size of 0.18. Such behavior suggests that tangled changes may be diluting the entropy of buggy code and its fixes. Bug duration also impacts the drop of entropy after a bug fix. For bugs with low duration, the entropy drop is significant: 2.68 to 2.75 bits on average (effect size: 0.50). For medium duration bugs, entropy drops by 0.09 to 0.18 bits (effect size: 0.04), while entropy does not necessarily drop for high duration bugs. Table 4 shows three examples of code where the entropy of buggy lines dropped significantly after bug-fixes. In the first example, a bug was introduced in Facebook-Android-SDK code due to a wrong initialization value: tokenInfo was incorrectly reset to null. This specific initialization rarely occurred elsewhere, so the buggy line had a rather high entropy of 6.07, which drops to 1.95 after the fix.

Table 4: Examples of bug fix commits that NBF detected successfully. These bugs evinced a large entropy drop after the fix. Bugs with only one defective line are shown for simplicity. The errors are marked in red, and the fixes are highlighted in green.
Example 1: Wrong Initialization Value
Facebook-Android-SDK (2012-11-20) - File: Session.java - Entropy dropped after bugfix: 4.12028

```java
if (newState.isClosed()) {
    // Before (entropy = 6.07042):
    this.tokenInfo = null;
    // After (entropy = 1.95014):
    this.tokenInfo = AccessToken.createEmptyToken(Collections.<String>emptyList());
}
```

Example 2: Wrong Method Call
Netty (2013-08-20) - File: ThreadPoolEventLoopGroup.java - Entropy dropped after bugfix: 4.6257

```java
if (isTerminated()) {
    // Before (entropy = 5.96485):
    terminationFuture.setSuccess(null);
    // After (entropy = 1.33915):
    terminationFuture.trySuccess(null);
}
```

Example 3: Unhandled Exception
Lucene (2002-03-15) - File: FSDirectory.java - Entropy dropped after bugfix: 3.87426

```java
if (!directory.exists()) {
    // Before (entropy = 9.213675):
    directory.mkdir();
    // After (entropy = 5.33941):
    if (!directory.mkdir())
        throw new IOException("Cannot create directory: " + directory);
}
```

...

Table 5: Examples of bug fix commits where NBF did not perform well. In Example 4, NBF could not detect the bug successfully (marked in red), and after the bugfix the entropy increased. In Example 5, NBF incorrectly detected the line as buggy due to its high entropy value.

Example 4: Wrong Argument (NBF could not detect)
Netty (2010-08-26) - File: HttpMessageDecoder.java - Entropy increased after bugfix: 5.75103

```java
if (maxHeaderSize <= 0) {
    throw new IllegalArgumentException(
        // Before (entropy = 2.696275):
        "maxHeaderSize must be a positive integer: " + maxChunkSize);
        // After (entropy = 8.447305):
        "maxHeaderSize must be a positive integer: " + maxHeaderSize);
}
```

Example 5: (NBF detected incorrectly)
Facebook-Android-SDK (multiple snapshots) - File: Request.java

```java
// Entropy = 9.892635
Logger logger = new Logger(LoggingBehaviors.REQUESTS, "Request");
```

In Example 4, the developer copied maxChunkSize from a different context but forgot to update the variable name. This is a classic example of a copy-paste error [45].
Since the statement related to maxChunkSize was already present in the existing corpus, the line was not surprising. Hence, its entropy was low although it was a bug. When the new corrected statement with maxHeaderSize was introduced, it increased the entropy. Similarly, in Example 5, the statement related to logger was newly introduced in the corpus. Hence, its entropy was higher despite not being a bug. However, for all bug-fix thresholds, a Wilcoxon non-parametric test confirms with statistical significance that the entropy of buggy lines is higher than the entropy of fixed lines. Overall, we see a 0.69 to 0.74 bit entropy drop after bug fixes, with a small effect size of 0.19. Thus, in summary:

Result 2: Entropy of the buggy lines drops after bug-fixes, with statistical significance.

Having established that buggy lines are significantly less natural than non-buggy lines, we investigate whether entropy can be used to direct inspection effort towards buggy code. We start with the following research question:

RQ3. Is “naturalness” a good way to direct inspection effort?

Baseline: Detecting Buggy Files. We first consider file-level defect prediction (DP), the de facto standard in the literature. Specifically, we evaluate whether ordering files by entropy will better guide us to identifying buggy files than traditional logistic regression and random forest based DP. DP is typically used at release time to predict post-release bugs [35, 57, 42, 36, 11]; so, for this comparison, we use the post-release bug data collected in Phase-II. DP is implemented using two classifiers, logistic regression (LR) and Random Forest (RF), where the response is a binary variable indicating whether a file is buggy or not. The predictor variables are the process metrics from [42, 11], such as developers, file commits, code churn, and previous bug history; prior research shows that process metrics are better predictors of file-level defects [42].
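The file-level baseline can be sketched with a hand-rolled logistic regression over process-metric feature vectors. This is illustrative only: the study trains standard LR/RF classifiers on the metrics of [42, 11], and every name below is our own assumption.

```python
import math

def train_logistic(X, y, epochs=500, lr=0.1):
    # Per-sample gradient descent; each row of X holds process metrics
    # for one file (e.g., churn), y marks whether the file was buggy.
    w = [0.0] * (len(X[0]) + 1)  # last weight is the bias term
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + w[-1]
            p = 1.0 / (1.0 + math.exp(-z))
            g = p - yi  # gradient of the log-loss w.r.t. z
            for j in range(len(xi)):
                w[j] -= lr * g * xi[j]
            w[-1] -= lr * g

    return w

def defect_score(w, x):
    # Defect-proneness score in (0, 1) for a file's metric vector.
    z = sum(wj * xj for wj, xj in zip(w, x)) + w[-1]
    return 1.0 / (1.0 + math.exp(-z))
```

Files are then ranked by this score (or, for the entropy baseline, by mean line entropy) and inspected top-down under a fixed SLOC budget.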
For each project, we train our model on one release and evaluate on the next: a defect-proneness score is assigned to every file under test. We repeat this procedure for all releases of the projects under study. A file’s entropy is measured as the average entropy of all the lines in that file. We rank the files in each release by entropy and by classifier prediction score. Figure 3 shows the normalized AUCEC performance of all the classifiers, similar to [4]. Here, the y-axis shows the AUCEC scores as a fraction of the “perfect” score (files ranked using an oracle) for each model. At the higher inspection budget of 20% of SLOC, the logistic regression and Random Forest DP models perform 56.5% and 30.4% better than the entropy-based model respectively. However, at the stricter inspection budget of 5% of SLOC, the entropy-based predictor performs 30% better than LR, and only 4.2% worse than RF (both measured relative to the entropy-based predictor). Detecting Buggy Lines. Having shown that entropy can help detect bug-prone files, we now focus on a finer granularity: can the entropy of a line of code be used to direct inspection effort towards buggy lines? Specifically, will ordering lines by entropy guide inspection effort better than ordering lines at random? In all our experiments, the random baseline chooses lines at random from the Non-Commented Source Lines (NCSL), picking just as many as NBF and SBF (in RQs 4 & 5). For the reasons outlined in Section 2.3, we evaluate the performance of entropy-ordering with the AUCEC scores at 5% and 20% of inspected lines (AUCEC\(_5\) and AUCEC\(_{20}\) for short), according to two types of credit: partial and full (in decreasing order of strictness). Since this is a line-level experiment, comparing AUCEC values here with the file-level optimum, as we did earlier in Figure 3, risks confusion arising from ecological inference [41], so we present the raw AUCEC scores without normalization.
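The cost-effectiveness machinery can be sketched as follows. This is an illustrative reading of AUCEC, approximating the area with per-item rectangles, not the study’s exact evaluation code.

```python
def file_entropy(line_entropies):
    # A file's entropy is the average entropy of its lines.
    return sum(line_entropies) / len(line_entropies)

def aucec(ranked, budget_fraction):
    # Area under the cost-effectiveness curve: x = fraction of SLOC
    # inspected (capped at the budget), y = fraction of bugs found.
    # `ranked` is a list of (sloc, bugs) pairs in inspection order.
    total_sloc = sum(s for s, _ in ranked)
    total_bugs = sum(b for _, b in ranked)
    budget = budget_fraction * total_sloc
    area = inspected = found = 0.0
    for sloc, bugs in ranked:
        take = min(sloc, budget - inspected)
        if take <= 0:
            break
        found += bugs * (take / sloc)  # bugs assumed uniform within an item
        area += (found / total_bugs) * (take / total_sloc)
        inspected += take
    return area
```

Ranking the same items by entropy versus at random, and comparing the resulting areas at the 5% and 20% budgets, reproduces the shape of the comparison in the text.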
We further calculate AUCEC for different bug-fix thresholds, as entropy is better at predicting smaller bug-fixes. When measuring AUCEC at a threshold of, say, n lines, we ignore bug-fixes spanning ≥ n lines. Performance is evaluated in terms of the percentage gain of AUCEC over random AUCEC:

\[ \text{gain} = \frac{\sum_{\text{project}}(\text{AUCEC} - \text{AUCEC}_{\text{random}})}{\sum_{\text{project}}\text{AUCEC}_{\text{random}}}. \]

Figure 4(a) shows AUCEC\(_{20}\) scores for partial credit, averaged over all projects at bug-fix threshold 7. Under partial credit, the default $gram$ model (without the syntax weighting described in §3.3) performs better than random, particularly at ≥ 10% of inspected lines. At 20% of inspected lines, $gram$ performs 41.95% better than random. Figure 4(b) focuses on the performance of the 10 studied projects, up to 5% of inspected lines. At this level, $gram$’s performance varies. For the projects Facebook, Netty, and Gzip, $gram$ performs significantly better than random; in the other projects, $gram$ performs similar to or worse than random. On closer examination, we found that some program constructs are intrinsically more entropic than others. For example, method declarations are often more entropic, because they are less frequent. This observation led us to consider syntactic line-type in bug prediction, as discussed in Section 3.3. Scaling the entropy scores by line type improves AUCEC\(_5\) performance in all but Facebook and Atmosphere, and significantly improves performance in all cases where $gram$ performed no better than random. Including the bugginess history of line-types ($gram+wType$) furthermore outperforms random and $gram$ in all but Lucene and Atmosphere, and achieves overall AUCEC\(_5\) scores 92.53% higher than random at bug-fix threshold 7. These results are similar under full credit (see Table 6). Since $gram+wType$ is the best-performing “naturalness” approach so far, we hereafter refer to it as $NBF$.

Table 6: Performance evaluation of $NBF$ against random for different bug-fix thresholds

<table> <thead> <tr> <th>Bugfix threshold</th> <th>Full Credit AUCEC\(_5\)</th> <th>Partial Credit AUCEC\(_5\)</th> </tr> </thead> <tbody> <tr> <td>2</td> <td>0.0035</td> <td>0.0016</td> </tr> <tr> <td>4</td> <td>0.0037</td> <td>0.0019</td> </tr> <tr> <td>7</td> <td>0.0038</td> <td>0.0023</td> </tr> <tr> <td>14</td> <td>0.0041</td> <td>0.0028</td> </tr> <tr> <td>all</td> <td>0.0051</td> <td>0.0051</td> </tr> </tbody> </table>

Table 6 further shows that $NBF$ performance worsens with larger bug-fix thresholds, for both AUCEC\(_5\) and AUCEC\(_{20}\). For example, for AUCEC\(_5\) with bug-fix threshold 2, we see 122.74% and 113.46% performance gains over random AUCEC for full and partial credit respectively. These gains drop to 47.38% and 87.77% at a threshold of 14.
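The pooled gain over random AUCEC amounts to a one-liner; an illustrative sketch under our own naming:

```python
def gain(aucecs, random_aucecs):
    # Percentage gain of a model's AUCEC over random selection,
    # pooled (summed) across projects before taking the ratio.
    return 100.0 * (sum(aucecs) - sum(random_aucecs)) / sum(random_aucecs)
```

Pooling the sums before dividing weights each project by its available credit, rather than averaging per-project ratios.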
Notice that, in Table 6, under Partial Credit, the random selection approach yields constant AUCEC\(_5\) and AUCEC\(_{20}\) scores, independent of the bug-fix threshold. Partial credit scoring assigns credit to each line based on the size of the bug-fix that it is part of (if any); thus, selecting 5% of lines at random should, in expectation, yield 5% of the overall credit that is available (see also Section 2.3). Full Credit, on the other hand, assigns the credit for detecting a bug as soon as a single line of the bug is found. Therefore, the AUCEC scores of a random selection method under full credit will depend on the underlying distribution of bugs: large bugs are detected with a high likelihood even when inspecting only a few lines at random, whereas small bugs are unlikely to be detected when inspecting 5% of lines without a good selection function. This is reflected in Table 6: as the bug-fix threshold increases, the random AUCEC scores increase as well. The $NBF$ approach, on the other hand, ex-

RQ4. How do $SBF$ and $NBF$ compare in terms of ability to direct inspection effort?

To compare $NBF$ with $SBF$, we use the $gram+wType$ model on the Phase-II data set. To investigate the impact of tangled changes, we choose the overall data set and a bug-fix threshold of 14 (roughly the fourth quartile of bug-fix sizes on this data set). Further, we select PMD and FindBugs from the pool of available $SBF$ tools, because they are popular and have been studied in previous research [43, 23, 26]. As discussed in Section 2.3, Rahman et al. developed a measure named AUCECL to compare $SBF$ and $DP$ methods on an equal footing [43]. In this method, the $SBF$ under investigation sets the line budget based on the number of warnings it returns, and the $DP$ method may choose a (roughly) equal number of lines. The models’ performance can then be compared by computing the AUCEC scores both approaches achieve on the same budget.
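The two credit schemes described above can be sketched directly (illustrative Python; data layout is our own assumption):

```python
def credit(inspected, bug_to_lines, mode):
    # Partial credit: each bug contributes the fraction of its lines that
    # were inspected. Full credit: a bug counts fully as soon as any one
    # of its lines is inspected.
    total = 0.0
    for lines in bug_to_lines.values():
        hit = sum(1 for l in lines if l in inspected)
        if mode == "partial":
            total += hit / len(lines)
        else:  # full credit
            total += 1.0 if hit else 0.0
    return total / len(bug_to_lines)
```

Under full credit, a random sample is likely to graze at least one line of a large bug, which is why the random baseline’s score grows with the bug-fix threshold; under partial credit, its expected score stays proportional to the budget.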
We follow this approach to compare $SBF$ with $NBF$. Furthermore, we also compare the AUCEC\(_5\) scores of the algorithms. For the $gram+wType$ model, this is analogous to the results in RQ3. To acquire AUCEC\(_5\) scores for the $SBF$, we simulate them as follows: first, assign each line the value zero if it was not marked by the $SBF$, and the value of the $SBF$ priority otherwise (1-2 for FindBugs, 1-4 for PMD); then, add a small random tie-breaker drawn from \(U[0, 1]\) to all line values and order the lines by descending value. This last step simulates a developer randomly choosing among the lines returned by the $SBF$: first those marked by the $SBF$ in descending (native, tool-assigned) priority, and within each priority level at random. We repeat the simulation multiple times and average the performance. Figures 5(a) and 5(b) show the AUCEC\(_5\) and AUCECL scores for PMD using partial credit at bug-fix threshold 14. The results for FindBugs were comparable, as were the results using full credit. As can be seen, performance varied substantially between projects and between releases of the same project. Across all releases, and under both AUCEC\(_5\) and AUCECL scoring, all models performed significantly better than random (paired t-test: \(p < 10^{-3}\)), with large effect (Cohen’s d > 1). SBF and NBF performed comparably; NBF performed slightly better when using both partial credit and the specified bug-fix threshold, but when dropping the threshold and/or using full credit, no significant difference remains between NBF and SBF. No significant difference in performance was found between FindBugs and PMD either. In all comparisons, all approaches retrieved relatively bug-prone lines, performing substantially better than random. In fact, at a 5% inspection budget, both the line-level NBF and the two SBF performed substantially better than the earlier presented DP method and the file-level NBF (compare Figure 5(a) and Figure 3).
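The tie-breaking simulation can be sketched as follows. This is illustrative: unflagged lines get value zero, flagged lines keep their native tool priority, and names are our own.

```python
import random

def sbf_ranking(line_priorities, seed=None):
    # Each line keeps its native SBF priority (0 if unflagged) plus a
    # U[0, 1) tie-breaker; lines are then inspected in descending order,
    # i.e., by priority first and at random within each priority level.
    rng = random.Random(seed)
    scored = [(prio + rng.random(), line)
              for line, prio in line_priorities.items()]
    scored.sort(reverse=True)
    return [line for _, line in scored]
```

Because the tie-breaker is strictly less than 1, it never reorders lines across priority levels; averaging AUCEC over repeated seeds approximates the expected behavior of a developer working through the warning list.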
**Result 4:** Entropy achieves comparable performance to commonly used Static Bug Finders in defect prediction. Notably, NBF had both the highest mean and standard deviation of the tested models, whereas PMD’s performance was most robust. This suggests a combination of the models: we can order the warnings of the SBF using the $gram+wType$ model. In particular, we found that the standard priority ordering of the SBF is already powerful, so we propose to re-order the lines within each priority category. **RQ5. Is “naturalness” a useful way to focus the inspection effort on warnings produced by SBF?** To answer this question, we again assigned values to each line based on the SBF priority as in RQ4. However, rather than add random tie-breakers, we rank the lines within each priority bin by the (deterministic) $gram+wType$ score. The results for PMD are shown in Figure 5, first using the AUCEC measure (5(a)) and then using the AUCECL measure (5(b)). PMD_Mix refers to the combination model as proposed. Overall, the combined model produced the highest mean performance in both categories. It significantly outperformed the two SBFs in all cases ($p < 0.01$) and performed similarly to the NBF model (significantly better on Lucene and QPID, significantly worse on Derby ($p < 0.05$), all with small effect). These results extended to the other evaluation methods, using full credit and/or removing the threshold for max bug-fix size. In all cases, the mix model was either significantly better or no worse than any of the other approaches when averaged over all the studied releases. We further evaluated ranking all warnings produced by the SBF by entropy (ignoring the SBF priorities) and found comparable but slightly weaker results. These results suggest that both NBF and SBF contribute valuable information to the ordering of bug-prone lines and that their combination yields superior results. **5. 
THREATS TO VALIDITY** **Internal Validity.** A number of threats to internal validity arise from the experimental setup. First, our identification of buggy lines could be wrong, as we used a simple keyword-based search to identify buggy commits (see Section 3.2). To minimize this threat, we manually evaluated our bug classification tool and found an overall accuracy of 96%. Another source of false negatives is the presence of (yet) undetected bugs that linger in the code base. Also, developers may “tangle” unrelated changes into one commit [20, 12]. However, given the high significance of the entropy difference between buggy and non-buggy lines, we consider it unlikely that these threats could invalidate our overall results. Furthermore, NBF’s positive performance on the (higher-quality, JIRA-based) Phase-II data set confirms our expectations regarding the validity of these results. A threat regarding RQ2 is the identification of ‘fixed’ lines, which replaced ‘buggy’ lines during a bug-fix commit. The comparisons between these categories could be skewed, especially when bug-fix commits replace buggy lines with a larger number of fixed lines. In fact, small bug-fixes do, on average, add more lines than they delete (the reverse holds for fixes spanning over 50 lines). However, a majority of bugs of any size were fixed with at most as many lines. In particular, more than two-thirds of one- and two-line bugs, which demonstrated the greatest decrease in entropy, were fixed with one and two lines respectively. These findings minimize the above threat. Other non-bug-fixing changes (e.g., introduction of a clone) may also show a drop in entropy with respect to the previous version. Our comparison of SBF and NBF assumes that indicated lines are equally informative to the inspector, which is not entirely fair; NBF just marks a line as “surprising”, whereas SBF provides specific warnings.
On the other hand, we award credit to SBF whether or not the bug has anything to do with the warning on the same lines; indeed, earlier work [52] suggests that warnings are often unrelated to the buggy lines they overlap. So this may not be a major threat to our RQ4 results. Finally, the use of AUCEC to evaluate defect prediction has been criticized for ignoring the cost of false negatives [56]; the development of better, widely accepted measures remains a topic of future research. **External Validity.** External validity concerns the generalizability of our results. To minimize this threat, we use systems from different domains, drawn from GitHub and Apache, having substantial variation in age, size, and ratio of bugs to overall lines (see Table 1). We also confirmed our overall result by studying the entire evolutionary history of all the projects, analyzed at 6-month snapshot intervals. Next, does this approach generalize to other languages? There is nothing language-specific about the implementation of the n-gram and $gram$ models (the $gram+wType$ model, however, does require parsing, which depends on the language grammar). Prior research showed that these models work well to capture regularities in Java, C, and Python [22, 53]. To investigate $NBF$’s performance for other languages, we performed a quick sanity check on three C/C++ projects (Libuv, Bitcoin, and Libgit), studying their evolution from November 2008 to January 2014. The results are consistent with those presented in Table 3 and Figure 2: buggy lines are between 0.87 and 1.16 bits more entropic than non-buggy lines at bug-fix threshold 15 (slightly larger than in Table 3). Also, the entropy of buggy lines at this threshold drops by nearly one bit after fixing. These findings strongly suggest that our results generalize to C/C++; we are investigating the applicability to other languages.
Finally, even though we showed good empirical results with our approach, this does not assure that it actually helps developers in their bug-finding efforts, as was shown in a similar scenario (with automated debugging techniques) by Parnin et al. [40]. To tackle this, the natural next step would be a controlled experiment with developers using our approach.

6. RELATED WORK

Statistical Defect Prediction. $DP$ aims to predict defects yet to be detected by learning from historical data of reported bugs in issue databases (e.g., JIRA). This is a very active area (see [8] for a survey), with even a dedicated series of conferences (i.e., PROMISE [1]). D’Ambros et al. survey and provide a direct comparison of a number of representative $DP$ approaches, including those using process metrics, such as previous changes [34, 17] and defects [27], and product metrics, such as code complexity metrics [6]. While earlier work evaluated models using IR measures such as precision, recall, and F-score, more recently non-parametric methods such as AUC and AUCEC have gained in popularity. D’Ambros et al. follow this trend and conduct an evaluation similar to ours. $DP$ may also work at finer granularity than files; Giger et al. presented a $DP$ at the level of individual methods [16]. We are the first to predict defects at the line level using only statistical models. Static Bug Finders. $SBF$ can work at fine granularity as opposed to $DP$, hence the comparison with our approach. The work most closely related to ours is by Rahman et al. [43]; comparing $DP$ performance with $SBF$, they reported that popular $SBF$ tools like FindBugs and PMD do not necessarily perform better than $DP$. We also find that $NBF$ can be used to rank $SBF$ warnings, but we are not the first to tackle this challenge. Kremenek et al. use z-ranking and a cluster-based approach to prioritize warnings based on the warnings’ previous success rate [29, 28].
Kim and Ernst mine information from code change history to estimate the importance of warnings [26]. Our approach ranks warnings based on properties of the source code rather than on the output of the $SBF$ or on whether and how warnings have been fixed in history. Future research could evaluate how our approach can complement the work above. Ruthruff et al. propose a filtering approach to detecting accurate and actionable $SBF$ warnings [49]. They use the priority of warnings as defined by the $SBF$, the type of error detected, and features of the affected file (e.g., size and warning depth) to do the filtering. Our approach ranks warnings on a different aspect of the source code than those they consider, and could be used to complement their model. Finally, Heckman et al. proposed Faultbench, a benchmark for the comparison and evaluation of static analysis alert prioritization and classification techniques [18], and used it to validate the Aware [19] tool for prioritizing static analysis tool warnings. Since the results of our approach are promising, further research could investigate our approach against this additional benchmark. The field of $SBF$ has advanced rapidly, with many developments: researchers identify new categories of defects and seek to invent methods to find these defects efficiently, either heuristically or through well-defined algorithms and abstractions. Since neither method is perfect, the actual effectiveness in practice is an empirical question. A comprehensive discussion of related work regarding $SBF$ and their evaluation can be found in Rahman et al. [43]. Inferring rules and specifications. Statistical language models capture the repetitive properties of languages, in our case programming languages, thus inferring the style, or even some latent specification, of how the language is supposed to be used in a specific project and context.
As such, our work is related to previous research that tries to automatically infer specifications and use them to identify outliers as probable defects. Kremenek et al. present a framework based on a probabilistic model (namely, a factor graph [31]) for automatically inferring specifications from programs, and use it to find missing and incorrect properties in a specification used by a commercial static bug-finding tool [30]. In our case, we allow errors to be localized without language-specific tuning, without defining the set of annotations to infer, and without modeling domain-specific knowledge. Wasylkowski et al. mine usage models from code to detect anomalies and violations in methods invoked on objects, and demonstrated that these can be used to detect software defects [54]. Similarly, Thummalapenta and Xie develop Alattin, an approach to mine patterns from APIs and detect violations [51]. In contrast, our model is less specialized and tries to highlight unexpected patterns at the token level, without focusing on the specific case of method invocations.

7. CONCLUSION

The predictable nature (“naturalness”) of code suggests that code that is improbable (“unnatural”) might be wrong. We investigate this intuition by using entropy, as measured by statistical language models, as a way of measuring unnaturalness. We find that unnatural code is more likely to be implicated in a bug-fix commit. We also find that buggy code tends to become more natural when repaired. We then turn to applying entropy scores to defect prediction and find that, when adjusted for syntactic variances in entropy and defect occurrence, our model is about as cost-effective as the commonly used static bug finders PMD and FindBugs. Applying the (deterministic) ordering of entropy scores to the warnings produced by these static bug finders yields the most cost-effective method. These findings suggest that entropy scores are a useful adjunct to defect prediction methods.
The findings also suggest that certain kinds of automated search-based bug-repair methods might do well to have the search in some way influenced by language models. In the near future, we plan to build extensions into PMD, FindBugs and other static bug finders that order warnings based on our $gram+wType$ regime. Further ahead, we plan to study other applications of these approaches including dynamic fault-isolation methods and automated bug-patching tools. Acknowledgment. This material is based upon work supported by the National Science Foundation under Grant No. 1414172. 8. REFERENCES
{"Source-Url": "https://pure.tudelft.nl/portal/files/9302738/icse2016a.pdf", "len_cl100k_base": 15404, "olmocr-version": "0.1.53", "pdf-total-pages": 13, "total-fallback-pages": 0, "total-input-tokens": 47166, "total-output-tokens": 18724, "length": "2e13", "weborganizer": {"__label__adult": 0.0003666877746582031, "__label__art_design": 0.0002722740173339844, "__label__crime_law": 0.0002868175506591797, "__label__education_jobs": 0.0008139610290527344, "__label__entertainment": 4.887580871582031e-05, "__label__fashion_beauty": 0.0001519918441772461, "__label__finance_business": 0.00019609928131103516, "__label__food_dining": 0.00026416778564453125, "__label__games": 0.0005121231079101562, "__label__hardware": 0.0005750656127929688, "__label__health": 0.0003736019134521485, "__label__history": 0.0001895427703857422, "__label__home_hobbies": 7.891654968261719e-05, "__label__industrial": 0.0002453327178955078, "__label__literature": 0.00024366378784179688, "__label__politics": 0.00020742416381835935, "__label__religion": 0.0003619194030761719, "__label__science_tech": 0.005329132080078125, "__label__social_life": 8.243322372436523e-05, "__label__software": 0.004119873046875, "__label__software_dev": 0.984375, "__label__sports_fitness": 0.0002627372741699219, "__label__transportation": 0.0003936290740966797, "__label__travel": 0.00017762184143066406}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 71237, 0.06513]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 71237, 0.27769]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 71237, 0.8963]], "google_gemma-3-12b-it_contains_pii": [[0, 837, false], [837, 6118, null], [6118, 13585, null], [13585, 18368, null], [18368, 24925, null], [24925, 33151, null], [33151, 39420, null], [39420, 45877, null], [45877, 52006, null], [52006, 57160, null], [57160, 64747, null], [64747, 71237, null], [71237, 
71237, null]], "google_gemma-3-12b-it_is_public_document": [[0, 837, true], [837, 6118, null], [6118, 13585, null], [13585, 18368, null], [18368, 24925, null], [24925, 33151, null], [33151, 39420, null], [39420, 45877, null], [45877, 52006, null], [52006, 57160, null], [57160, 64747, null], [64747, 71237, null], [71237, 71237, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 71237, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 71237, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 71237, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 71237, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 71237, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 71237, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 71237, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 71237, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 71237, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 71237, null]], "pdf_page_numbers": [[0, 837, 1], [837, 6118, 2], [6118, 13585, 3], [13585, 18368, 4], [18368, 24925, 5], [24925, 33151, 6], [33151, 39420, 7], [39420, 45877, 8], [45877, 52006, 9], [52006, 57160, 10], [57160, 64747, 11], [64747, 71237, 12], [71237, 71237, 13]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 71237, 0.12844]]}
olmocr_science_pdfs
2024-12-07
2024-12-07
e84e7d7982b5aaaf1c17151d97aa3e62c32d00f5
Disciplina: Blockchain for Education

Kirill Kuvshinov¹, Ilya Nikiforov², Jonn Mostovoy³, Dmitry Mukhutdinov⁴, Kirill Andreev⁵, and Vladislav Podtelkin⁶

¹,²Teach Me Please, https://teachmeplease.com
³,⁴,⁵,⁶Serokell, https://serokell.io

Abstract

In this paper we analyze the main issues that arise from storing educational records in a blockchain and propose the architecture of the Disciplina platform – a domain-specific blockchain implementation. The platform is designed to act as a decentralized ledger, with special regard for privacy and mechanisms of data disclosure. We present an overview of the main entities, their roles and their incentives to support the network. Please note that the project is a work in progress and the descriptions provided are subject to change.

1 Introduction

Recent advances in blockchain technology and decentralized consensus systems open up new possibilities for building untamperable domain-specific ledgers with no central authority. Since the launch of Bitcoin \cite{bitcoin}, blockchains have primarily been used as a mechanism for value transfers. With the growth of the Ethereum platform \cite{ethereum}, the community realized that by using a chain of blocks and consensus rules one can not only store value and track its movement, but, more generally, store some state and enforce conditions upon which this state can be modified. Bitcoin, Ethereum and other permissionless blockchains were developed under the assumption that everyone is free to join the network and validate transactions, which are public. However, the industry often requires privacy, and thus permissioned solutions with private ledgers came to exist. These solutions include Tendermint \cite{tendermint}, Hyperledger \cite{hyperledger}, Kadena \cite{kadena} and others.
The increased interest in blockchain technologies and their growing variety have led to an expansion of their application domains. The idea of storing educational records in a blockchain has been circulating in the press and in academic papers for several years. For example, \cite{online_education} and \cite{smart_contracts} focus on online education and propose to create a system based on educational smart contracts in a public ledger. Recently, Sony announced a project that aims at incorporating educational records in a permissioned blockchain based on Hyperledger \cite{sony}. The ledger is going to be shared between major offline educational institutes. The main issue these solutions have in common is that they target only a certain subset of the ways people get knowledge. We propose a more general approach that would unite the records of large universities, small institutes, schools and online educational platforms to form a publicly verifiable chain. Contrary to solutions like Ethereum, we do not aim at proposing a programmable blockchain that fits all possible applications. Rather, we believe that we should harness the knowledge that has emerged in the last few years in the fields of consensus protocols, authenticated data structures and distributed computations to offer a new domain-specific ledger. In this paper we introduce Disciplina — a platform based on blockchain technology that aims to transform the way educational records are generated, stored and accessed.

2 Architecture overview

Due to the nature of the platform, it has to operate on sensitive data, such as courses, assignments, solutions and grades. Permissionless blockchains, like Ethereum or EOS, would require disclosing this data to the public, whereas the permissioned ones, like Hyperledger, lack public verifiability.
Our architecture splits the blockchain into two layers: the private layer contains sensitive data, and the public one contains the information necessary to validate the integrity and authenticity of the private blocks. The key entities of the proposed blockchain architecture are presented in Figure 1.

Figure 1: Key entities of the Disciplina platform

The private layer is maintained by each Educator independently of the others. Educators can be either large educational institutes, capable of running their own nodes, or some trusted party that runs the chain for self-employed teachers and small institutions. This layer contains the personalized information on the interactions between the Students and the Educator. All the interactions, such as receiving an assignment, submitting solutions, or being graded, are treated as transactions in the private chain.

Students get access to the platform through web and mobile applications. Using the applications they choose Educators, enroll in courses, get assignments and submit solutions. The scores and the criteria of whether the Student has finished the course successfully are determined by the Educator. The education process from the platform’s perspective is as follows:

1. A Student chooses an Educator and a course that she wants to enroll in.
2. If the course is offered on a pre-paid basis, the Student uses her app to pay the fee.
3. During the course, the Educator provides assignments that the Student has to complete in order to get the score.
4. The Student acquires the assignment, completes it and sends the signed solution back to the Educator (communication between the Student and the Educator happens off-chain).
5. The Educator then stores the solution locally, grades it with a score in the range [0..100], and transfers the score with the hash of the solution to the blockchain.
6. Upon the completion of the course, the Student acquires a final score based on the scores she got for her assignments.
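The grading step above (a score in [0..100] tied to the hash of an off-chain solution) can be sketched as a minimal record. This is an illustration only; the names `GradeTx` and `grade_solution` are not part of the platform's actual API:

```python
import hashlib
from dataclasses import dataclass

@dataclass(frozen=True)
class GradeTx:
    """A private-chain transaction recording a grade (field names are illustrative)."""
    student_pk: str
    course_id: str
    solution_hash: str   # H(solution); the solution itself stays off-chain
    score: int           # integer score in the range [0..100]

    def __post_init__(self):
        if not 0 <= self.score <= 100:
            raise ValueError("score must lie in [0..100]")

def grade_solution(student_pk: str, course_id: str, solution: bytes, score: int) -> GradeTx:
    """The Educator keeps the solution locally and publishes only its hash and the score."""
    return GradeTx(student_pk, course_id, hashlib.sha256(solution).hexdigest(), score)
```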
This final score is also added to the Educator’s chain.

Making the Educators’ chains private opens the possibility for Educators to tamper with the data in their chains. To overcome this issue and make the private transactions publicly verifiable, we introduce the second, public, layer of the blockchain. The public part of the network consists of Witnesses — special entities that witness the fact that a private block was produced by an Educator. They do so by writing the authentication information of a private block into the public chain, which can later be used by an arbitrary Verifier to substantiate a proof of transaction inclusion given to it by a Student or an Educator. Witnesses also process public information issued by the Educators, such as announcements that an Educator has started or stopped offering a course in a particular discipline. The Witnesses agree on which public blocks are valid using the specified consensus rules.

The Recruiters are entities interested in gathering data about students from educational institutions. They buy this data from Educators using a secure data disclosure protocol, described in detail in Section 3.7. The validity and security of every data trade is also ensured by the Witnesses, because the corresponding transactions and actions of each party are also stored in the public blockchain.

3 Implementation choices

In this section we describe the proposed architecture in more detail. We present the internal structure of both the public and private chains and the reasoning behind these choices. In order to deduce the internal structure of our system, we first analyze its use cases. The overview of the education process is given in Section 2. The communication between the Student and the Educator is saved as transactions in the private chain. However, the implementation details of this chain mostly depend on the data disclosure process.
We will start by analyzing this process and determining the main issues that arise from the need to disclose and verify the validity of the private blocks. Then we will propose a structure of the private and public blocks that addresses these issues.

3.1 Anonymity and certification

The permissionless nature of our public chain leads to the ability for malevolent students to create educational institutes in order to get scores for courses they did not attend. Moreover, the knowledge students actually get by completing a course, and the conditions upon which the course is considered completed, vary significantly between educational institutions. These issues currently cannot be solved solely on the protocol level: they require an external source of information to determine the physical existence and the reputation of an Educator. Although we leave the public chain open for the Educators to submit their private block headers, we propose to add a separate layer of reputation and trust on top of the protocol. We do so by disallowing a new Educator to join the network without an approval from another Educator. Educators are supposed to rate other Educators based on off-chain sources of information – such as a publication on an official site of a university which claims that a given Disciplina public key is issued by this university. By approving each other, Educators form a web of trust. Ratings of Educators are backed up by the ratings of the Educators who trust them.

3.2 Activity Type Graph

When a Recruiter makes a request to one of the Educators, the Educator has to provide as minimal a set of entries as possible. This set has to be verifiable, which means that the Educator provides a proof of the data validity along with the data being disclosed. In order to achieve these goals, we divide the data that the Educators store into atomic Activity Types. Each Educator maintains a journal of transactions for each Activity Type that the Educator offers.
All the Activity Types are grouped into courses that are further grouped into larger entities such as subjects and areas of knowledge. This grouping can be stored as the Activity Type Graph $G_A$ with the following properties:

1° $G_A$ is a directed graph:
$$G_A : \langle V : \{\text{Vert}\},\ e_{out} : \text{Vert} \to \{\text{Vert}\} \mid \text{rest}\rangle \quad (1)$$

2° Each vertex of $G_A$ is associated with a depth:
$$G_A : \langle d : \text{Vert} \to \text{Int} \mid \text{rest}\rangle \quad (2)$$

3° Law of pointing down:
$$G_A : \langle v \in e_{out}(u) \implies d(v) > d(u)\rangle \quad (3)$$

4° $G_A$ has special *et cetera* vertices $u$:
$$\forall v \in V\ \exists u\ (u \in e_{out}(v) \land e_{out}(u) = \emptyset) \quad (4)$$

An example of the Activity Type Graph (ATG) is shown in Figure 2. A vertex $v$ of the graph is a *leaf* if $e_{out}(v) = \emptyset$. Otherwise we call it an *internal vertex*. Every internal vertex of the graph has a special *etc.* child (some of these are omitted in the figure).

Figure 2: An example of the Activity Type Graph. Some of the vertices are not shown.

The need for *etc.* vertices arises from the fact that not all Educators teach courses that correspond exactly to leaves — some of them offer general courses that provide just the necessary background. For example, some universities teach a basic “Computer science” course that contains the basics of the discipline. In this case, when the particular category is hard to define, the university would use the etcComputerScience vertex. On the protocol level, the Educators can announce that they teach a particular course, but cannot modify the Activity Type Graph structure. The structure of the graph is maintained by the core developers and updated upon request from the Educators. For every pair of vertices $(v, u)$, $weight(v, u)$ defines how the score of a course from the field of study $u$ affects the summary grade for the field of study $v$.
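The structural laws above lend themselves to a direct check. A minimal sketch follows, representing the ATG by its `e_out` adjacency map and `d` depth map (names assumed); the *et cetera* law (4) is approximated here by requiring each internal vertex to have at least one leaf child:

```python
def check_atg(e_out: dict, depth: dict) -> bool:
    """Validate an Activity Type Graph candidate against laws (3) and (4)."""
    # Law (3), "pointing down": every edge goes from a shallower to a deeper vertex.
    for u, children in e_out.items():
        for v in children:
            if depth[v] <= depth[u]:
                return False
    # Law (4), approximated: every internal vertex has a leaf ("etc.") child.
    for u, children in e_out.items():
        if children and not any(len(e_out.get(v, ())) == 0 for v in children):
            return False
    return True
```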
Let us define $weight(v, u) = (d(u) - d(v) + 1)^{-1}$ if $u$ is reachable from $v$, and $weight(v, u) = 0$ otherwise. The motivation for these weights is that a less specific subject implies wider knowledge. We can then define $avgGrade_{subjectId}$ as a weighted average of the scores with the weights described above.

### 3.3 Search queries

An Educator can answer one of the following queries:

- For a set of pairs $(subjectId_1, minGrade_1), (subjectId_2, minGrade_2), \ldots, (subjectId_n, minGrade_n)$ and some $Count$, find no more than $Count$ students with grades satisfying the following inequalities:
$$\begin{align*} avgGrade_{subjectId_1} &\geq minGrade_1 \\ avgGrade_{subjectId_2} &\geq minGrade_2 \\ \vdots \\ avgGrade_{subjectId_n} &\geq minGrade_n \end{align*}$$
- For the given identifier of a student, return all the info about this student.
- For a given assignment hash, return the document itself.

### 3.4 Private chain

Every Educator has a private chain. It stores the data about students and can generate answers for the queries described above. The private chain comprises two main data structures:

- A set of transactions batched into blocks. Every block contains a list of transactions packed into a Merkle tree.
- Links to the transactions, stored in a B+-tree with keys (studentId, studentGrade). The indexes are constructed in such a way that more popular activities go first.

The structure of the private block is shown in Figure 3. The block consists of a public header that the Educators relay to the Witnesses, and the private body that remains in the educational institute until it receives a data disclosure request.

Figure 3: Private block structure

During the educational process the Educators emit atomic private transactions. These transactions represent modifications to the journal of academic achievements (thus, making a transaction means appending data to the journal).
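The $weight$ function and the weighted average defined at the start of this section admit a direct sketch (names illustrative; reachability computed by depth-first search over the `e_out` map):

```python
def reachable(e_out: dict, v, u) -> bool:
    """Depth-first search: is u reachable from v in the ATG (v counts as reachable from itself)?"""
    stack, seen = [v], set()
    while stack:
        x = stack.pop()
        if x == u:
            return True
        if x not in seen:
            seen.add(x)
            stack.extend(e_out.get(x, ()))
    return False

def weight(e_out: dict, depth: dict, v, u) -> float:
    """weight(v, u) = (d(u) - d(v) + 1)^-1 if u is reachable from v, else 0."""
    return 1.0 / (depth[u] - depth[v] + 1) if reachable(e_out, v, u) else 0.0

def avg_grade(e_out: dict, depth: dict, v, grades: dict) -> float:
    """Weighted average of the per-subject grades, weighted as defined above."""
    ws = {u: weight(e_out, depth, v, u) for u in grades}
    total = sum(ws.values())
    return sum(ws[u] * grades[u] for u in grades) / total if total else 0.0
```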
The transactions can be of the following types:

- a student enrolls in a course;
- a student gets an assignment;
- a student submits an assignment;
- a student gets a grade for an assignment;
- a student gets a final grade for the course.

The first two types should be initiated by a student and should include the student’s signature to prevent spam from a partially honest Educator. The structure of the transaction is shown in Figure 4.

Let us denote the $i$-th transaction in a block as $T^i_{priv}$. The Educators group the transactions that occurred during the current block time slot, and construct a Merkle tree [8] for these journal modifications:
$$M_{priv} = \text{mtree}([T^i_{priv}]) \quad (5)$$

The Educator’s private block body comprises an ordered set of Merkle-authenticated transactions. These transactions are indexed so that the Educator can quickly find a particular transaction that satisfies some predicate. The private block header consists of the transactions’ Merkle root along with the previous block hash and the information on the Activity Type Graph modifications (the ATG delta). The ATG delta part allows the Educators to inform the Witnesses of modifications to the courses they teach.

An Educator collects private transactions into blocks with no more than $K_{max}$ transactions per block. After that, the Educator submits the signed block header to the Witnesses so that the private transactions can be confirmed by the public chain. Thus, the private blocks form a publicly verifiable chain of events.

To incentivize the Witnesses to include private block headers into the public chain, an Educator should pay some amount of coins for each private block. We should take into consideration that an Educator may be a local tutor as well as a big university. Depending on that, the number of transactions per block, as well as the paying capacity, may differ. So the cost of a digest publication should grow linearly with the size of the block.
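The Merkle tree $M_{priv} = \text{mtree}([T^i_{priv}])$ can be illustrated with a plain binary Merkle tree over the serialized transactions. The sketch below assumes SHA-256 and duplicates the last node on odd levels, one common convention that the paper does not fix:

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves) -> bytes:
    """Root of a binary Merkle tree over transaction byte strings.
    Convention (assumed): the last node is duplicated when a level has odd length."""
    level = [h(leaf) for leaf in leaves]
    if not level:
        return h(b"")
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]
```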
Let the cost of publishing a public block header be
$$C_{pub}(B) = \alpha_{pub} + \beta_{pub} \cdot N_{tr}(B) \quad (6)$$
where $N_{tr}(B)$ is the number of transactions in the private block $B$, and $\alpha_{pub}$ and $\beta_{pub}$ are parameters of the network – a small constant fee and a linear price coefficient, respectively.

To achieve the ability to prove the number of transactions in the Merkle tree, we store it together with a hash in each node (as shown in Figure 5). The proof will only be valid if the Educator fills in the sizes correctly, so there is no incentive for Educators to lie about the size of the tree.

We also consider a possibility for small educators to form pools and release blocks together in order to reduce the costs for each individual educator. See Appendix A.2 for details.

### 3.5 Public chain

The Witnesses maintain a public chain – a distributed ledger that contains publicly available information. If one wishes to perform a transaction on the public chain, she has to pay a certain fee that serves two purposes. First of all, the fee incentivizes the Witnesses to participate in the network and issue new blocks. Second, by requiring a fee for each transaction, we protect the public ledger from being spammed. We present the structure of the public blocks in Figure 6. The public ledger contains the following information:

1. The modification history of the Activity Type Graph.
2. Private block headers.
3. Account balances and the value transfer history.

Figure 6: Public block structure

There are two major ways to store the account balances and other state information: UTXO-based and account-based architectures. A UTXO is an unspent transaction output that contains some predicate – a condition that has to be fulfilled in order to spend the coins. To prove ownership of the money, the spender provides a witness – an input that makes the predicate true.
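The sized Merkle tree (a size stored with each hash, as in Figure 5) and the publication cost can be sketched as follows. The node-hashing convention here (hash of the children's hashes plus their combined size; an odd node carried up unchanged so sizes stay exact) is an assumption, since the paper does not specify one:

```python
import hashlib

def h(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def sized_merkle_root(leaves):
    """Each node carries (hash, size), so the transaction count is provable from the root."""
    level = [(h(x), 1) for x in leaves]
    while len(level) > 1:
        nxt = []
        for i in range(0, len(level) - 1, 2):
            (h1, s1), (h2, s2) = level[i], level[i + 1]
            nxt.append((h(h1 + h2 + (s1 + s2).to_bytes(8, "big")), s1 + s2))
        if len(level) % 2:
            nxt.append(level[-1])  # odd node is carried up unchanged
        level = nxt
    return level[0]

def publication_cost(alpha_pub, beta_pub, n_tr):
    """C_pub(B) = alpha_pub + beta_pub * N_tr(B)."""
    return alpha_pub + beta_pub * n_tr
```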
Thus, the UTXO-based architecture requires the transactions to be stateless, effectively limiting the application domain [1]. The unspent outputs with an associated state can be treated as smart contracts in account-based architectures like Ethereum [13]. The state is stored in an off-chain storage – the state database. The transactions are treated as modifications of the world state. Disciplina uses an account-based ledger with contracts programmable in the Plutus language [7]. Each account has an associated state, which comprises the account balance and other information (e.g., the log \( L \) of a data disclosure contract). The world state is a mapping between accounts and their states. In order to make this mapping easily verifiable, we use a structure called the authenticated AVL+ tree, introduced in [10]. Recent achievements in the field of consensus protocols, like the provably secure Ouroboros [5], allow us to build a public chain based on Proof of Stake consensus rules. Thus, we can increase the transaction speed and eliminate the need for expensive mining.

### 3.6 Fair CV

One of the main goals of the Disciplina platform is to provide a way for the Students to easily prove their educational records. We propose to duplicate the records in the Student’s digital CV. This CV contains all the records that the parties have generated during the Student’s educational process, along with the validity proofs of that data (see Figure 7). In order to prove that some transaction actually occurred in some private block of a concrete Educator, the Student has to provide cryptographic proofs along with the actual data. The cryptographic proof of the inclusion of an element in an authenticated data structure is generally a path of hashes. Let us denote the path of the element \( e \) in some authenticated data structure \( X \) as \( \text{path}(e, X) \).
Thus, the Student has to provide the following data:

- The Student’s and the Educator’s public keys \( pk_S \) and \( pk_E \).
- The course \( a_i \) and the private transaction \( T_{priv} \) with the score.
- The Merkle path of the transaction in the journal: \( P_{priv} = \text{path}(T_{priv}, M_{priv}) \), where \( M_{priv} \) is the Merkle tree of the transactions in the private block.
- The public block number \( H \) and the Merkle path of the transaction \( T_{pub} \) that pushed the private block into the public chain: \( P_{pub} = \text{path}(T_{pub}, M_{pub}) \), where \( M_{pub} \) is the Merkle tree of the transactions in the block \( H \).

Having this data, one can prove the occurrence of a certain transaction in one of the Educator’s private blocks without the need to request any data from the Educator during the validation process. Thus, any party can check the validity of the Student’s CV for free if the Student wishes to disclose it. Let \( \rho(e, P) \) be the function that substitutes the element \( e \) into the path \( P \) and computes the root hash of the authenticated data structure. The validation process is then as follows:

1. Query the public chain to find the block \( H \) and obtain the Merkle root of its transactions: \( \text{root}(M_{pub}) \).
2. Check whether \( \rho(T_{pub}, P_{pub}) = \text{root}(M_{pub}) \).
3. Check that the public transaction \( T_{pub} \) was signed with the Educator’s public key \( pk_E \).
4. From the public transaction \( T_{pub} \) obtain the Merkle root of the private transactions: \( \text{root}(M_{priv}) \).
5. Check that \( \rho(T_{priv}, P_{priv}) = \text{root}(M_{priv}) \).

These validation steps prove that an Educator with a public key \( pk_E \) issued a transaction \( T_{priv} \) in one of its private blocks. One can attribute \( pk_E \) to a particular real-world educational institution by checking the Educator’s certificate as described in Section 3.1.
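The function \( \rho(e, P) \) and the inclusion checks of steps 2 and 5 can be sketched as follows, representing a path as a list of (sibling hash, sibling-is-on-the-left) pairs from leaf to root (an assumed encoding; SHA-256 assumed as the hash):

```python
import hashlib

def h(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def rho(element: bytes, path) -> bytes:
    """Substitute `element` into the path and fold up to the root hash.
    `path` is a list of (sibling_hash, sibling_is_left) pairs, leaf to root."""
    acc = h(element)
    for sibling, sibling_is_left in path:
        acc = h(sibling + acc) if sibling_is_left else h(acc + sibling)
    return acc

def verify_inclusion(tx: bytes, path, expected_root: bytes) -> bool:
    """Steps 2 and 5 of the validation: rho(T, P) must equal the known root."""
    return rho(tx, path) == expected_root
```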
### 3.7 Data disclosure

The Disciplina architecture supports two types of data disclosure requests:

1. A request for a set of authenticated private transactions satisfying some predicate (see details in Section 3.3).
2. A request for object disclosure.

Here we describe a protocol of fair data trade between the Educator as a seller and some interested party as a buyer. Despite a few variations, the protocol is almost the same for all types of data disclosure requests. We first lay out the private transactions disclosure protocol. Then we describe the modifications to the protocol so that one can apply it to other types of data.

The process of data disclosure involves direct communication between a particular Educator $E$, willing to disclose a part of the data, and an interested party $B$ (e.g., a recruiter), willing to pay for this data. Suppose $E$ has some data $D$. In the case of private transactions, $D$ is a set of authenticated transactions, i.e., tuples $(T_{priv}, P_{priv}, H, P_{pub})$. As shown in Section 3.6, this data along with the Educator’s public key is enough to prove that a certain transaction $T_{priv}$ actually occurred in some private block of the given Educator. The protocol fairness is guaranteed by a contract on the public chain. The contract is able to hold money and is stateful: it is capable of storing a log $L$ with data. All the data that the parties send to the contract is appended to $L$.

1. The buyer $B$ sends a signed search query $\text{Sign}_B(Q)$ directly to the seller $E$.
2. Let $D$ be the set of authenticated transactions relevant for the query $Q$. $E$ divides $D$ into $N$ chunks. When disclosing private transactions, one chunk $d_i$ is a transaction with proofs that it was included in a certain private block:
\[ d_i : (T_{priv}^i, P_{priv}^i, H^i, P_{pub}^i) \] (7)
3. $E$ generates a symmetric key $k$ and encrypts each $d_i$ with $k$. Then she makes an array of encrypted chunks:
\[ D_{\bullet} = \{E_k(d_1), E_k(d_2), \ldots, E_k(d_N)\} \] (8)
4.
$E$ computes the size of the encrypted answer $s = \text{sizeof}(D_{\bullet})$, the cost of this data $C_D \sim s$, and the Merkle root of the data $R = \text{root}(\text{mtree}(D_{\bullet}))$.
5. $E$ sends $\text{Sign}_E(C_D, s, R, H(Q))$ directly to the buyer.
6. If the buyer agrees to pay the price, she generates a new keypair $(pk_B, sk_B)$. Then she initializes the contract with the data provided by the seller, the search query $Q$, her own temporary trade public key $pk_B$ and $C_D$ amount of money.
7. If $E$ agrees to proceed, she sends a predefined amount of money $C_E$ to the contract address. $C_E$ is a security deposit: if $E$ tries to cheat, she will lose this money.
8. $E$ transfers the encrypted data chunks $D_{\bullet}$ directly to the buyer. $B$ computes the Merkle root $R'$ and the size $s'$ of the received data $D_{\bullet}'$:
\[ R' = \text{root}(\text{mtree}(D_{\bullet}')) \] (9)
\[ s' = \text{sizeof}(D_{\bullet}') \] (10)
9. $B$ makes a transaction with a receipt $\text{Sign}_B(\{R', s'\})$ to the contract address. The parties can proceed if and only if the following is true:
\[ (R' = R) \land (s' = s) \] (11)
Otherwise, the protocol halts.
10. $E$ sends $\text{Sign}_E(E_B(k))$ to the contract.
11. $B$ deciphers and checks the received data. In case all the data is correct, the buyer sends a signed accept to the contract. In case some data chunk $e_i \in D_{\bullet}$ is invalid, $B$ sends
$$\text{Sig}_B(\{ sk_B, e_i, \text{path}(e_i, \text{mtree}(D_{\bullet})) \})$$
to the contract. By doing so, $B$ reveals the data chunk $d_i$ corresponding to the encrypted chunk $e_i$. She also shares a proof that $e_i$ was indeed part of the Merkle tree with the root $R$. The contract checks the validity of $d_i$ and decides whether $B$ has rightfully accused $E$ of cheating. In case the chunks $d_i$ and $d_j$ have duplicate entries, $B$ sends
$$\text{Sig}_B(\{ sk_B, e_i, \text{path}(e_i, \text{mtree}(D_{\bullet})), e_j, \text{path}(e_j, \text{mtree}(D_{\bullet})) \})$$
to the contract.
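Steps 3–5 and the receipt check of step 9 can be sketched as follows. The symmetric cipher here is a toy XOR stream for illustration only (a real $E_k$ would be an authenticated cipher such as AES-GCM), and a flat hash over the chunk hashes stands in for $\text{root}(\text{mtree}(D_{\bullet}))$:

```python
import hashlib

def h(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def xor_encrypt(key: bytes, chunk: bytes) -> bytes:
    """Toy stream cipher (illustration only): keystream = SHA-256(key || counter),
    XORed with the data. XOR makes the function its own inverse."""
    out = bytearray()
    for i in range(0, len(chunk), 32):
        ks = h(key + i.to_bytes(8, "big"))
        out.extend(a ^ b for a, b in zip(chunk[i:i + 32], ks))
    return bytes(out)

def prepare_trade(key: bytes, chunks):
    """Steps 3-5: encrypt each chunk, compute the size s and the root R of the array."""
    enc = [xor_encrypt(key, d) for d in chunks]
    s = sum(len(e) for e in enc)
    root = h(b"".join(h(e) for e in enc))  # stand-in for root(mtree(D.))
    return enc, s, root

def receipt_matches(enc_received, s, root) -> bool:
    """Step 9: the parties proceed iff R' = R and s' = s."""
    s2 = sum(len(e) for e in enc_received)
    root2 = h(b"".join(h(e) for e in enc_received))
    return s2 == s and root2 == root
```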
The contract checks whether $d_i$ and $d_j$ do indeed have duplicate entries and blames $E$ for cheating if it is true. The contract considers the data chunk $d_i$ valid if and only if:

1. The transaction in $d_i$ is unique.
2. The transaction in $d_i$ has valid proofs of existence (as described in Section 3.6).
3. The transaction in $d_i$ makes the predicate $Q$ true.

The on-chain communications of the parties (steps 7, 9, 10, 11) are bounded by a time frame $\tau$. In order for a transaction to be valid, the time $\Delta t$ passed since the previous on-chain step has to be less than or equal to $\tau$. In case $\Delta t > \tau$, the communication between the parties is considered over, and one of the protocol exit points is automatically triggered. The protocol exit points are described in detail in Table 1.

<table>
<thead>
<tr><th>Condition</th><th>Step</th><th>Consequence</th></tr>
</thead>
<tbody>
<tr><td>$\Delta t &gt; \tau$</td><td>7</td><td>$B$ and $E$ get their money back because $E$ wasn’t able to correctly transfer the data to $B$.</td></tr>
<tr><td>$\Delta t &gt; \tau$; $R' \neq R$; $s' \neq s$</td><td>9</td><td>$B$ and $E$ get their money back because $E$ wasn’t able to correctly transfer the data to $B$.</td></tr>
<tr><td>$\Delta t &gt; \tau$</td><td>10</td><td>$B$ and $E$ get their money back because $B$ has received the encrypted data, but $E$ has not been able to share the key $k$ for it.</td></tr>
<tr><td>$\Delta t &gt; \tau$; accept from $B$</td><td>11</td><td>$E$ gets $C_E$ and $C_D$: $E$ correctly shared the data with $B$.</td></tr>
<tr><td>reject from $B$</td><td>11</td><td>The dispute situation. In case $B$ proves that $E$ cheated, $E$ loses her security deposit $C_E$. Otherwise, $E$ receives both $C_E$ and $C_D$.</td></tr>
</tbody>
</table>

Table 1: Protocol exit points.

The proposed algorithm (though with some modifications) can be applied to object disclosure requests.
Here we define these modifications:

- $Q : \text{root}(\text{mtree}(\text{Object}))$ – a query by the object hash.
- $d_i : (\text{chunk}, \text{path}(\text{chunk}, \text{mtree}(\text{Object})))$ – the data being revealed is an object: an uncategorized blob of data relevant to a particular transaction. The object is split into chunks of size no more than 1 KiB and transferred along with the proofs.
- Validation: check that a chunk is indeed a part of the object with the root $Q$.

4 Future work

The current architecture of the Disciplina platform heavily relies on the fact that a new Educator has to gain acceptance from another Educator to join the network, and the ratings of a new Educator are determined by the Educators accepting it. However, it is possible that other Educators would provide unfair ratings: for example, they could ignore the existence of private teachers, thus making their contributions less valuable, or purposefully lower the ratings of competitors entering the network. Such problems can be avoided if we carefully integrate an algorithm of rating computation into our architecture. The ratings would then be based on on-chain sources of information and provide equal opportunities for both private teachers and large educational institutions. However, integrating the rating system into the architecture poses several design challenges that we have yet to solve.

5 Conclusion

In this paper we presented the architecture of the Disciplina platform. The described architecture provides a way to store educational records in a blockchain while preserving the privacy of these records. The concepts of private chains and a digital CV make it possible to verify the educational records of a particular person. Educational institutions are connected in a web of trust that provides credibility for each institution and, consequently, for the digital CVs of their alumni. We developed our platform not only as a source of trust, but also as a database of students from all over the world.
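The object disclosure modifications (splitting a blob into chunks of at most 1 KiB and deriving the query $Q$ from the object's chunks) can be sketched as follows; the flat hash standing in for $\text{root}(\text{mtree}(\text{Object}))$ is an assumption:

```python
import hashlib

CHUNK_SIZE = 1024  # the object is split into chunks of at most 1 KiB

def h(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def split_object(blob: bytes):
    """Split an uncategorized blob into chunks of at most CHUNK_SIZE bytes."""
    return [blob[i:i + CHUNK_SIZE] for i in range(0, len(blob), CHUNK_SIZE)]

def object_query(blob: bytes) -> bytes:
    """Q derived from the object's chunks; a flat hash over chunk hashes stands in
    for root(mtree(Object))."""
    return h(b"".join(h(c) for c in split_object(blob)))
```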
We believe that the data stored in the system has a value in itself. The need to disclose this data was also addressed in the paper: we described a mechanism for fair data trade and the measures against the creation of a secondary market.

References

A Appendix

A.1 Notations

<table>
<thead>
<tr><th>Notation</th><th>Description</th></tr>
</thead>
<tbody>
<tr><td>\( A \)</td><td>A party that takes part in the protocol</td></tr>
<tr><td>\( H(m) \)</td><td>Result of applying a collision-resistant hash function \( H \) to a message \( m \)</td></tr>
<tr><td>\( \text{mtree}(a) \)</td><td>Merkle tree of the data array \( a \)</td></tr>
<tr><td>\( \text{root}(M) \)</td><td>Root element of the Merkle tree \( M \)</td></tr>
<tr><td>\( \text{path}(e, M) \)</td><td>Path of the element \( e \) in the Merkle tree \( M \)</td></tr>
<tr><td>\( k \)</td><td>Symmetric key</td></tr>
<tr><td>\( pk_A \), \( sk_A \)</td><td>Public and secret keys of \( A \)</td></tr>
<tr><td>\( E_k(m) \)</td><td>Symmetric encryption with the key \( k \)</td></tr>
<tr><td>\( E_A(m) \)</td><td>Asymmetric encryption with the key \( pk_A \)</td></tr>
<tr><td>\( \text{Sig}_A(m) \)</td><td>Tuple \( (A, m, \text{sig}(sk_A, H(m))) \), where \( \text{sig} \) is a digital signature algorithm\(^1\)</td></tr>
<tr><td>\( \text{sizeof}(m) \)</td><td>Size of \( m \) in bytes</td></tr>
<tr><td>\( \oplus \)</td><td>Binary string concatenation</td></tr>
</tbody>
</table>

A.2 Partially centralized educators

In Section 3.4 we concluded that the cost of a private block proof publication should depend on the size of the corresponding Merkle tree. This is done in order to scale the spending of different Educators with the amount of data they produce and store in their private blockchains. But this solution has a disadvantage: the Witnesses are more incentivized to include proofs from large Educators in public blocks than those from small Educators, since the proofs from large Educators carry higher fees.
If the block size is limited, this may lead to delays in the inclusion of small Educators’ proofs in the public blockchain. In order to resolve this problem, small Educators can use trusted third-party services (e.g., teachmeplease.com) for interacting with the Disciplina platform instead of running Educator nodes by themselves. But this means that the third-party service has access to all of the Educator’s data, including the marks and assignments of her students, and also receives all the revenue from trading this data. Some small Educators might find this option unacceptable. Therefore, we propose a mechanism of Educator pools, which allows small Educators to delegate block proof publishing to a third party in a partially trustless way. The idea is the following:

- Every small Educator still maintains her own small private chain.
- When a small Educator forms a block in her private chain, she sends the block header to a third party called the pool manager instead of publishing it directly to the Witnesses. Another difference is that the Educator should also send a separate signature for her ATG delta (if it is not empty).
- The pool manager collects block headers from Educators until the total number of transactions in all the Educators’ blocks exceeds some threshold \( K_{min} \).
- Then the pool manager builds a sized Merkle tree over the list of received Educators’ block headers, forming a second-level block (Fig. 8). The header of the second-level block gets published on the public blockchain. Instead of containing a single ATG delta, the header of this second-level block contains a list of separate signed ATG deltas of the small Educators.

We assume that this approach does not create a problem of oversized block headers, because Educators do not typically create and close courses very often, and the average number of ATG deltas in every single block header will stay small.
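The two-level proof construction can be sketched with an explicit Merkle path helper: a transaction's path inside the Educator's first-level tree, concatenated with the Educator's header path in the pool's second-level tree, folds to the published pool root. The conventions here (SHA-256, odd nodes duplicated, second-level leaves taken as raw header digests) are assumptions:

```python
import hashlib

def h(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def levels_of(digests):
    """All levels of a Merkle tree built directly over digests, leaves first."""
    levels = [list(digests)]
    while len(levels[-1]) > 1:
        lvl = list(levels[-1])
        if len(lvl) % 2:
            lvl.append(lvl[-1])
        levels.append([h(lvl[i] + lvl[i + 1]) for i in range(0, len(lvl), 2)])
    return levels

def path_for(digests, index):
    """Path of (sibling, sibling_is_left) pairs from leaf `index` to the root."""
    path = []
    for lvl in levels_of(digests)[:-1]:
        lvl = list(lvl)
        if len(lvl) % 2:
            lvl.append(lvl[-1])
        sib = index ^ 1
        path.append((lvl[sib], sib < index))
        index //= 2
    return path

def fold(digest, path):
    """Fold a starting digest up a (possibly concatenated) path to a root."""
    for sibling, sibling_is_left in path:
        digest = h(sibling + digest) if sibling_is_left else h(digest + sibling)
    return digest
```

Because `fold` starts from a digest, the first-level root can itself serve as a second-level leaf, so the two paths concatenate directly into one publicly verifiable proof.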
\(^1\) The particular keys \( pk_A \) and \( sk_A \) belonging to the party \( A \) are generally deducible from the context.

After constructing a second-level block, the pool manager sends each of the small Educators a path to their block headers in the second-level sized Merkle tree. Having this path, each Educator can construct a publicly verifiable proof for any transaction in her private block by simply concatenating this path with the path to the transaction in the first-level Merkle tree. For every processed block header, a small educator pays the pool manager a fee calculated by this formula:

\[ C_{pool}(B) = \alpha_{pool} + \beta_{pub} \cdot N_{tr}(B) \] (12)

where \( \beta_{pub} \) is the network price coefficient from Eq. (6), and \( \alpha_{pool} \) is a constant fee set by the pool manager. If a pool manager sets \( \alpha_{pool} \) such that \( \alpha_{pool} < \alpha_{pub} \), but \( \alpha_{pool} \cdot N > \alpha_{pub} \), then for every published second-level block the pool manager gains \( \alpha_{pool} \cdot N - \alpha_{pub} \) coins, while every Educator in the pool pays less for block header publishing than if she published directly to the Witnesses. Therefore, every participant has an incentive to remain in the pool. Note that a small Educator participating in the pool should slightly change the structure of her own chain. Every block header should contain the hash of the previous one (as described in Section 3.4), but blocks of an Educator participating in the pool are published only as part of a second-level pooled block. So the Educator's block header should contain the header hash of the second-level block that contains the previous first-level block in her chain, instead of the header hash of the previous first-level block itself. We do not yet provide a fully trustless solution for pooling.
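As an illustration, the pooling economics can be sketched in Python. The coefficient values below are illustrative assumptions, not parameters fixed by the paper; only the formulas follow Eq. (12) and the incentive condition above.

```python
def educator_block_fee(n_tr, alpha_pool, beta_pub):
    # Eq. (12): fee a small Educator pays the pool manager for one block header
    return alpha_pool + beta_pub * n_tr

def direct_publication_fee(n_tr, alpha_pub, beta_pub):
    # Fee for publishing the same header directly to the Witnesses
    return alpha_pub + beta_pub * n_tr

def pool_manager_gain(n_headers, alpha_pool, alpha_pub):
    # Per second-level block: collected constant fees minus the single
    # publication fee; the beta_pub-proportional parts cancel out.
    return n_headers * alpha_pool - alpha_pub

# Illustrative values (assumptions): alpha_pool < alpha_pub, N * alpha_pool > alpha_pub
alpha_pub, alpha_pool, beta_pub, n_headers = 10.0, 2.0, 0.1, 8

# Each pooled Educator pays less than publishing directly,
# yet the pool manager still profits on every second-level block:
assert educator_block_fee(50, alpha_pool, beta_pub) < direct_publication_fee(50, alpha_pub, beta_pub)
assert pool_manager_gain(n_headers, alpha_pool, alpha_pub) > 0
```

With these numbers, both conditions hold simultaneously, which is exactly the incentive-compatibility argument made in the text.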
Small Educators should trust a pool manager to provide correct second-level proofs of their blocks, and in theory an adversarial pool manager may cheat on the Educators by not including their block headers in a published second-level block after she has received the money. However, we assume that deceiving the small educators and losing them as long-term clients is far less profitable than being honest. Nevertheless, we are currently working on a fully trustless pooling protocol based on a smart contract which ensures that a pool manager cannot deceive any Educator.

A.3 Smart-contract implementation

A.3.1 Accounts

There are two kinds of accounts:
- Personal: created for each client, directly belongs to that client;
- Robot: created by a client, does not belong directly to that client (or to anyone else); represents a smart contract.

A Robot account should contain, aside from the token mapping:
- Data storage for the smart contract state;
- The code to control the account, compiled to the Plutus Core language.

The Plutus Core language allows declaring a module with exported (public) methods. These methods will be the public API for the account. One can evaluate the Plutus Core code itself (not just call public methods) if the account directly belongs to her. Personal accounts do not have any persistent associated code to control them.

A.3.2 Scripting

The Plutus language will be used to program Robot nodes. Any interaction with an account is done via Plutus code. The evaluation cost is collected and summed into the Fee. A "sandbox run" can be performed to check costs and programming errors. We will call natively implemented functions that can be invoked from Plutus Core "NIFs". Each atomic step of execution has an invocation cost.

A.3.3 NIFs

A NIF is an action to be invoked on an Account, and NIFs are the only source of changes to an Account. Any operation on an Account should be represented either as a single NIF call or as a sequence of NIF calls.
This will allow us to reason about the security and validity of operations with more confidence, and to limit the access and scope of each operation.

A.3.4 Transaction

We will cover "simple money transfer" and "data disclosure" transactions in this section. "Simple money transfer" transactions will be implemented as transactions with an empty "function call" field. A transfer transaction must contain:
- Sender signature;
- Receiver signature;
- Amount and kind of funds transferred (must be non-zero overall);
- A "nonce" to prevent double-invocation;
- (Optional) a textual message for the receiver;
- (Optional) a function call to be invoked upon transaction approval;
- A digest of all other fields, encrypted with the Sender's private key.

Transactions are the only way for accounts to interact. The function call (if present) contains the name of an exported function and the arguments to be supplied upon invocation. The function would be invoked like this:

function-name(Sender-signature, amount, value1, value2, ..., valueN)

On successful code invocation, the money is transmitted to the Target account and the costs are charged to the Sender. If the code fails, the transaction is not published as successful and is rejected. If there is not enough money supplied for the operation, or the code raises an error, the whole transaction fails. If there is no function call in a transaction, the code invocation is assumed successful. The Gas fee for transaction approval is calculated as the sum of the costs of the atomic actions performed.

A.3.5 Implementation of the data disclosure protocol using a Robot account

We assume that we have two sides:
- Buyer;
- Seller.

"Gas" below is an estimate of the operation cost; the name and the idea are taken from Ethereum. "Fee" is a forfeit charged to either side that tries to deceive the other. "Sum" is the price of the data to be sold. The Robot works as follows.
The Buyer invokes a transaction which runs code directly on his account and constructs a Robot with the following exported methods:
- `start-trade();`
- `accept-encrypted(encrypted-mroot, size);`
- `send-encrypted-secret(encrypted-secret);`
- `handshake();`
- `reject(sk, block, proof-enc);`
- `cancel();`

carrying `Sum + Fee + Gas` in currency and some `predicate` to check the data, if applicable. This Robot is initialized with the following data:
- Cost in tokens;
- Size of the encrypted data;
- Merkle tree root of the encrypted data;
- Timeout, after which the account is destroyed;
- Sink, the person to whom the remaining money will be sent on timeout;
- Destructor, a procedure to invoke on timeout.

Here is the state machine of that Robot account:

(0) Account has just been created.
    `start-trade` from Seller, WHEN transferred amount = Fee: notify Buyer AND GOTO 1.
(1) The trade is started.
    `accept-encrypted(encrypted-mroot, size)` from Buyer: IF the encrypted Merkle root and size are the same as initially set in the Robot state, GOTO 2, ELSE GOTO 8.
    Timeout: GOTO 8.
(2) Data was transferred. The Seller encrypts a session key with the Buyer's public key.
    `send-encrypted-secret(encrypted-secret)` from Seller: notify Buyer AND GOTO 3.
(3) Off the band, the symmetric key is sent.
    `reject(sk, block, encrypted-proof)` from Buyer: GOTO 4.
    `handshake` from Buyer: GOTO 5.
    Timeout: GOTO 6.
(4) Arbitration. The Robot performs the check: if it finds that the Buyer is right (the block invalidates the proofs OR the predicate fails), GOTO 7; else the Seller is right, GOTO 6.
(5) Cleanup I. (Sum + Fee) is sent to Seller. GOTO 7.
(6) Cleanup II. All remaining money is sent to Seller. The account is closed.
(7) Cleanup III. All remaining money is sent to Buyer. The account is closed.
(8) Cleanup IV. Fee is sent to Seller. GOTO 7.

A.3.6 Example

Let us assume there are:
- Seller, who has declared that he has his students' Linear Algebra marks for November 2018, worth 500 tokens (signed in some private Merkle tree);
- Buyer, who has 600 tokens available.
- Robot, which is a smart account created by the Buyer.

We will consider the following cases:
- Seller sends nothing at all;
- Seller tries to send the Buyer encrypted garbage instead of data;
- Buyer tries to blame the Seller for giving her invalid data, with the data being completely valid (in terms of proofs and predicate).

All trades have the same initial part, so we will branch when necessary. The trades go as follows:
- The Buyer formulates a predicate to check that the data corresponds to its description, OR uses universal truth as the predicate.
- The Buyer requests the data from the Seller.
- The Seller notifies the Buyer of the data price (500 tokens).
- The Buyer creates a Robot using the scheme above, with 500 tokens and the predicate.
- The Seller accepts the trade and sends `start-trade()` to the Robot along with a 20-token Fee.

1. Seller sends badly sized data, or the Merkle root of the encrypted data does not match:
   - The size or the Merkle root of the transferred data is invalid.
   - The trade is stalled until time is out.
   - On timeout the trade is reverted and the money is returned to the Buyer.
2. Seller tries to send garbage, or one of the blocks decrypts to garbage:
   - The Buyer finds that at least one block is invalid in either form.
   - She invokes `reject(sk, block, encrypted-proof)` to start arbitration.
   - The Robot checks that the block falsifies the proofs and that the Buyer was right.
   - The Robot returns all remaining funds to the Buyer.
3. Buyer tries to blame the Seller despite valid data:
   - The Buyer selects a block to call "invalid".
   - She then invokes `reject(unencrypted-block, proof)` to start arbitration.
   - The Robot performs `check-block-and-proof(block, proof)` and finds that the Buyer was not right.
   - It then sends all 520 tokens to the Seller.
4. Buyer receives the data and the key, but remains silent:
   - When the timeout expires, the 520 tokens are sent to the Seller.
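The Robot's life cycle can be sketched as a small transition function. This is an illustrative Python sketch, not Plutus code: the states and events mirror the numbered states of the state machine above, while the `ok` flag stands in for the checks the Robot actually performs (Merkle root/size comparison, proof and predicate validation).

```python
from enum import Enum

class State(Enum):
    CREATED = 0          # (0) account just created
    TRADE_STARTED = 1    # (1) Seller deposited the Fee
    DATA_SENT = 2        # (2) encrypted data transferred
    KEY_SENT = 3         # (3) encrypted session key delivered
    ARBITRATION = 4      # (4) Robot checks the Buyer's rejection proof
    PAY_SELLER = 5       # (5) Sum + Fee sent to Seller
    CLOSE_TO_SELLER = 6  # (6) remaining funds to Seller, account closed
    CLOSE_TO_BUYER = 7   # (7) remaining funds to Buyer, account closed
    REFUND_FEE = 8       # (8) Fee back to Seller, rest returned

def step(state, event, ok=True):
    """One transition of the Robot; `ok` abstracts the on-chain checks."""
    if state is State.CREATED and event == "start-trade":
        return State.TRADE_STARTED
    if state is State.TRADE_STARTED and event == "accept-encrypted":
        return State.DATA_SENT if ok else State.REFUND_FEE
    if state is State.TRADE_STARTED and event == "timeout":
        return State.REFUND_FEE
    if state is State.DATA_SENT and event == "send-encrypted-secret":
        return State.KEY_SENT
    if state is State.KEY_SENT and event == "handshake":
        return State.PAY_SELLER
    if state is State.KEY_SENT and event == "reject":
        return State.ARBITRATION
    if state is State.KEY_SENT and event == "timeout":
        return State.CLOSE_TO_SELLER
    if state is State.ARBITRATION and event == "checked":
        # ok == True means the Buyer's complaint was justified
        return State.CLOSE_TO_BUYER if ok else State.CLOSE_TO_SELLER
    raise ValueError(f"no transition for {event!r} in {state}")

# Happy path: data and key delivered, Buyer handshakes, Seller gets paid.
s = State.CREATED
for ev in ["start-trade", "accept-encrypted", "send-encrypted-secret", "handshake"]:
    s = step(s, ev)
assert s is State.PAY_SELLER
```

The three example cases above map directly onto this function: case 1 is a `timeout` in `TRADE_STARTED`, case 2 is `reject` followed by `checked` with `ok=True`, and case 3 is `reject` followed by `checked` with `ok=False`.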
The Community Solid Server: Supporting Research & Development in an Evolving Ecosystem Joachim Van Herwegen* and Ruben Verborgh Ghent University – imec – IDLab, Department of Electronics and Information Systems E-mails: joachim.vanherwegen@ugent.be, ruben.verborgh@ugent.be Abstract. The Solid project aims to empower people with control over their own data through the separation of data, identity, and applications. The goal is an environment with clear interoperability between all servers and clients that adhere to the specification. Solid is a standards-driven way to extend the Linked Data vision from public to private data, and everything in between. Multiple implementations of the Solid Protocol exist, but due to the evolving nature of the ecosystem, there is a strong need for an implementation that enables qualitative and quantitative research into new features and allows developers to quickly set up varying development environments. To meet these demands, we created the Community Solid Server, a modular server that can be configured to suit the needs of researchers and developers. In this article, we provide an overview of the server architecture and how it is positioned within the Solid ecosystem. The server supports many orthogonal feature combinations on axes such as authorization, authentication, and data storage. The Community Solid Server comes with several predefined configurations that allow researchers and developers to quickly set up servers with different content and backends, and can easily be modified to change many of its features. The server will help evolve the specification, and support further research into Solid and its possibilities. Keywords: Solid, RDF, Linked Data, Semantic Web 1. Introduction Data plays an important role in multiple aspects of our lives: from the data our government manages about us, to the items we store and share on the internet, to everything about our online behaviour that is being tracked.
Companies use that data to predict our future behaviour and gain a competitive edge in their respective industries. This poses a considerable challenge for newcomers trying to establish themselves in a particular sector, as established companies have already amassed extensive data, creating a substantial barrier to outperforming them. As companies wield considerable power over vast troves of data, individuals find themselves with minimal influence over the fate of their personal information. Fortunately, recent legislative changes signal a positive shift, indicating that companies will eventually need to acknowledge their inability to exert complete control over all data. Once people regain control over their own data, they can decide with whom they wish to share it, thereby motivating companies to offer services of a high enough quality to earn that privilege. This dynamic would foster a mutually beneficial relationship, as end-users would experience tangible benefits from data sharing, ultimately resulting in a more valuable and sustainable data exchange. Historically, Semantic Web research has focused on the exchange of Linked Open Data, with little consideration for data where access might be restricted by policies, as is the case with personal data. These restrictions drastically change the infrastructure and processes, such as publication and query processing. Nonetheless, Semantic Web technologies can play a crucial role in placing data closer to people, because they can give universal and connected semantics to data that is managed in a highly decentralized way. Solid [1] is an ecosystem that rises to the challenge of tackling the private–public data spectrum.

*Corresponding author. E-mail: joachim.vanherwegen@ugent.be.
1570-0844/$35.00 © 2024 – IOS Press. All rights reserved.
It does so through data decentralisation: the main idea is that everyone using Solid has one or more data pods containing their personal data. By setting relevant access policies, users can specify who can read or write parts of their data. Solid clients that know how to interact with such a server can then be used to access that data. In an open ecosystem such as Solid, any party can implement a server, as long as they abide by the Solid Protocol specification [2]. Similarly, anyone can also make a client to communicate with such a server. As a consequence—and this is a core part of Solid—any client can interact with the data created by any other client, on any Solid server. Since the data is stored in a user's data pod and not in a specific client, clients should be seen as views and controls over that data. For example, one application could be used to set a person's date of birth, which could then be used in a completely different one as a birthday reminder. There are many invested parties in the Solid ecosystem: companies addressing real-world use cases, researchers wanting to evolve the specifications to suit the necessary demands, developers wanting to create clients and servers to extend the reach of Solid, and new users who just want to try it out. In this paper we introduce the Community Solid Server (CSS), open-source software [3] that serves as a tool to support research and development of current and future Solid specifications. In Section 2 we discuss related work. Section 3 covers specific use cases we want to solve with the server, which are generalized into requirements in Section 4. Section 5 and Section 6 explain how those requirements are fulfilled, the former through the software architecture and the latter through the configuration the server allows. Section 7 gives an overview of how the server is already being used, the impact it has, and how it solves the originally proposed use cases.
Finally, we conclude in Section 8, where we also look towards the future of the server.

2. Related Work

2.1. Basic Solid interaction

Before diving deeper into the specifications that define the client–server contract, we start with an overview of what happens during a request to such a server, in order to describe the high-level interaction.

2.1.1. Prerequisites

Combining Linked Data with authentication and authorization is an ongoing research topic with different potential solutions. One suggested solution is the usage of WebIDs [4], which are Linked Data resources that uniquely identify a person or any other kind of agent. In the example below, we assume the user already registered their WebID with a Solid Identity Provider (IDP), which is able to prove that they are the owner of that WebID. The server is free to implement its own identity verification mechanism (email/password combination, a specific kind of token…).

2.1.2. Performing an HTTP request to a Solid server

When a client wants to access data on a Solid pod on behalf of a user, the following steps are performed:
1. The client asks the user to authenticate with their IDP, and receives the necessary authentication data in turn.
2. The client uses that authentication data to generate HTTP headers which prove the identity of the user to the Solid pod containing the data.
3. An HTTP request for a read/write/update/delete operation on a specific resource is sent to the Solid pod.
4. The Solid pod contacts the IDP to determine the validity of the authentication headers.
5. If valid, the server uses those headers to determine the user's credentials (such as the WebID) and their client.
6. The server determines which permissions are available for the given credentials on the targeted resource, such as read, write, create, delete, etc.
7. The server determines if the requested operation can be performed with the available permissions.
8.
If allowed, the server performs the operation and returns the result to the client.

### 2.2. Solid Specification Documents

The Solid ecosystem consists of a collection of specifications that clients and servers are required to adhere to. The interactions outlined above are captured in these specifications, of which the CSS is one implementation.

#### 2.2.1. History and current status

The initial version of Solid was developed in tandem with prototype implementations such as the Node Solid Server (NSS) [5]. While the specification and other implementations were still in development, the behaviour of NSS effectively defined what a "Solid server" was. This behaviour was first documented as notes and eventually as specifications in a W3C Community Group. At the time of writing, the transition to a W3C Working Group is being undertaken, which is able to create W3C Recommendations that serve as recognized standards. However, the Solid specifications are still evolving, with both changes to existing documents and additional documents being suggested as part of a multi-phase process. Hence, there is a need for an implementation that can be used to implement and test changes to the specifications, and to explore and prototype desired future behaviour of Solid implementations.

#### 2.2.2. Solid Protocol

The Solid Protocol specification [2] is the main entry point into the collection of specification documents that define Solid. As the main entry point, it defines which other specifications are required to be fulfilled for a server to be Solid-compliant; we cover these in the following subsections. In particular, it contains an adaptation of the Linked Data Platform (LDP) specification [6]. On top of the existing HTTP methods (GET to read data, POST to create new resources, PUT to write data, PATCH to modify, and DELETE to remove), it defines specific semantics for patching RDF documents and for interacting with containers of resources.
Containers are resources that group other resources together by providing RDF descriptions with containment triples.

#### 2.2.3. Authentication

The Solid-OIDC specification [7] defines everything related to authenticating with Solid. It expands upon the OAuth 2.0 [8] and OpenID Connect Core 1.0 [9] standards and defines how clients can identify themselves by requesting specific tokens from a server. It also defines how servers can provide these tokens and how they should verify their authenticity. To conform to the Solid Protocol specification, a server of Solid pods needs to be able to accept requests containing these tokens and verify them. An OpenID Provider, on the other hand, is a server where clients can register to generate such tokens. To verify the correctness of a token, the Solid server communicates with the OpenID Provider.

#### 2.2.4. Authorization

A Solid server needs to be able to restrict access to private data. The specification defines two possible access control systems that can be used to do this: Web Access Control (WAC) [10] or Access Control Policy (ACP) [11]. At a high level these are quite similar systems: users can add system-specific resources to the server, indicating the credentials that are required to perform certain actions on their data. For example, they can provide public read access to certain resources so everyone can see them, or allow everyone to create new resources in a specific container as a way to let people leave comments or other communication, while still only allowing the data pod owner to edit the data. The protocols differ in how policies are inherited and how clients are identified.

#### 2.2.5. Notifications

The Notification specification [12] clarifies how users can subscribe to specific resources, after which the server will inform them of any changes.
At the time of writing, the specification only clarifies how clients can register and the data models used during the communication process, but not which kinds of messages need to be sent out. It also specifies many different methods a server is allowed to use to send out those notifications, such as WebSockets or Webhooks.

2.3. Existing implementations

Several implementations of the specification exist, both on the server side and on the client side; we provide a non-exhaustive list of server implementations. Non-commercial implementations include projects such as the open-source Node Solid Server (NSS) [5]. NSS resulted from prototyping efforts during early phases of the Solid Protocol, and is used for development and testing. Its implementation is currently maintained by volunteers. Architected from a prototyping perspective back when the Solid Protocol was still forming, the NSS codebase is no longer well-suited to follow the current evolution of the specifications without a substantial rework and repurposing. Commercial implementations include the Enterprise Solid Server (ESS) [13] and TrinPod [14], both closed source at the time of writing. The ESS is designed for storing and managing sensitive personal data, and is used in production for these purposes. TrinPod positions itself for Digital Twin use cases using the Solid specifications.

3. Use cases

In this section we cover several use cases that we wanted to support by creating a new server. After giving an overview of the server, we return to these in Section 7.2, where we indicate how the server supports each use case.

3.1. Benchmarking the impact of authorization to inform the specification

A protocol researcher aims to benchmark the differences between WAC and ACP for an HTTPS client. These are different authorization schemes a Solid server can have, and the researcher wants to find out how they impact a request.
To this end, the researcher would need at least two Solid servers that are identical in every regard, except that one supports WAC while the other supports ACP. Preferably they would also have a Solid server without authorization as a baseline. This would allow them to accurately measure the impact of either authorization scheme, which can be used to inform future specification changes.

3.2. Performing user experience research on the onboarding experience

A societal researcher wants to compare different welcome experiences on a Solid server, specifically the sign-up experience and the initial layout and contents of a pod. This way they can determine what might be needed to improve the Solid experience for new users. For this, they want a server where they can easily replace the contents that get provided to new users, without having to write any code.

3.3. Supporting new operations

The behaviour of PATCH is still under discussion within the Solid W3C Community Group. The first proposed PATCH format relies on SPARQL Update, which has the benefit of being an existing W3C standard, but lacks a semaphore mechanism. An alternative, N3 Patch, defines a semaphore mechanism, but relies on the N3 language, which is currently not a standard. A researcher now wants to propose different PATCH formats along with an implementation, but without having to implement a full Solid server themselves.

3.4. Supporting the adaptation of research findings

Ongoing research looks at different aspects of the Solid Protocol and its implications on domains such as data management and security. Occasionally, findings from such research result in a necessity for changes or extensions to the specifications in order to alleviate the discovered concerns. As a concrete example, recent research indicated a tension between the granularity of document organisation and the granularity of the authorization system [15].
Their conclusion is that the same data needs to be exposed in different documents with different permissions.

4. Requirements

Out of the specific use cases in Section 3, we distilled several generic requirements that guide the design, architecture, and implementation of CSS. The main goal is to explicitly focus on the needs of two groups: researchers and developers.

4.1. Testable specification compliance

It stands to reason that the most important requirement for the server is that it is fully specification-compliant. While that is an implementation objective, it is also necessary that we can verify and prove that this is the case.

4.2. Evolve along with the specification

Solid is a combination of still-evolving specification documents. It is imperative that the server can keep up with these changes; an outdated server could damage the ecosystem by sowing confusion about the correct behaviour. Researchers and developers are at opposite ends here: researchers aim to inform the evolution of specifications, while developers prefer a more stable experience, yet want to be able to test their applications against different versions of the specification. Both sides require a server that evolves along with the specifications to achieve their goals. Since the server combines several different specifications, the architecture needs to be designed in such a way that changes in one specification do not break a requirement of another: independence of all layers is important.

4.3. Support multiple server roles

Multiple servers are involved in a Solid interaction: the pod server handles the core Solid protocol, and the OpenID Provider provides OIDC authentication. We need to provide a solution that covers all the necessary roles involved, thereby providing an out-of-the-box Solid experience. Having an all-in-one system allows anyone to get started as easily as possible; modularity allows different kinds of instances from the same codebase.
This allows any part of the set of Solid specifications to be investigated and experimented with, not just a subset of it, whether for experimenting with new features or investigating how to make an application work with the different components.

4.4. Modularity and configurability

When conforming to the Solid specifications there is still room for variability, such as which authorization system to use, or even how the data is stored in the backend. Configuring changes like this in the server should be as easy as flicking a switch to go from one option to another. For researchers it is important to be able to compare different variations so they can investigate the impact of certain changes. For example, having either WAC or ACP as the authorization system might have a major influence on the performance of the server, which might prompt changes in the specification to bring the two more in line with each other. To study such differences, researchers need to be able to set up server instances with different feature sets. Developers want different server variations to make their work easier. Specifically, they need ways to simulate certain server situations to see how their application reacts: for example, forcing the authentication layer to extract specific credentials to simulate different users, disabling authorization to focus on data management, or making the server serve faulty data to test exception handling.

4.5. Allow extensions with new features

One part of doing research on Solid is designing new features based on emerging needs, with the ultimate aim of producing new specifications for uniform behaviour. For example, the specification currently defines WAC and ACP, but one might want to investigate a new authorization scheme or way of enforcing policies [16]. Extensions could also replace existing parts of the server.
Someone might want the data to be stored in a new type of backend for example, or provide a new implementation of a feature that is highly optimised for certain scenarios.

4.6. Quick setup and teardown

A Solid server is not a simple piece of software. Generally there are additional steps that need to be taken before it can be started. These include configuration, starting external services, etc. Similarly, shutting down the server and resetting the system so a clean restart is possible can also take multiple steps. If we want the server to be used for rapid experimentation, it is important that there is as little overhead as possible. Researchers might want to quickly set up and switch between different kinds of servers to run their experimentation; for developers, this enables the server to be used within their test frameworks, automating the testing against a Solid server. For newcomers, it lowers the barrier of entry for getting started with Solid: the faster someone can go from reading about Solid to setting up a server, the better the introductory experience.

4.7. Error handling and logging

Many steps happen during a Solid interaction, and when something goes wrong in a decentralised system, it is not always straightforward to determine which component is at fault. Therefore it is necessary that the server has extensive error handling and logging. Researchers can use this to detect potential issues with specific interactions they might not have considered. Developers trying to debug or troubleshoot specific applications can receive better feedback this way.

5. Architecture

Based on the requirements in the previous section, we now introduce the architecture of the Community Solid Server as an open-source implementation of the Solid specifications, tailored towards research and development.
For specification conformance, CSS needs to not only provide the HTTP request handling for data interactions, but also to authenticate clients, authorise requests, send out notifications, etc. It also acts as an OpenID provider and handles user account management in that context. All of these features are orthogonal to each other and depend on the different specifications described in Section 2.2. Figure 1 shows how these orthogonal features interact with each other through the steps of a request to a Solid server. Each of the displayed components (except for the Client initiating the request) represents a part of the CSS architecture.

5.1. Main components

The different roles the server supports are independent of each other in the architecture. They might use some of the same utility classes and store data in the same way, but besides that, changes for one major component will have no bearing on the other ones. This allows us to limit the impact of evolving specifications as mentioned in the requirements. As CSS is a server that handles HTTP requests, each interaction is initiated by an incoming request, and terminated with an outgoing (data or error) response. After an initial routing step, the modus operandi is the same for every request: each component iteratively chips away at a small part of the request to facilitate the next step in the process, until all these small steps result in the output object that is then serialized towards the client. The first step determines which major component should handle the request, based on the request type.

5.2. Handling a Solid Protocol operation

When handling a request targeting a Solid resource, 4 core steps need to happen sequentially:
1. Parsing the request.
2. Extracting and validating the credentials of the client.
3. Verifying the authorization of the request.
4. Performing the operation described in the request.
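As an illustration, the four sequential steps can be sketched as a chain of small functions, each producing the input for the next. This is a minimal, hypothetical sketch; the names, types, and the hard-coded WebID are illustrative and do not correspond to actual CSS classes.

```typescript
// Hypothetical sketch of the sequential request pipeline; each step chips
// away at a small part of the request and hands a richer object to the next.
interface Operation { method: string; target: string }
interface Credentials { webId?: string }

type Step<I, O> = (input: I) => O;

// Step 1: parse the raw request into a normalized operation object.
const parseRequest: Step<{ raw: string }, Operation> = ({ raw }) => {
  const [method, target] = raw.split(" ");
  return { method, target };
};

// Step 2: extract credentials (here a placeholder WebID for illustration).
const authenticate: Step<Operation, { op: Operation; creds: Credentials }> =
  (op) => ({ op, creds: { webId: "https://example.org/profile#me" } });

// Step 3: verify the client may perform the operation.
const authorize = (input: { op: Operation; creds: Credentials }): Operation => {
  if (!input.creds.webId) throw new Error("401 Unauthorized");
  return input.op;
};

// Step 4: perform the operation and produce the response payload.
const perform = (op: Operation): string => `${op.method} on ${op.target} done`;

// Chaining the four core steps for a single request:
const response = perform(authorize(authenticate(parseRequest({ raw: "GET /alice/notes" }))));
```

In the real architecture each of these steps is itself composed of many smaller classes, but the overall flow is this same left-to-right chain.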
The result of these steps will then be used to generate the HTTP response that is sent back to the client.

5.2.1. Request parsing

In this step, the raw input is normalized and validated, so later classes do not have to worry about edge cases arising from permitted differences in description. The full URL of the request is reconstructed, as this is the identifier of the resource that is being targeted. The Accept headers are parsed into a preferences object to be used for content negotiation. The headers related to conditional requests, such as If-Match, are all combined into a single conditions object that can later be used for validating the request. All other relevant headers are combined into a single RDF metadata object. Finally, the body is processed in case of PUT, POST, or PATCH requests. For PUT and POST, this involves verifying the relevant HTTP headers and wrapping the data stream to prevent unexpected asynchronous errors. In the case of PATCH, the stream is immediately fully parsed into a Patch object containing all requested changes, as this determines the exact kind of operations involved and hence the required permissions.

5.2.2. Authentication

While authentication involves many separate steps, CSS outsources most of this to an external library [17] implementing existing specifications that are not exclusive to Solid. The result of executing this step is an object containing all the identifying information of the client, including, most importantly, the WebID.

5.2.3. Authorization

The authorization step determines if the agent identified by the credentials in the incoming request has the necessary permissions to perform the operation expressed therein. Once both the required and the available sets of access modes have been determined, they are compared to each other. If any of the required access modes is not found in the set of available modes, access to the resource will be forbidden and the request will be terminated with an appropriate 401 or 403 status code.
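The comparison of required versus available access modes might look like the following minimal sketch. This is illustrative only; the real CSS types and status handling are richer, and the `Mode` names here are simplified.

```typescript
// Illustrative check (not actual CSS code): compare the required access modes
// against those available to the agent. An unauthenticated client gets 401,
// an authenticated client missing a mode gets 403.
type Mode = "read" | "write" | "append" | "create";

function checkAccess(
  required: Set<Mode>,
  available: Set<Mode>,
  authenticated: boolean,
): number {
  for (const mode of required) {
    if (!available.has(mode)) {
      // Missing permission: the request is terminated here.
      return authenticated ? 403 : 401;
    }
  }
  return 200; // all required modes are available
}
```

For example, `checkAccess(new Set<Mode>(["read"]), new Set<Mode>(["read", "write"]), true)` yields 200, while a request requiring Write against a Read-only set yields 403 (or 401 when unauthenticated).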
During the authorization step, we determine which internal access modes are required to fulfill the operation. For example, a GET request requires Read, a PUT requires Write, and a POST creating a new resource requires Write and Create. For PATCH, we inspect the parsed request body. For each resource, CSS can determine which of those required access modes are actually available for the given agent. These can be specified using an access control system such as WAC or ACP. Each such system comes with its own implementations to parse the access controls and convert them to the CSS internal access modes. For example, WAC does not have a native Create mode; instead, this behavior is inherited through Append permissions on a resource’s parent container.

5.2.4. Operation

After authorization succeeds, the request has to be handled according to rules defined in the Solid Protocol specification. There is a specific class for each HTTP method, which calls the correct operation responsible for data management. These operations are captured in the ResourceStore interface, which contains key functions for performing CRUD operations on resources. To support everything needed in the backend, we use multiple ResourceStores, each with their own specific function, which are then chained together following the Decorator pattern to represent a single store. The first store uses a read/write locking mechanism to make sure it is not possible to perform simultaneous operations that would result in data conflicts. PATCH is the most peculiar method of those defined in the Solid Protocol. Whereas other HTTP methods correspond to a single function, the behavior of PATCH is defined entirely by the request body. To prevent the need for backend-specific PATCH operations, the CSS architecture includes a ResourceStore that performs PATCH operations as a set of elementary resource manipulations, independent of how data is stored.
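The Decorator chaining of ResourceStores can be sketched as follows. This is a simplified, hypothetical version: the actual CSS interface operates on streams and rich representation objects rather than plain strings, and its locking store is more sophisticated than this queue.

```typescript
// Simplified sketch of the Decorator pattern over ResourceStores.
interface ResourceStore {
  getRepresentation(id: string): Promise<string>;
  setRepresentation(id: string, data: string): Promise<void>;
}

// A base store that keeps data in memory.
class MemoryResourceStore implements ResourceStore {
  private data = new Map<string, string>();
  async getRepresentation(id: string): Promise<string> {
    const value = this.data.get(id);
    if (value === undefined) throw new Error("404 Not Found");
    return value;
  }
  async setRepresentation(id: string, data: string): Promise<void> {
    this.data.set(id, data);
  }
}

// A decorating store that serializes operations on the store it wraps,
// so conflicting simultaneous writes cannot interleave.
class LockingResourceStore implements ResourceStore {
  private queue: Promise<unknown> = Promise.resolve();
  constructor(private source: ResourceStore) {}
  private withLock<T>(fn: () => Promise<T>): Promise<T> {
    const result = this.queue.then(fn);
    this.queue = result.catch(() => {}); // keep the chain alive on errors
    return result;
  }
  getRepresentation(id: string): Promise<string> {
    return this.withLock(() => this.source.getRepresentation(id));
  }
  setRepresentation(id: string, data: string): Promise<void> {
    return this.withLock(() => this.source.setRepresentation(id, data));
  }
}
```

Because both classes implement the same interface, decorators can be stacked in any order, which is what lets CSS compose many small stores into one.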
Depending on the chosen PATCH body (currently N3 Patch and SPARQL Update are supported), a different algorithm is applied to an in-memory RDF document, which is only persisted in case of success.

Content negotiation

The modular CSS architecture relies on content negotiation, not just for end-to-end reformatting for clients based on Accept headers, but also for internal use. For instance, during the aforementioned handling of PATCH, CSS uses internal content negotiation to convert an RDF serialization into triples and back. All of this happens in a single ResourceStore that checks the preferences of requests and converts the response data to match what is preferred. To perform any kind of data conversion, we make use of a set of very narrowly focused conversion classes. All of these have a set of specific media types they can parse, and similarly a set they can convert to. For example, there is a converter that accepts various RDF serializations and outputs memory-native Quad objects, while another converter specifically converts Markdown to HTML. Exceptions and errors also internally pass through this system, such that detailed error reports can be relayed correctly to clients in different serialization formats. A pathfinding algorithm chains multiple converters together as necessary to create a path starting from the resource media type as found in storage to the preferred type requested by the client, streaming data rather than materializing whenever possible. This allows for new conversion paths to be supported through a single new converter, rather than needing to implement all variations. For example, CSS requires no dedicated converter from Turtle to JSON-LD, because there are converters from Turtle to Quads, and from Quads to JSON-LD. Chaining those together produces the desired result.

Performing the requested operation

Depending on the specific HTTP method and the type of the target resource, several checks and steps are selected.
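The Turtle-to-JSON-LD example can be reproduced with a small breadth-first search over converter edges. This sketch is purely illustrative and unrelated to the actual CSS implementation; the `internal/quads` media type is an assumed placeholder for the in-memory Quad representation.

```typescript
// A converter is an edge from one media type to another.
interface Converter { from: string; to: string }

// Breadth-first search for the shortest chain of converters from the stored
// media type to the requested one; undefined when no chain exists.
function findConversionPath(
  converters: Converter[],
  source: string,
  target: string,
): Converter[] | undefined {
  const queue: { type: string; path: Converter[] }[] = [{ type: source, path: [] }];
  const seen = new Set([source]);
  while (queue.length > 0) {
    const { type, path } = queue.shift()!;
    if (type === target) return path;
    for (const c of converters) {
      if (c.from === type && !seen.has(c.to)) {
        seen.add(c.to);
        queue.push({ type: c.to, path: [...path, c] });
      }
    }
  }
  return undefined;
}

// Example: Turtle reaches JSON-LD via Quads without a dedicated converter.
const examplePath = findConversionPath(
  [
    { from: "text/turtle", to: "internal/quads" },
    { from: "internal/quads", to: "application/ld+json" },
  ],
  "text/turtle",
  "application/ld+json",
);
```

Adding one new converter edge automatically makes every media type reachable through it available, which is the "single new converter" property described above.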
For example, POST only works when targeting a container. Similar to the idea behind the ResourceStore that handles PATCH operations, many of the steps here are independent of how the data is actually stored. The final ResourceStore implements this general behavior. Backend-specific implementations are hidden behind a more elementary DataAccessor interface that can be used to read and write data. There is then a DataAccessor implementation for storing data in memory, on a file system, etc.

6. Configuration

While we use the term “Solid server” throughout this paper, this can be a misnomer, as it might give the impression that a server is one opaque monolith. As indicated in Section 5, there are instead many components that play a role in a Solid operation. While these can all be realized on the same server, there is no such requirement; they could be split up over different servers with their own responsibility. Even then, there is still significant wiggle room as to how a server fulfills one or more of these roles. One of the core parts of CSS is that these roles can be configured differently depending on the needs of the researcher or developer. For example, one might want to change the authorization system used by the server, or perhaps they want CSS to only support some of the necessary roles. In case someone already has their own Solid IDP setup, they might want to disable that part of CSS and have their server only handle the data operations. All of the CSS classes focus on solving a specific problem, and there are no classes that instantiate these and link them all together. Instead, we make use of Dependency Injection with the Components.js [18] framework. Components.js is non-invasive as all of its configuration happens outside of the source code, in external configuration files that describe how classes are linked to each other and which parameters they require.
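The split between general, backend-independent behavior and the elementary DataAccessor can be sketched as follows. All names are hypothetical simplifications of the real interfaces; the point is that the general store enforces protocol rules (such as POST requiring a container target) while the accessor only knows how to read and write raw data.

```typescript
// Elementary backend interface: only raw reads and writes.
interface DataAccessor {
  read(id: string): Promise<string>;
  write(id: string, data: string): Promise<void>;
}

// One backend keeps data in memory; others could target a file system or
// database behind the exact same interface.
class InMemoryAccessor implements DataAccessor {
  private map = new Map<string, string>();
  async read(id: string): Promise<string> {
    const data = this.map.get(id);
    if (data === undefined) throw new Error("404 Not Found: " + id);
    return data;
  }
  async write(id: string, data: string): Promise<void> {
    this.map.set(id, data);
  }
}

// The general store implements protocol rules once, for every backend.
class GeneralStore {
  constructor(private accessor: DataAccessor) {}
  // POST creates a new resource inside a container (identifier ends with "/").
  async post(container: string, slug: string, data: string): Promise<string> {
    if (!container.endsWith("/")) throw new Error("405: POST requires a container");
    const id = container + slug;
    await this.accessor.write(id, data);
    return id;
  }
}
```

Swapping the backend then means injecting a different DataAccessor, without touching the general store, which is exactly the kind of substitution the configuration system in Section 6 enables.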
These descriptions are RDF, usually JSON-LD, which means that configuration files are valid RDF serializations. It thereby provides the flexibility that is necessary to compose a server as the user wants, at the cost of increased complexity. In Components.js, TypeScript class instantiations correspond to RDF class instantiations. Every subject is a class, with its type corresponding to a TypeScript implementation. Other predicates are used to define its constructor parameters and can, since it is RDF, link to other objects. To then change which components are used in a server instance, only the Components.js configuration has to be changed; the actual TypeScript code does not need to be touched. To help users get started, the server comes with several pre-defined configurations, covering a range of possible feature combinations. These already cover several of the more expected server setups, with some variations in data storage methods or authorization systems for example. An example of such a config is found in listing 1. The preset configurations are made by clustering related components together in partial configuration files, and importing them in the main entry point. Features can then easily be chosen by changing what is imported. For example, to swap between WAC and ACP as authorization system on the server, the WAC configuration import would have to be replaced by the ACP import. For the example shown in listing 1 that would be line 19. There are plenty of choices that can be made by using the imports like that, such as how data is stored, whether users can register, HTTP vs HTTPS for connections, etc. 
{
  "@context": "https://community-server/^7.0.0/components/context.jsonld",
  "import": [
    "css:config/app/init/initialize-intro.json",
    "css:config/app/main/default.json",
    "css:config/app/variables/default.json",
    "css:config/http/handler/default.json",
    "css:config/http/middleware/default.json",
    "css:config/http/notifications/all.json",
    "css:config/http/server-factory/http.json",
    "css:config/http/static/default.json",
    "css:config/identity/access/public.json",
    "css:config/identity/email/default.json",
    "css:config/identity/handler/default.json",
    "css:config/identity/oidc/default.json",
    "css:config/identity/ownership/token.json",
    "css:config/identity/pod/static.json",
    "css:config/ldp/authentication/dpop-bearer.json",
    "css:config/ldp/authorization/webacl.json",
    "css:config/ldp/handler/default.json",
    "css:config/ldp/metadata-parser/default.json",
    "css:config/ldp/metadata-writer/default.json",
    "css:config/ldp/modes/default.json",
    "css:config/storage/backend/memory.json",
    "css:config/storage/key-value/resource-store.json",
    "css:config/storage/location/root.json",
    "css:config/storage/middleware/default.json",
    "css:config/util/auxiliary/acl.json",
    "css:config/util/identifiers/suffix.json",
    "css:config/util/index/default.json",
    "css:config/util/logging/winston.json",
    "css:config/util/representation-conversion/default.json",
    "css:config/util/resource-locker/memory.json",
    "css:config/util/variables/default.json"
  ]
}

Listing 1: A Components.js configuration of a server instance, found at https://github.com/CommunitySolidServer/CommunitySolidServer/blob/v7.0.0/config/default.json

7. Sustainability, Usage & Impact

All code of the Community Solid Server is open source under the MIT license. At the time of writing, the repository [3] has been starred 473 times, forked 117 times, and is used in 149 other repositories. 46 people have contributed to the repository.

7.1.
Sustainability

During the development of the server, we have always focused on making sure the code base remained of high quality. One aspect of this is requiring the unit tests to always have 100% code coverage on all code in the project. While this is not immediately an indication of everything working as intended, it does make sure that a developer checks that new classes output data as expected. Besides the unit tests there are also extensive integration tests, setting up complete instances of the server and verifying these instances conform to all the necessary specifications. The Conformance Test Harness (CTH) [19] is a server-independent test framework for Solid servers. It runs a test suite to verify if a specific server fulfills the specification requirements. Besides our own internal tests, we also run the CTH as a form of external audit on the server functionality, to prove that the server fully conforms to the specifications. This test harness is also an example of the community impact of creating this server. During the creation of this test harness, we provided feedback on certain tests being too strict or incorrect, while other tests showed us where the server implementation was wrong. This bidirectional approach caused both systems to improve.

7.2. Use case impact

Having covered how the server works, we will now discuss how this solves the use cases described in Section 3.

7.2.1. Benchmarking the impact of authorization to inform the specification

A researcher aimed to compare WAC and ACP with a baseline. This means they need a server that loads the WAC component, one that loads the ACP component, and one that loads neither. As we have seen in Section 6, they can easily change which components are used by changing the imports in their configuration file, like the one in listing 1. Configuring HTTPS involves passing the certificate, which can either be set there in that configuration, or passed as command-line arguments.
This allows the same configuration to be reused with different certificates. In conclusion, they end up with three different configurations, which can all be used to independently set up the servers they need. Specifically, the server without authorization can also be useful for developers not wanting to be bothered with restrictions during development.

7.2.2. Performing user experience research on the onboarding experience

The societal researcher wants to customize the welcome experience of a Solid server. Which HTML files to use, and what the template of a new pod is, is defined in the CSS configuration as part of a JSON-LD file. Components.js allows specific parts of a configuration to be replaced, so this can be used to replace the HTML parts wherever necessary. Similarly, the pod contents are determined based on templates, which can also easily be updated. Again, developers can also make use of these ideas, as they can quickly set up pods with specific contents, without having to perform the initialization themselves.

7.2.3. Supporting new operations

We want to support a new PATCH format. How a PATCH is resolved, and which components are necessary, was covered in Section 5.2.4. Due to the nature of Components.js, the researcher can develop the new components in a separate repository, independent of the CSS core. Afterwards these can be linked together with the already existing configurations. This allows the new PATCH algorithm to be shared as a separate piece of code with the W3C group. Due to its independence it can easily be tested with different authorization frameworks, such as ACP and WAC, which is an additional bonus as PATCH has specific interpretations in authorization.

7.2.4. Supporting the adaptation of research findings

Recent research showed that exposing the same data through different resources would solve several problems [15]. Adding support for such a feature in CSS is possible by creating new components.
As the proof is in the pudding, we implemented a new component that supports this idea [20]. This new component allows users to define so-called derived resources, which are generated by determining a set of resources and a query to perform upon them. As with other components, this can be combined with existing configurations to still provide the full flexibility that is possible with a CSS configuration.

7.3. Community impact

Since people started using the server, several of them have provided feedback to help improve CSS. There have been several pull requests by community members to extend the server or fix specific issues. At the time of writing there have been 44 contributors to the repository. Several external components have also been developed which either make use of the server, or create a new component that extends the server functionality, including:
– integration with calendar management systems [21]
– Internet of Things integrations [22]
– Data-Kitchen, a desktop app combining local files and Solid pods [23]
– Solid-Redis, a component for the server to use Redis as data storage [24]
– input validation using shape files [25]
– publishing event streams via Solid [26]

Another goal we wanted to achieve was to support client developers that need a server to test their client against during development. Below are some applications that use the server specifically for this purpose:
– viewing and manipulating personal data in your Solid pod [27]
– a recipe manager to collect all your recipes [28]

7.4. Supporting people who use the server

Due to the server aiming to support many different scenarios, it can be overwhelming for new users to know how and where to start. To help users there, we have created extensive documentation, tutorials, and tools explaining and helping with different parts of the server. The documentation [29] of the server is the best place to start as it links to all other resources available.
The documentation itself covers several server-specific features and how to use them, such as automating pod creation on server startup for testing purposes. Besides the user documentation there is also an overview of core parts of the architecture. The components discussed in Section 5 are explored more deeply there. Finally there are the full technical definitions of all the classes the package exports, which can be used by projects aiming to extend the server. We created several extensive tutorials that guide the user through solving specific problems. One tutorial [30] helps users who are completely new to Solid and shows how to interact with all the Solid core principles by setting up a CSS instance. It starts with showcasing the core Solid HTTP requests, after which it also introduces how authentication and authorization can be combined. For users who want to extend the server there is a specific tutorial [31] that covers all the possible options in which configurations can be modified. An example repository [32] creates a Hello World component and extends CSS with it. Developers can copy that repository and replace its code with their own where necessary. It fully documents the function of every file and includes examples of how to add a new component to an existing server configuration and how to set up automated testing for it. Finally, as mentioned in Section 6, the server offers a very large number of configurable parameters for every one of its features. A disadvantage of using this method to configure a server is that it can be overwhelming for users not familiar with Components.js. To this end, we created a Web-based graphical interface [33] that can generate such configurations automatically, based on the selection of desired features.

8. Conclusions & Future Work

We set out to create a Solid server with a specific set of requirements, focusing on flexibility, extensibility, and support for specific kinds of Solid users.
Due to the usage of dependency injection, it is possible to run many variations of the server with different features, and to create new components that can be added. By structuring the configuration and providing plenty of supporting tools and documentation, we have lowered the barrier of entry as much as possible, making the server accessible for people looking to experience Solid, without hindering users looking for more advanced features. There are several situations in which the server is being used: people created different components to be added to a default installation, it is being used during the testing of client applications, and there are several running Solid server instances making use of this software. More and more people are finding their way to the repository and interacting with it, showing a growing demand for a server that fulfills these needs. Creating a new server in the Solid ecosystem also helps in improving the Solid specifications. By providing an alternative implementation, it can reveal hidden assumptions that are not specified, but are depended upon due to an existing implementation having that specific behaviour. Work on the server is not finished yet; there are still many open issues that need to be resolved, many of which are feature requests on how the server can be extended. The CSS aims to take a leading role in shaping future Solid specifications, by providing researchers and developers with a flexible environment for testing and experimentation.

Acknowledgements

The research in this paper was supported by SolidLab Vlaanderen (Flemish Government, EWI and RRF project VV023/10). The development of the Community Solid Server has been supported by Inrupt, Inc. The authors would like to thank Tim Berners-Lee for his dedication to the Solid project, with one of his many contributions being the design and architecture of the Node Solid Server, which crucially informed CSS.
In addition, he has been a daily user and tester of our technology, and we are very grateful for his continued feedback and support. References
WRITING FOR TRANSLATION

GETTING STARTED: Guide, October/November 2009

PLANNING AND WRITING FOR TRANSLATION
OPTIMIZING THE SOURCE USING TRANSLATION MEMORY
ELEMENTS OF STYLE FOR MACHINE TRANSLATION
OPTIMIZED MT FOR HIGHER TRANSLATION QUALITY
CONTROLLED AUTHORING TO IMPROVE LOCALIZATION

Getting Started: Planning and Writing for Translation

Believe it or not, setting out to write lyrically beautiful copy for a manual or even the web may not be the most straightforward way to get to clear translation. These authors have some better ideas. Barb Sichel begins this Getting Started Guide with an overview on planning and writing for translation, and then Joseph Campo offers the findings from a project using a translation tool to find already-translated phrases to write the original copy. Ken Clark gives a short guide on writing for machine translation (MT), and Lori Thicke outlines why MT allows for quality translation in the first place. Ultan Ó Broin finishes things with a discussion on controlled authoring.
The Editors

Planning and Writing for Translation (Barb Sichel)
Barb Sichel, director of business development at International Language Services, Inc., has over 25 years of sales, marketing and management experience.

Optimizing the Source Using Translation Memory (Joseph Campo)
Joseph Campo, a senior technical writer at Dassault Systèmes SolidWorks Corporation in Concord, Massachusetts, has ten years of technical writing experience.

Elements of Style for Machine Translation (Ken Clark)
Ken Clark, CEO of 1-800-Translate, worked previously as a journalist, screenwriter and speech writer for Japanese and American government officials.

Optimized MT for Higher Translation Quality (Lori Thicke)
Lori Thicke is cofounder and general manager of Lexcelera (formerly Eurotexte), established in 1986, as well as cofounder of Translators Without Borders.
Controlled Authoring to Improve Localization (Ultan Ó Broin)
Ultan Ó Broin, MultiLingual editorial board member and Blogos contributor, works for Oracle in Ireland. He has an MSc from Trinity College Dublin.

Documents and online communications are translated to achieve specific objectives. Your goal may be to execute a global communication plan, meet regulatory requirements, avoid liability or drive revenue by addressing target audiences in their native language. Whatever the outcome, you will need clear communication of a single message across all of the languages involved to get there. Lately, cost considerations have become just as important as the accuracy of the translation. Consequently, writing for successful translation today involves planning your project so that you can convey your message within a reasonable budget.

Message and scope

First, and most obviously, decide what you need to communicate, and communicate it as simply and directly as possible. Determine what is most relevant to your target audience and what you must translate to achieve your particular objectives. Take the time to think your project through from the perspective of the recipient, and do some research if you don’t know the recipient’s perspective. Translating everything you publish in English may not maximize the return on your translation investment. You might not have the luxury of translating every one of your product data sheets, for example, so focusing on product line summary brochures instead may be less costly. If it is beyond your budget to translate your entire 200-page employee manual, perhaps you can focus on only those critical policies most needed to protect your firm’s interests. Some projects, such as catalog or website translations, may warrant the creation of abbreviated or revised versions for target audiences.
Sections dealing with customer support or how to locate a sales representative, for instance, may need modification so that they are relevant in the geographic locale in which they will be used. Other types of projects require translation of ancillary materials that may not immediately come to mind. Technical documentation for large-scale industrial equipment may also involve translating warning labels and software user interfaces. Again, to save money, perhaps you can omit a section such as the corresponding parts list. If your customers can’t order parts in Japanese by calling your customer service line, why provide a Japanese parts list? Understanding the intent and full scope of your project will enable you to plan your budget and work with your translator to determine the correct order in which to proceed. A phased implementation may be easiest to manage while allowing you to complete the highest priority requirements first. Layout For printed materials, properly planning your layout even before you start writing copy can greatly influence the ease and cost of managing your project. Quite literally, it pays to understand which factors affect the cost and quality of your translation. Then you can craft your presentation to achieve the desired outcome within your allotted budget. A few things to consider are the choice of desktop publishing application and layout. If this is going to be a printed document with color plates, you might look at whether enough room is left for text expansion to accommodate any graphics. Text will expand in some languages and may contract in others. This has implications for the font sizes and page margins you select, as well as graphics. Chinese characters that need to be reduced to a 6-point font in order to fit on a page will be illegible. Also, check the graphics accessibility. Don’t plan to embed words into layer upon layer of graphics.
Your translator may not be able to access them for translation at all or may be able to do so only at great expense to you. Plan to place your text labels beneath graphics rather than inside of them. Text must be “live,” that is, accessible independently of the graphics in order to be translated and reinserted in the same position. The same concept applies to screen shots. Unless you translate your software first and provide new screen shots, the English copy locked within your graphics cannot be accessed for translation. If you must use preexisting graphics, your translator may be able to recommend solutions such as a reference table so that the reader can still understand your message. Too often, project costs are unnecessarily high or the quality of the finished translation is compromised because translation was never considered when a document was originally created. Your copy Simple, straightforward text is easiest to translate. Say what you mean as concisely as possible. Word count is a key factor in the cost of your translation, so, if possible, keep sentences short and limited to a single idea. If English copy already exists for your pending translations, review and revise the content. Formal copy style with correct grammar, spelling and punctuation will be most easily understood by your translator. Consider also your audience’s education level and communication style and then select the appropriate tone. Instructions to a physician prescribing medication should be written differently than instructions to the patient taking the medication. Avoid words with double meanings and references or metaphors that may not make sense in other cultures. Don’t rely on buzzwords, abbreviations, industry jargon, colloquial expression or humor. Create standardized text whenever possible. 
If you can reuse blocks of copy from one document to the next, you will save time and money on your translations and ensure consistency across all of your written and online communications. If your content is highly technical in nature or your industry-specific terms are prone to multiple meanings, supply your translator with reference material or glossaries for key terms. Links to websites or product catalogs can minimize the need for research during the translation process. Some copy may not translate well or may translate into some languages but not others. Be particularly aware of this if you are creating ad copy or marketing materials. It is worthwhile to check with your translator early, before you have invested heavily in developing graphics or a tagline to accompany your corporate logo. Choosing the right words and the right images or colors for your presentation may make the difference between a seamless translation and one that falls completely flat with your target audience. Acronyms should be avoided. The problem in trying to translate an acronym is that once you translate the theme word, the letters change and they no longer cross-reference to the supporting ideas you want to convey in your target languages. A native-speaking translator is a good resource for spotting things that won’t play well with your target audience. Basic localization — gearing your translated document to a particular country, region or target audience — is usually part of any well-executed translation project. Extensive localization, to the point of creative strategizing, however, is a specialized skill beyond the scope of typical translation projects. If you suspect your project requires an unusual amount of attention, check with your translator. Provide only fully proofread, final copy for translation. Drafts are fine for budgetary reasons, but works-in-progress are unsuitable for translation and will leave your project prone to errors, inconsistencies and higher costs.
If you intend to update documents later with new product models or next year’s catalog, the level of attention you devote to tracking changes and version control now will be well worth your effort. Formatting Locate your source files for older documents. This includes all of the desktop publishing and accompanying graphics files. Are they with your graphics design firm or archived somewhere within your organization? Your translator may not be able to replicate your formatting and graphics without them. Provide files to your translator in the same format you would like to receive back. PDFs are fine for reference, but depending on the size of your document and the application used, having the source files available may significantly impact the time to quote your project, the cost of your project and the appearance of the final output. If you are working from hard copies or scanned documents, manual processes will have to be employed that will similarly affect your project. Given the source files, most translation firms can replicate standard file formats, even for software code. How you present content for translation impacts cost, timeline and the ease of implementing your project. If you do any cutting and pasting at your end, have your translator provide a “post-format review.” This ensures proper text flow and the overall quality of your presentation before you print or post it on the internet. Costs for this service are usually nominal and can prevent potential embarrassment. Formatting foreign character sets on your own can be a challenge, even for an experienced graphics person, and you may not have the right tool set. Languages such as Arabic that read right to left require special software versions and the ability to reorient everything on a page. It is best not to attempt this on your own. 
If you are translating software for user interfaces, handheld LCD screens or similar uses, be prepared to answer questions about your ability to handle foreign character sets, space limitations and other factors that specifically affect these types of projects. If you need to resize short translations to fit an ad or label, ask for an Adobe Illustrator EPS file that has been “outlined.” This provides the best of both worlds. It is locked down like a graphic to eliminate the possibility of introducing errors during formatting, but leaves flexibility for resizing. You can format the text to meet your needs, even for a character set that you may not have installed. Lastly, use the right application for your project. Some applications play well with the automated tools employed by translation firms while others require a lot of manual manipulation. Microsoft Word works fine for short documents, but FrameMaker may be a better choice for large manuals. If you use charts, live embedded links or manually inserted multiple carriage returns, the level of difficulty in working with your files for translation will increase, and this will impact your cost. Timelines Translation is a meticulous, skilled process similar in nature to technical writing. Though you provide the concept and the source files, your translator must take time to fully comprehend your meaning and find the best way to replicate the tone and content in his or her native tongue. Often there is research involved or requests for you to provide clarification. Your project involves much more than merely translation. Numerous details are involved in preparing your files for translation, gaining commitment from the best qualified translators, proofreading, formatting and ensuring proper quality control. For multiple language translations, managing your project becomes even more complex. If you make a single change, it needs to be disseminated across teams of translators and proofreaders for each language. 
Allow realistic timelines for your projects to be completed. A simple brochure may take several days, while a 300-page manual may take several weeks. Advise your project manager in advance if you must meet a specific deadline so that your project can be managed accordingly. Partnering with a vendor Since the quality of the translations you publish reflects on you and your organization, establishing a comfortable working relationship with your vendor is essential. Carefully crafted branding strategies can be derailed in an instant by sloppy or inaccurate work. Even a single poorly chosen word can alter your intended meaning. And just imagine your customer purchasing a piece of equipment only to find that the documentation doesn’t make sense or that the table of contents doesn’t match the order of the text. You will rely on your translation vendor to provide you with accurate translations that are audience appropriate and delivered, print ready, within the specified timeframe. You should also educate yourself as to their quality processes and experience level with projects similar to yours so that you can move forward with full confidence. While there is no single industry certification for translations, there are third parties such as TÜV or the American Translators Association that provide quality testing and auditing. It is perfectly acceptable to ask for credentials. In many cases, your own in-house quality policies or regulatory requirements demand that you do. Optimizing the Source Using Translation Memory Joseph Campo How many times have you written something and known that you wrote something similar, but can’t remember where it was or how it was written? If you could only find that text and replicate it, you would save money and time for your translation team by reusing already-translated text strings and would produce more consistent documentation. This article describes a pilot project that tested a potential solution to this issue using translation memory (TM).
I hypothesized that if our technical writers could tie our authoring process into an English TM that contains already-translated text strings, we could find existing English text strings, reuse them on new topics and lower our translation costs. In effect, the documentation team would pretranslate their new English documentation to maximize matches against existing English text strings before sending topics to the translators who use the same TM. We would use the English (source language) TM to improve the quality of fuzzy matches and reduce the number of words. Fuzzy matches indicate a percentage match of new or changed text against existing already-translated text. A higher percentage fuzzy match means the text string more closely matches existing translated text. The higher percentage the fuzzy match, the lower the cost to translate the text. Totally new text strings are the most expensive to translate, so I tried to reduce new words used. Because we translate into 12 languages, there is a great potential for cost savings. After approval of the pilot project, I worked with my manager to schedule two months of project time. The translation team manager provided me with a TM tool — Trados, in my case — and I was ready to start the project after several days of training. Project design We use RoboHelp HTML to create online help and deliver multiple compiled help files (chms). I chose the main SolidWorks help to use in the pilot because it is our largest chm, with approximately 2,000 topics. I went back in time and created an English TM. I collected 73 new and 39 changed topics that documentation had actually sent to the translation team during the SolidWorks 2007 development cycle. I used the Analyze tool in Workbench to obtain an original estimate of a full-cost translation. I also obtained an original estimate for a full-cost translation for the same topics from our outsourcing localization vendor, using German as the target language. 
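The fuzzy-match pricing logic described above (the higher the match percentage, the lower the per-word cost) can be sketched as a small cost model. The tier boundaries and per-word rates below are illustrative assumptions only, not the pilot's or any vendor's actual price list:

```python
# Hypothetical fuzzy-match rate card: (minimum match %, cost per word in $).
# The tiers and rates are illustrative, not actual vendor pricing.
FUZZY_TIERS = [
    (100, 0.04),  # 100% match: cheapest to confirm
    (95, 0.08),
    (75, 0.10),
    (50, 0.12),
    (0, 0.16),    # "no match": full new-word rate
]

def word_cost(match_pct):
    """Return the assumed per-word rate for a given fuzzy-match percentage."""
    for threshold, rate in FUZZY_TIERS:
        if match_pct >= threshold:
            return rate
    return FUZZY_TIERS[-1][1]

def topic_cost(segments):
    """Estimate a topic's cost; segments is a list of (word_count, match_pct)."""
    return sum(words * word_cost(pct) for words, pct in segments)

# Pretranslating against the English TM raises match percentages,
# which is exactly what lowers the cost estimate:
before = topic_cost([(120, 0), (80, 60)])   # mostly new text
after = topic_cost([(120, 50), (80, 100)])  # same words, better matches
```

Raising a segment from no match into even a mid-range fuzzy band cuts its rate, which is why the pilot chased both higher match percentages and lower word counts.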
A dual monitor setup was essential to this project. On my right monitor, I opened Workbench and ran topics individually through the English TM to pretranslate them. On the left monitor, I opened the original HTML topic that had been sent to translation. When I ran a topic through Workbench, it provided a percentage match of the new text against the existing TM on a string-by-string basis. I used these suggestions to change the English source text in HTML on the left monitor and to improve the percentage of fuzzy match. I also paid strong attention to trying to reduce the number of new words. After pretranslating each topic, I used the Analyze tool in Workbench to gauge and record the amount of savings for each topic. When I completed pretranslating all the topics, I calculated the costs and savings using the research data. I also obtained a translation cost estimate from our outsourcing localization vendor for the now pretranslated topics. Results In both Table 1 and Table 2, results show a modest reduction in per-language translation costs when comparing the original cost estimates to the cost estimates after using Trados to research the TM (post-Trados). <table> <thead> <tr> <th></th> <th>Original cost</th> <th>Post-Trados project cost</th> <th>Savings</th> </tr> </thead> <tbody> <tr> <td>New topics</td> <td>$4,807.64</td> <td>$4,301.79</td> <td>$505.85 (10.5%)</td> </tr> <tr> <td>Changed topics</td> <td>$1,957.97</td> <td>$1,554.78</td> <td>$403.19 (20.6%)</td> </tr> <tr> <td>Grand total</td> <td>$6,765.61</td> <td>$5,856.57</td> <td>$909.04 (13.4%)</td> </tr> </tbody> </table> Table 1: Cost estimate — vendor full-cost translation (includes translation, review and layout/DTP). 
<table> <thead> <tr> <th></th> <th>Original cost</th> <th>Post-Trados project cost</th> <th>Savings</th> </tr> </thead> <tbody> <tr> <td>New topics</td> <td>$4,463.57</td> <td>$3,837.79</td> <td>$625.78 (14.0%)</td> </tr> <tr> <td>Changed topics</td> <td>$1,322.25</td> <td>$1,049.49</td> <td>$272.76 (20.6%)</td> </tr> <tr> <td>Grand total</td> <td>$5,785.82</td> <td>$4,887.28</td> <td>$898.54 (15.5%)</td> </tr> </tbody> </table> Table 2: Cost estimate — Trados Workbench Analyze tool full-cost translation (includes translation, review and layout/DTP). Details — new topics: I created charts to display the percentages of fuzzy matches in the original versus the post-Trados topics. The post-Trados topics (Figure 1) showed an increase in the number of 100% matches and a decrease in the number of No Matches. In terms of percentages, there was also an increase in the number of 50%-74% fuzzy matches. The remaining fuzzy match ranges were approximately equal to or less than the percentages of the original new topics. - Total words reduced by 10% (2,028 words). - 100% match increased by 439 words. - No match reduced by 1,613 words. Details — changed topics: Changed topics are existing topics with changes to already-translated text. These charts revealed a similar trend as with new topics. In terms of percentages, the post-Trados changed topics showed an increase in the number of 100% matches of about 10%. The remaining fuzzy match ranges were less than the percentages of the original changed topics. Overall, there is a greater percentage of 100% matches and a smaller percentage of no matches compared with the new topics. Analysis The cost estimates are within acceptable deviations that permit me to say that the Analyze tool results are defensible. I discussed the deviation with a senior employee in our research department. Standard deviations are complex to calculate and vary based on many parameters. 
When I provided the deviations, particularly for the new topics, the research employee felt that the 3.5% difference was within an acceptable deviation range (Table 3). I then met with the translation manager to discuss the difference in costs between our outsourcing localization vendor and the Analyze tool. The translation manager confirmed that translation costs will vary depending on the vendor, the language, and the services provided. Having multiple variables makes it impossible to provide an exact cost estimate to fit all situations. The translation manager informed me that only about 50% of our outsourced translation items require full-cost translation. She suggested we apply a different cost metric to the other 50% of our outsourced translation items; this metric is called raw translation, which includes translation of the text only. The savings in percent are similar to full-cost translation using the Analyze tool. Notably, for new topics, raw translation saved 14.1% while full-cost translation saved 14% (Table 4). For the purposes of this pilot, I decided to split the difference between the outsourcing localization vendor savings of 10.5% and the Analyze tool full-cost translation savings of 14% and to use an estimated savings of 12.2% for new topics. This seemed like a reasonable compromise. Savings projection Outsourcing costs for a typical release vary from $100,000 minimum to $400,000 maximum, depending on how many new products and services requiring localization are added to our suite of products, their length, and the number of languages supported. If the process was applied to all new documentation that we send to translation for outsourced localization, an estimated cost savings of 12.2% (between $12,200 and $48,800) would be achieved (Table 5). 
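The projection above is straightforward arithmetic: split the difference between the two savings estimates and apply that rate to the release budget range. A quick sketch, using the article's own figures:

```python
# Split the difference between the vendor estimate (10.5%) and the
# Analyze tool estimate (14.0%) for new topics: (10.5 + 14.0) / 2 = 12.25,
# which the article rounds down to 12.2%.
savings_rate = 0.122

# Typical per-release outsourcing costs, from the article.
low_cost, high_cost = 100_000, 400_000

projected_low = round(low_cost * savings_rate)    # $12,200
projected_high = round(high_cost * savings_rate)  # $48,800
```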
<table> <thead> <tr> <th></th> <th>Original cost</th> <th>Post-Trados project cost</th> <th>Savings</th> </tr> </thead> <tbody> <tr> <td>New topics</td> <td>$2,409.98</td> <td>$2,071.03</td> <td>$338.95 (14.1%)</td> </tr> <tr> <td>Changed topics</td> <td>$815.93</td> <td>$629.67</td> <td>$186.26 (22.8%)</td> </tr> <tr> <td>Grand total</td> <td>$3,225.91</td> <td>$2,700.70</td> <td>$525.21 (16.3%)</td> </tr> </tbody> </table> Table 4: Cost estimate — Trados Workbench Analyze tool raw translation (includes translation only). <table> <thead> <tr> <th></th> <th>New topics savings</th> <th>New topics deviation</th> <th>Changed topics savings</th> <th>Changed topics deviation</th> <th>Grand total savings</th> <th>Grand total deviation</th> </tr> </thead> <tbody> <tr> <td>Vendor</td> <td>10.5%</td> <td>33.3%</td> <td>20.6%</td> <td>0%</td> <td>13.4%</td> <td>15.7%</td> </tr> <tr> <td>Analyze tool</td> <td>14.0%</td> <td></td> <td>20.6%</td> <td></td> <td>15.5%</td> <td></td> </tr> </tbody> </table> Table 3: Full-cost savings comparison/deviation between vendor and Analyze tool. Changed documentation is typically localized by our in-house translation team, so for us there will be no “cost savings” per se. However, the translation team would experience a time savings of between 20.6% and 22.8% because of the increased quality and number of matches as well as the reduced word count. Pilot project conclusions Using a TM tool is viable in pretranslation only if we consider its value in increasing the consistency and quality of documentation. I could not justify using the tool on a cost-savings basis alone. Savings were achieved by both reuse of existing text and aggressive word-count reduction. However, the anticipated translation savings only partially offset the cost of the skilled writer’s time in editing. I spent approximately 30 minutes per topic using my TM tool. For the 73 new topics, I spent approximately 37 writer hours.
Using $50 cost per writer hour, I spent $1,850 in time to achieve only $625 savings in outsourcing costs. Labor costs were triple the savings achieved, for one language. Actual cost savings are only achieved when factoring in that we translate into 12 languages. Savings = total cost savings ($7,500) – time spent ($1,850) = $5,650. If the TM tool were used to only search for reusable text (no word reduction), the results would be even less impressive (estimated 2.4% savings in outsourcing costs). Beyond the case study: related research I queried translation experts as to whether any similar projects had been undertaken. Authoring memory tools have been around for over ten years. An article by Jeff Allen in 1999 discussed how authoring memory could be used in conjunction with controlled language to aid in translation (www.transref.org/default.asp?dscsrc=/u-articles/allen2.asp). The new Author-it product, for example, uses fuzzy logic matching within a content management system. I contacted Nabil Freij, president of GlobalVision, and according to him, this pilot project was a unique approach. The key to reducing localization costs is to reduce word count. Some companies are implementing controlled English to reduce word count, increase the 100% matches, and also to transition to machine translation (MT). According to Freij, “MT engines can perform better under restricted and controlled vocabulary.” In his experience, “most tech pub writers do not want to deal with localization issues during the authoring stage, they are simply too overworked. . . . We often can’t get them to edit their work, let alone reduce the word count or make it consistent.” I have been following the progress of the SDLX AuthorAssistant (SDLXAA) product, which seems similar to my pilot project. SDLXAA lets writers write, then runs the document against a TM to offer suggestions for improved matches and reuse. 
According to Sue Blaisdell, information architect at Avaya, “with AuthorAssistant, you can connect to TMs for your project, and it will display 100% and fuzzy matches to the writer. It also gives the writers insight into the way that changes they make in their English content affect the localization costs.” This pilot project indicates that translation cost savings can be achieved using TM, but at a cost in labor and time. With usage, writers would become more proficient with the system and save time. Your company would have to be ready to absorb license and time costs. If you are going through a major restructuring of your documentation, perhaps upgrading to XML, this might be the perfect time to examine your documentation in detail with your translation costs in mind. One benefit I found was that while using my TM tool, I was fully focused on reducing word count because I kept translation as my main focus. Word reduction is hard to achieve in normal writing mode because the technical writer is normally not thinking about it. According to Freij, “verbosity is the enemy. It pays to be concise and straight to the point, eliminating unnecessary text when localization is imminent. When writing technical documents, remember that simplicity is also very much desired by the end-user.” It is also important that your TM be as clean as possible. What writers need is a TM tool that runs side-by-side with an authoring application and can semi-automatically offer suggestions on how to better match new text to the existing TM. The development of SDLXAA and Author-it’s new application should give us hope that tools are becoming available to bring technical writers and translators closer together to achieve cost savings by leveraging valuable memory resources. 
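The break-even arithmetic running through the pilot's conclusions can be restated compactly. The figures below come from the article itself; the $50 writer-hour rate is the author's working assumption:

```python
# Pilot figures for the 73 new topics.
writer_hours = 37            # ~30 minutes per topic of pretranslation editing
hourly_rate = 50             # assumed cost per writer hour, $
per_language_savings = 625   # rounded outsourcing savings for one language, $
languages = 12               # number of target languages

labor_cost = writer_hours * hourly_rate                # $1,850 of editing time
one_language_net = per_language_savings - labor_cost   # negative: labor ~3x savings
all_languages_net = per_language_savings * languages - labor_cost  # ~$5,650
```

In a single language the editing labor swamps the savings; only multiplying the reuse across all twelve languages makes the effort pay off, which matches the article's conclusion.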
<table> <thead> <tr> <th></th> <th>Outsourced localization cost savings</th> <th>SolidWorks translation team time savings</th> </tr> </thead> <tbody> <tr> <td>New</td> <td>$12,200 to $48,800</td> <td>n/a</td> </tr> <tr> <td>Changed</td> <td>n/a</td> <td>20.6% to 22.8%</td> </tr> </tbody> </table> Table 5: Annual estimated savings if Trados is implemented for all new and changed documentation.

October/November 2009 • www.multilingual.com/gsg

Elements of Style For Machine Translation Ken Clark We have entered the Machine Translation Age. Demand for human translation is still increasing dramatically—or was until this year—but the vast majority of the world’s translation is now done by computer. And the vast majority of machine translation (MT) transactions are completed using free online translation tools such as Babel Fish or Google. The result usually leaves much to be desired, and there’s not much you can do about it when you are translating someone else’s content, particularly if you don’t understand the source language. But you can dramatically improve translation of content you write yourself and share with others in a foreign language, without using the special software and workflows of powerhouse automated translation systems. Just a few simple writing tricks can make a dramatic difference in MT quality. It’s not controlled language, but language control. Writing clearly, whether for man or machine, is always a struggle (at least for me), and the dim machine minds of the translation tools are unforgiving when it comes to bad composition in source.
Unlike us humans, MT tools have no sense of context, no appreciation of an author’s intent and definitely no sense of humor. With Strunk and White’s The Elements of Style as inspiration, here’s an abbreviated guide to good English style for improved MT. - Use short sentences. Keep it simple. Cut the clauses. Ditch the sentence fragments. Simple sentences and grammatical structure (subject-verb-object) are the only way to go. - Avoid ambiguity, as in “I saw her duck.” Well, which is it? A quacking duck that belongs to her? Or was she avoiding a flying object? Look for multiple meanings when proofing. Good luck. If you don’t find it, your MT tool may just do it for you. - Remove extra words. Editing out unessential phrases and extra words will make for a simpler, better translation. With fewer translation variables for the algorithms to wrestle with, the translation will also be more accurate, and fewer words make for better style. - Don’t remove necessary words, and don’t go too far with editing. In English we drop a lot of words when we write, especially when writing informally. Keep those articles, prepositions, pronouns and so on where the machine can find them. English speakers are able to fill in the blanks and fully understand—not so when the reader is a translation engine. - Misspelling does not compute. A misspelled word will not translate—end of story, end of translation. - Ditto on punctuation. One accidental period can completely change the meaning of a sentence and trash your translation. Spell-checking and proofreading after you write and before you translate are pretty basic quality assurance steps. - Slang is so like, whatever. No slang and no jokes for MT. Irony is the first thing lost in MT. Stay earnest and formal. That’s why pithy headlines and snappy newspaper copy so often translate badly with these tools.
Rule of thumb: Good MT style comes in one flavor . . . plain vanilla. - Use “Do not translate” coding. Some MT tools will allow you to place code around a word or phrase, which allows the word to pass through the engine without getting translated. - Check your translation. So after closely studying and applying all these rules before translation, how can you know if your MT output makes any sense? Translate the output back using an MT tool. That reverse translation may help you to spot the most glaring errors. Recast those problem sentences in English and see if the back translation gets any clearer. Don’t expect miracles here. But it may be some comfort to know that the original translation is better than the back translation. - Keep source and target together. No garbage in the MT tool, less garbage out. But garbage there will be. That’s why we like to keep a copy of the source with the target translation to create a bilingual output so that those errors can be spotted and corrected later if need be. - Identify MT. Avoid blame by giving credit. Letting people know that you used a machine to communicate with them allows them to read with caution, and keeps them from feeling they’ve been short-changed on a real translation. On the writers’ craft Using a little bit of discipline to prepare content for MT extends the functionality of these tools for people busily engaged with others in multiple languages. Writing for MT, just like writing for human translation, is good writing practice. Translation has a way of highlighting communication errors that are invisible or ignored in just a single language. The simplicity and clarity of expression demanded by MT tools would meet the approval of Strunk and White, I hope. I’ve still got a dog-eared, ratty, old copy on my desk, where it shall remain. Optimized MT for Higher Translation Quality Lori Thicke Everyone knows machine translation (MT) has enormous potential for dramatically reducing translation cost and increasing speed. But who thinks of MT as a way to improve quality?
ISO 9001-certified for the last decade, my company’s quest for quality has unexpectedly led us to MT. Along the way we’ve developed and tested a number of different processes for MT and discovered that correctly optimized MT can actually improve quality — and for less cost and with higher rates of productivity. Under the right conditions, MT actually breaks those compromises we’ve come to accept in the traditional localization paradigm. You may want price, speed and quality, but here’s the kicker: you only get to pick two out of three. MT can offer all three. However, the truth is that for most people, quality MT is still an oxymoron. And who could blame them? MT: always five years from perfection Just about any of us with an internet connection has had first-hand experience with MT. We have probably used SYSTRAN to translate an e-mail or ProMT to give us the gist of a web page. We may have conversed with someone in another language via Google’s translation center or, more recently, read Wikipedia in Thai thanks to Asia Online. Along the way, MT will have amused us with its inadvertent twisting of human language.
Most people would agree that “out-of-the-box” MT is far from what it is supposed to be: fully automatic quality translation (FAQT). This has been the promise held out to our industry since the very first MT system translated 49 Russian sentences into English using a 250-word vocabulary and six grammar rules. Fifty years later we’re still waiting. As Hans Fenstermacher of Translations.com says, “MT has been five years from perfection since 1952.” It could be that our overwrought expectations for MT partially explain the slow uptake of MT by the translation industry. Against the benchmark of FAQT, MT is sure to disappoint. For those resigned to the lack of quality with unoptimized MT, there’s always the unfortunately named FAUT (fully automatic useful translation). FAUT is essentially “gisting” translation, which is a more or less accurate approximation of the source text. Today, gisting is overwhelmingly the use to which MT is being applied and accounts for even more words translated than by humans. If the claim that MT translates more than humans seems outrageous, consider that an estimated 30 million e-mails are translated by MT every day. For internauts, instantaneous gisting (gist-in-time) provides a basic understanding of an e-mail or a website. In the corporate space, gisting is used for legal discovery, for patent or technology searches, or to identify parts of larger corpora that merit being translated by a human. But how much gisting do we humans really need? Not much, as it turns out: for all the profusion of free, software-as-a-service and off-the-shelf MT solutions, commercial translations, which need more than gisting quality, are by and large assured by humans. For the vast majority of corporate needs, MT is staying on the shelf. If FAQT is still “five years away” and FAUT is simply not that useful after all, the question is whether to wait for MT to catch up to our aspirations for it or to invest in processes that can optimize the MT we have today. 
How MT improves quality Once we stop waiting for quality MT to emerge fully clothed from the loins of a research and development lab somewhere, we can start to see MT for what it is: an efficient solution that can assist human translators by taking out a large part of the drudgery in translation. The reality we are seeing every day is that for technical translations ranging from software to manuals to catalogs, quality MT is achievable. But like any relationship, you have to work at it. In fact, correctly optimized MT — that’s the “working at it” part — paired with human post-editors can actually improve quality. How could this be possible? In the first place, correctly customized MT (customizing MT engines is a skill in itself) removes terminological inconsistencies. If the source document always uses the same term, so will the MT engine. This resolves the real problem of teams of translators working on the same project but employing divergent terminology. Across a large project, MT can also ensure a more consistent tone, with fewer stylistic discrepancies. Furthermore, MT removes that human element of non-quality: omissions. Enforced, validated terminology, consistency and completeness are MT’s strengths. But what about mistranslations? There’s no question that MT delivers more than its fair share of sentences that mangle the meaning of the source text. This is where the post-editors come in. Working in a bitext format, a post-editor correcting MT output will frequently scrutinize texts more carefully than a reviewer working on human output. On large-volume localization projects, T + E + P (translate + edit + proof) as a process may be interpreted differently by different language service providers. T + E + P on a million-word project may consist of T + a sampling review of 20-20. The source text may or may not be consulted at the same time. MT affords you no such luxury.
Because MT can and does go completely off the rails from time to time, each and every segment must be examined in bitext format and approved or rewritten by a human post-editor. If only every translation received that type of attention! This process for review and correction, if properly managed, should not only catch and fix the errors, but should also yield an accounting of what changes need to be made to the MT engine itself. This goes to the heart of any good quality system, such as ISO 9001: ensuring quality at the source — that is, catching errors at the beginning rather than correcting them downstream — and, crucially, instituting processes for continuous improvement. Correcting systematic errors and then feeding these corrections back into the MT engine is what we call “the Virtuous Circle of MT Quality.” This, too, is an integral part of the optimization process. What quality do you need? But what quality is good enough? Any good process defines its quality expectations up front, and working with MT is no exception. MT quality has been measured by the wrong yardsticks to the detriment of the elegant solution that MT can be when matched to the type of result needed. The question is not whether MT is “better” than a human translation on a given text. Rather, the question is what quality is necessary for a particular project and what process — human only, human + translation memory (TM), human + TM + MT — will best allow you to achieve that exact level of quality. The 2008 version of the ISO 9001 standard introduces the idea of customer-defined quality to the international norm. This is an important distinction to make. Accuracy, consistency of style, correct terminology, spelling and punctuation, and completeness are all inarguably elements of a quality translation. But how much quality is required for a given situation? “Doesn’t read like a translation,” for example, is the type of quality that a marketing translation would need to have in buckets.
We may not have a specific metric for defining marketing quality, but we sure know when it’s not there! But what about software? A catalog? E-learning courseware? A knowledge base? This is where the quality question begins to get more nuanced. For software, quality may be defined as accurate, understandable and rapid enough for simship. For a catalog, correct terminology on each of thousands of items is primordial. For courseware, the material needs to promote learning. For a knowledge base, customers need to be able to resolve their problems without further recourse to the help desk staff. Since MT allows you to calibrate the human effort (linguistic training, post-editing) that you put into achieving the quality levels you need, setting quality requirements in advance is an essential step. The example of online help and knowledge bases above demonstrates the importance of customer-defined quality. It’s well known that human reviewers will often designate only extremely high quality as acceptable. However, when the choice is between an imperfect translation and no translation (information available only in the original language), customers themselves weigh in heavily in favor of raw — that is, fully automatic — MT. Don DePalma of the Common Sense Advisory says, “Whether it’s FAQT, FAUT, or perfectly rendered output, the biggest decision that companies will have to make about machine translation is whether any of those are a worse alternative than no translation at all. Given the enormous volumes of content that companies and government should make available for other markets, for me and many of the organizations that we talk to, the quality question is ultimately a non-issue.
What we call the ‘zero translation’ option of doing nothing means no information, service or customer satisfaction at all.” Customers also report that support articles translated by MT are just about as effective in solving their problems as human localized content and at a price far below what human translations would cost. This is not about depriving translators of work. Human translations would not have been economically feasible for the hundreds of thousands of knowledge base articles in various languages — including Chinese, Japanese, Portuguese, French, German and Spanish — that Microsoft publishes online. This would have required an initial outlay of approximately $30 million per language, according to Microsoft itself, not including weekly updates. Instead, Microsoft chose its own hybrid MT system to translate content that would otherwise not have been translated. Measuring the results, the company found that across all languages, MT helped solve customer problems on average 23% of the time. This figure may seem low, but it’s only slightly below the success rate of 29% for human translation. Microsoft concluded at a presentation to the 11th Machine Translation Summit in Copenhagen, Denmark, in September 2007 that “customer satisfaction numbers for machine translated articles is comparable to and sometimes exceeds original English!” Optimizing MT Regardless of the quality level MT is to achieve — publishable quality or simply understandable quality — unoptimized MT is just not up to the job. While some sentences coming out of untrained MT engines may be stunningly good, others will be pure gibberish. And without effective training, there is no way to ensure that the terminology you want will be consistently applied by the MT engine. Training, then, is the secret sauce of good MT, even more important than what system you choose, whether rule-based or statistical (see sidebar). This is also one of the areas that requires the greatest investment.
For statistical machine translation (SMT) systems, this training involves not only extensive corpora of bitext (think in terms of millions of segments), but also glossaries and monolingual texts. The more the better. Imagine Steven Spielberg’s little alien, ET, saying “Need more data.” That’s SMT in a nutshell. Rule-based versus statistical MT There are two major streams in MT technology: rule-based MT (RBMT) and statistical MT (SMT). These two methods, espoused by various MT technology vendors, represent two different routes to the same place. The earliest systems were rule-based, among them SYSTRAN. For the development of RBMT systems (SYSTRAN, ProMT, Lucy), various languages were broken down into their parts of speech and grammatical rules were hard coded, along with dictionaries. An RBMT system would never say un noir chat but un chat noir, coded, as it is, with the knowledge that adjectives follow nouns in French. Exceptions such as une vieille dame would also be coded in the system. SMT, on the other hand (Google, Asia Online), uses an algorithm to parse vast numbers of bilingual sentences (preferably in the millions) in order to extrapolate relationships, including word order. Un chat noir would appear as the translation of a black cat if it had seen that in the training phase. However, blissfully ignorant of the rules of grammar (with the exception of Asia Online), SMT would be likely to incorrectly translate a green cat as un vert chat because it wouldn’t have encountered any green cats — unless trained on Dr. Seuss. Both RBMT and SMT systems have their advantages and disadvantages. Both are capable of delivering accurate, fluid sentences, depending on how they were trained. Both can also deliver utter gibberish — again, depending on how they were trained. RBMT wins the day when you don’t have millions of words of training corpora; SMT is the victor when it comes to adding a new language pair, a major multiyear undertaking when preparing an RBMT system. 
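The statistical idea in the sidebar can be illustrated with a toy sketch: count source/target word co-occurrences across a bitext and normalize by target-word frequency to guess a translation. Real SMT uses alignment models and phrase tables over millions of segments; this is only the intuition, on invented data:

```python
from collections import Counter, defaultdict

# Tiny invented bitext; real SMT training corpora run to millions of segments.
bitext = [
    ("a black cat", "un chat noir"),
    ("a black dog", "un chien noir"),
    ("a green cat", "un chat vert"),
]

# Count how often each target word co-occurs with each source word.
cooc = defaultdict(Counter)
for src, tgt in bitext:
    for s in src.split():
        for t in tgt.split():
            cooc[s][t] += 1

# Overall target-word frequencies, to discount common words like "un".
tgt_count = Counter()
for _, tgt in bitext:
    tgt_count.update(tgt.split())

def best_translation(src_word):
    """Pick the target word with the highest co-occurrence ratio."""
    scores = {t: c / tgt_count[t] for t, c in cooc[src_word].items()}
    return max(scores, key=scores.get)

print(best_translation("cat"))    # chat
print(best_translation("black"))  # noir
```

Note that nothing here knows French word order, which is exactly why untrained or under-trained SMT can produce *un vert chat*.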
Hybrid systems such as SYSTRAN’s are capable of bridging the gap between RBMT and SMT. Testing will provide information on the level of fuzzy match that should be discarded in favor of MT segments. However, it’s usually useful to make sure that new MT segments are identified as such to distinguish them from validated TM segments. Long, convoluted sentences do not lend themselves to MT, no matter how well trained the system is. The capacity of MT to function as a standalone will depend on the quality required and on how well the engine is optimized through stringent training, ongoing maintenance, controlled authoring and so on. But for publishable quality, human post-editors are essential. In this regard, MT can be seen as just another tool in the translator’s toolkit, much like any CAT tool, albeit one that’s more complex and expensive to set up. In optimizing MT, post-editors need to be trained in post-editing techniques, and they need to know what level of quality is expected. Besides post-editing, other post-production optimization techniques include use of QA tools, automatic post-editing through regular expressions, text normalization, updating of the TMs and so on. And above all, it is essential that there be ongoing tuning of the engine with new and modified terminology and error corrections in a continuous, virtuous cycle of feedback and improvement. If all these processes, from pre-production to post-production, are instituted to optimize MT output, what kind of quality can be expected? Recently one of our clients, a major software publisher, noted in the report “Leveraging a crisis for innovation (or never let a good crisis go to waste)” that “contrary to all expectations, using MT in [our company] has improved the translation quality . . . with the reviewer commenting ‘It was nearly 9 — it was the best translation of coursework I ever read.’” It has long been believed that buyers of translation services must compromise. 
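The “automatic post-editing through regular expressions” mentioned above can be sketched as a pipeline of find-and-replace rules, each one correcting a systematic error spotted during review. The rules below are invented examples, not drawn from any real engine:

```python
import re

# Each rule fixes one recurring, systematic MT error found in review.
# These rules are illustrative examples only.
POSTEDIT_RULES = [
    (re.compile(r"\bthe the\b"), "the"),   # doubled article
    (re.compile(r"\s+([,.;:])"), r"\1"),   # stray space before punctuation
    (re.compile(r"\bdatas\b"), "data"),    # known recurring mistranslation
]

def postedit(text):
    """Apply each automatic post-editing rule in order."""
    for pattern, repl in POSTEDIT_RULES:
        text = pattern.sub(repl, text)
    return text

print(postedit("the the datas are stored , then processed ."))
# the data are stored, then processed.
```

Rules like these feed naturally into the virtuous-circle idea: once a correction is systematic enough to automate, it is also a candidate for fixing in the engine itself.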
In the traditional localization paradigm, if you want speed and quality, you have to compromise on price; if you want speed and price, you have to compromise on quality. MT is often associated with a compromise of quality in favor of cost and turnaround improvements. However, the reality is that correctly optimized MT can break these compromises by offering faster throughput, lower costs and higher quality. But you have to work at it. CONTROLLED AUTHORING TO IMPROVE LOCALIZATION ULTAN Ó BROIN Controlled authoring, broadly speaking, is the process of applying a set of predefined style, grammar and punctuation rules and approved terminology to content (documentation or software) during its development. Many companies offer some form of guidance to their content developers, either through tools or more ad hoc means, of course, so this may not seem at all remarkable. In the last few years, however, innovations in linguistic processing technology and its commoditization indicate that controlled authoring holds great potential for anyone seeking a tool-driven approach to maximizing returns from the localization process. This has particularly paved the way for the adoption of cost-effective machine translation (MT). Controlled authoring and languages are complex, so this article concentrates on the localization-related aspects of introducing controlled authoring into an organization that must localize its content. Controlled authoring itself is frequently conflated with other parts of the overall content development process, notably that of content management, a separate but contributing function. Isolating the non-technical essence of controlled authoring is made all the more difficult by the range and interplay of tool functionality offered by various vendors.
Whereas seemingly subtle distinctions do not always make a great deal of sense from an overall business process engineering viewpoint, it’s important to understand from a localization perspective just how controlled authoring technology works. For example, if the storage of objects in the content management system (CMS) allows reuse at a level higher than the translation memory (TM) segmentation does, localization saves are limited. It may be more helpful from a business requirements position to regard controlled authoring as an information quality process that consists of many different parts: data mining for rule and terminology research and creation, new terminology harvesting and rule development, reuse management, reporting on quality, and so on rather than purely approved rule and term application during the actual text-editing phase. Controlled languages It’s not uncommon for organizations to have no serious control over their content style rules and terminology or to rely on manual processes, combining in-house guidelines with the commonly applicable rules and recommendations of sources such as The Chicago Manual of Style, while working off spreadsheets of terms and applying simple checks for consistency and using human editing to meet their “quality” requirements. For some this is acceptable; however, it is hardly a scalable, enforceable or measurable process. We’ve all seen the waste of many possible opportunities for localization efficiencies — let’s save the content development efficiencies for another audience — because manual enforcement and voluntary uptake of authoring guidelines allow for a good deal of subjectivity in interpretation and application. Controlled authoring is much more objective as the selection, application and enforcement of such guidance is programmatic. 
The application of rules “controls” the authoring, so to speak, allowing content developers to avail of the rules directly through the authoring user interface: looking up alternative words, phrases and terms, reusing already written phrases, harvesting and storing new ones, and reporting on the content’s compliance with the rules immediately or afterwards. The origins of the controlled language concept are far from the needs of modern-day localization, rather being designed to improve comprehension of the source language by simplifying matters for nonnative speakers of English (“human orientation”) or computers (“machine orientation”). Often, these nonnative readers worked in the maintenance and service field, and were targeted by probably the best-known iteration of a controlled language: ASD-STE100 Simplified Technical English. The genesis of the controlled language concept can be traced back to Ogden’s Basic English from the 1930s and established over the years through such developments as Caterpillar Technical English, Nortel Standard English, the Plain English Campaign, GM’s Controlled Automotive Service Language, Global English and so on. The introduction of structured authoring through SGML and later XML, along with more innovations in linguistic processing and database storage, allowed for the development and application of targeted rules to meet customer requirements driven by content type and market, reflected by the ability to now apply a controlled authoring process through common authoring tools such as Microsoft Word, PTC Arbortext Editor and Adobe FrameMaker through plug-ins.
The use of an approved set of terminology, where each term has only one meaning in that context — consider the different translations for the out-of-context word job, for example — and clear and enforceable authoring rules allow writers to achieve a high degree of consistency in the source texts they create, not only in the words and terms they use, but how they use them. Consistency in constructing phrases, along with eliminating complexity, ambiguity and verbosity, is the key to maximizing TM use and MT potential (and large efficiencies on the production side). What might these controlled language rules entail? Well, the number can vary from as few as 10 to between 50 and 100, but typically they might relate to standardized spelling, length of sentence, number of clauses, use of active versus passive, simplifying tenses, rules for noun phrases, modifiers, syntactic cues, past participles, gerunds, avoidance of Latin phrases, slang and so on. I recommend John R. Kohl’s The Global English Style Guide if you need a valuable starting point and reference material for possible rules as well as Sharon O’Brien’s “Controlling Controlled English” paper (www.mt-archive.info/CLT-2003-Obrien.pdf) for recommendations on the rules central to content intended for MT. Naturally, the rules vary by content type and audience. Gerunds may be acceptable in headings, but not main text without qualification, delimiters may not be required...
<table>
<thead>
<tr> <th>Feature</th> <th>Solution 1</th> <th>Solution 2</th> <th>Solution 3</th> <th>Weighting</th> </tr>
</thead>
<tbody>
<tr> <td>Price</td> <td></td> <td></td> <td></td> <td></td> </tr>
<tr> <td>NLP-level verification of terms, grammar and style according to our requirements</td> <td></td> <td></td> <td></td> <td></td> </tr>
<tr> <td>Prompting of writer to reuse existing segments from CMS</td> <td></td> <td></td> <td></td> <td></td> </tr>
<tr> <td>Scalability (multiple users, concurrent users, performance)</td> <td></td> <td></td> <td></td> <td></td> </tr>
<tr> <td>Easy maintenance of rules by existing, in-house resources</td> <td></td> <td></td> <td></td> <td></td> </tr>
<tr> <td>Percentage of existing rules from style guide that can be automated</td> <td></td> <td></td> <td></td> <td></td> </tr>
<tr> <td>Integration with existing translation glossaries and exchange formats</td> <td></td> <td></td> <td></td> <td></td> </tr>
<tr> <td>New terminology harvesting</td> <td></td> <td></td> <td></td> <td></td> </tr>
<tr> <td>Basic rule set supports translation memory requirements</td> <td></td> <td></td> <td></td> <td></td> </tr>
<tr> <td>Basic rule set supports machine translation readiness</td> <td></td> <td></td> <td></td> <td></td> </tr>
<tr> <td>Automatic reporting on quality in batch and single mode</td> <td></td> <td></td> <td></td> <td></td> </tr>
<tr> <td>Interactive quality assurance through editing environment</td> <td></td> <td></td> <td></td> <td></td> </tr>
<tr> <td>Allows prioritization of rules for grandfathering of content</td> <td></td> <td></td> <td></td> <td></td> </tr>
<tr> <td>Supports multiple rules and terms by content type</td> <td></td> <td></td> <td></td> <td></td> </tr>
<tr> <td>Plug-ins and integrations for existing authoring tools</td> <td></td> <td></td> <td></td> <td></td> </tr>
<tr> <td>Automatic indexing capability</td> <td></td> <td></td> <td></td> <td></td> </tr>
<tr> <td>Customer references include MT and TM savings</td> <td></td> <td></td> <td></td> <td></td> </tr>
<tr>
<td>Open standards or proprietary architecture</td> <td></td> <td></td> <td></td> <td></td> </tr> <tr> <td>Established user group and conference</td> <td></td> <td></td> <td></td> <td></td> </tr> <tr> <td>Global 24 x 7 support</td> <td></td> <td></td> <td></td> <td></td> </tr> </tbody> </table> Figure 1: Business requirements can be weighed against a variety of solutions. on software strings but are required on messages or documentation, and so on. Thus, solutions that allow for forms of semantic checking have an advantage. **Controlled authoring solution: which one?** Commercially available controlled authoring technology solutions range from the more sophisticated, scalable technology-based solutions based on advanced linguistic processing to less complex content management-based offerings, “methodologies” and combinations of same. Possible options include acrolinx IQ, IAI CLAT, Author-it, Smart MAXit, Boeing Simplified English Checker, SDL AuthorAssistant, Shufra and more. For those interested in researching a controlled authoring option, possible sources of information are IDC reports, ELDA, *International Journal of Language and Documentation*, CLAW proceedings, Localisation Research Centre publications, DCU papers from Sharon O’Brien and MA research by Patrick Cadwell (www.localization.ie/resources/Awards/Theses/PatrickCadwell_Thesis.pdf), the publications of Jeff Allen (www.geocities.com/controlled_language), the MT, Localization Professional, and Information Quality groups on LinkedIn, and so on. The decision as to which controlled authoring solution to adopt is driven by business requirements. Localization teams should ensure they’re involved in the identification of these, so come armed with facts and figures for the business case. Initial business requirements when applied to a range of solution possibilities might look something like Figure 1. Requirements vary by organization, naturally. 
Prioritize and weight each point before making a decision among competing alternatives. **Benefits to localization** The clear benefits of controlled authoring in the localization space are derived from the improved source quality making content easier to translate by humans and machine. It’s a fundamental recognition that the basic internationalization concept of assuring translatability and high-quality source content results in greater savings accruing to the organization at the localization stage than trying to continually negotiate lower prices with vendors or praying for a quantum leap in translation technology to turn garbage source into localized gospel. Efficiencies are magnified in a one-to-many relationship as the number of languages translated increases. Controlled authored content is consistently expressed in an understandable way. This results in translators not needing clarifications, maximizing TM matches, eliminating the need for terminology creation after localization starts, and providing texts more easily processed by MT, cutting down on post-editing needs and recalibrations. Volumes too, are generally smaller, reducing cost and time-to-localize per se. Bear in mind, however, that these savings are a function of the rules created, as well as how and when the texts are translated. Overarching internationalization rules also impact the efficiencies as well as the technical review of the source text by domain experts. If a switch should be documented as being “off” instead of “on,” then don’t expect controlled authoring to eliminate any language version testing issues. Do you need controlled authoring technology in order to use MT? The simple answer is “no.” But if you need a scalable approach to ensure your source text meets realistic MT business requirements by providing easily processed source text that minimizes the need for post-editing, thus making MT cost-effective, then controlled authoring technology is a must-have. 
Moving past the “writing for translation guidelines” approach is the way to go here. **The business case for controlled authoring: the big picture** It’s often said that the biggest risk to the introduction of controlled authoring, other than the cost (nontrivial even at the best of times), is political. The term controlled authoring itself must be found guilty on all counts of contributing to the problem of user acceptance as it conjures up images of mass layoffs, stilted, boring texts, loss of control by authors, inducing an immediate negative reaction, mostly based on understandable fear and ignorance, frequently exacerbated by a belief... that controlled authoring can somehow offer automatic creation of content, and a narrow focus on just localization benefits. There are ways of dealing with these issues too, beyond the scope of this article. In general then, beyond the clear hard-sell on the TM and MT front, localization cost and time-to-market savings, localization teams can emphasize the quality aspects of the source content for native speakers too — superior user experience, consistent terminology, fewer support calls, improved accessibility and so on. Leverage the global user experience, not just the localized one. It should be pointed out there are controlled authoring solutions for Japanese, German and so on, so do not assume it is an English-only concept now, whatever the origins. **Introduction and changing processes: localization's role** Introduction of controlled authoring requires a serious management decision as to timing, not least the provision of a significant budget. However, research would indicate that using pilot projects to develop the process as well as achieve maximum buy-in by the stakeholders in the process is key, as well as using training techniques that rely less on computer science and linguistics but more on content development approaches.
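Some of the controlled-language rules discussed earlier lend themselves to automation. A minimal, heuristic sketch of two automatable checks (maximum sentence length and a crude passive-voice pattern); real NLP-based checkers are far more accurate, and the 20-word limit is an invented example threshold:

```python
import re

# Example threshold; real style guides set their own limit.
MAX_WORDS = 20

# Crude passive-voice heuristic: a form of "to be" followed by a word
# ending in -ed/-en. Illustrative only; NLP-based checkers do much better.
PASSIVE = re.compile(r"\b(is|are|was|were|been|being|be)\s+\w+(ed|en)\b", re.I)

def check(sentence):
    """Return a list of controlled-language issues found in the sentence."""
    issues = []
    if len(sentence.split()) > MAX_WORDS:
        issues.append("sentence too long")
    if PASSIVE.search(sentence):
        issues.append("possible passive voice")
    return issues

print(check("The file is saved by the system."))  # ['possible passive voice']
```

A rule like “one strong idea per sentence,” by contrast, has no such mechanical test, which is exactly the automatable/non-automatable distinction the article draws.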
In general, localization groups might consider the following when faced with the opportunity to introduce controlled authoring: - Identify a localization strategy for TM and MT tools and how controlled authoring business requirements fit into this. - Help kick-start the controlled authoring process of adoption and pilot projects by providing rules and terminology already harvested to the implementers of the technology. - As the creation of rules and terminology is central to controlled authoring and to the impact on localization tools, it is critical that localization groups remain visible and active as stakeholders in their development and maintenance over time. - Recognize the best kind of texts — large volumes of structured, technical, procedural texts such as software and user assistance strings or online documentation. These texts require a consistent user experience between components. Seeking a controlled authoring solution for a few thousand words of marketing material would not be a strong business case! - Prioritize rules. Decide which ones are more important to you than others. Aim for automatable and therefore measurable ones. For example, a rule called "one strong idea per sentence" is not automatable, whereas repeating the noun instead of switching it for a pronoun or checking for the passive voice is. - Look for leverage points between localization and authoring teams. Many rules for localization maximization are obviously ones that should be applied to text even if never intended for localization in the first place. Other, more "severe" MT rules may not be optimal for the source language depending on the user experience required. For example, text intended for mobile applications may be fragmented, clipped, dropping articles and so on for user experience reasons. Be prepared for compromise. Err on the side of user experience trumping localization unless it’s a complete showstopper.
- One particular challenge to the introduction of controlled authoring can come from localization groups themselves — the disruption of TM match rates for previously localized content. This requires careful management. Solutions include the introduction of controlled authoring on new content yet to be localized, phased introductions based on content that is going to change anyway, grandfathering of content that has shown little change over years, or a reassessment as to how a one-time hit on localization assets results in longer term cost savings, time-to-market improvements and quality uptake. - Localization group input to the rule creation process must be matched by an evaluation of the localized source output, too, iteratively maximizing returns through the fine-tuning of rules. An MT pilot makes a fine adjunct to a controlled authoring pilot. Provide content development teams with the feedback, qualitative and quantitative. Acknowledgements Publicly available sources from the following were used in this article: Patrick Cadwell (DCU), Sharon O’Brien (DCU), Jeff Allen (Translations.com), Uwe Muegge (Muegge.cc), John Kohl (SAS), Andres Heuberger (ForeignExchange Translations) and Fred Hollowood (Symantec). Creating a Dialogue with the World Our network of 500+ skilled professionals, working in over 50 world languages and numerous areas of expertise, provides you with the precision and accuracy needed in today’s global marketplace. - Interpretation and translation services - Competitive pricing - 50+ languages - Free, no-obligation estimates Tennessee Foreign Language Institute Nashville, Tennessee USA info@tfli.org • www.tfli.org Translation Services into 70 Languages We provide translation services into 70 languages using the most modern technology for clients throughout the whole world.
- CEET provides translation, proofreading, localization, DTP, interpreting, voice-over and cultural consulting in all major world languages with special expertise in Central and Eastern European languages. - We approach all projects with respect to customers’ needs and the cultural uniqueness of each country because we believe the language of your firm communicates who you are to the audience. SDL TRADOS Technologies SDL TRADOS Technologies is a division of SDL, the world’s largest provider of technology solutions for global information management (GIM), which benefit corporations and institutions, language service providers and freelance translators worldwide. SDL has over 170,000 software licenses deployed across the translation supply chain and has demonstrated proven ROI in over 500 enterprise solution installations. SDL delivers innovative software products that accelerate global content delivery and maximize language translation productivity. SDL Berkshire, UK www.lspzone.com CEET Ltd. Prague, Czech Republic info@ceet.eu • www.ceet.eu SDL Berkshire, UK www.lspzone.com Clear Words Translations Córdoba, Argentina info@clearwordstranslations.com www.clearwordstranslations.com Clarity and Efficiency With a vast network of professionals worldwide, we provide reliable, customized language solutions in Spanish and Brazilian Portuguese. Our services include localization, translation, interpreting, desktop publishing, and project management solutions that enable our clients to increase revenue and create effective communication channels with their audiences. Our adherence to on-time deliveries, fair pricing and fast turnaround makes us a language service provider our clients can trust. You are kindly invited to find out how you can benefit from our services. 
Save Translation Cost with HyperSTE HyperSTE is the leading quality assurance software for standardized documentation. Benefits include: - Up to 30% in cost savings on translation and localization - Up to 40% in reduced word count - Quality improvement in writing and translations - Quality assurance and quality measurement for content - Up to 30% in reduced product cycle time - Up to 40% reduction in overall documentation cost - Improved safety and customer service - Facilitates DITA, S1000D, SCORM, CMS and XML Tedopres International, Inc. Austin, Texas USA ste@tedopres.com • www.simplifiedenglish.net TermNet – International Network for Terminology TermNet, the International Network for Terminology, is a forum for companies, associations and universities that engage in terminology. Terminology is considered and promoted by TermNet as an integral and quality-assuring part of any product and service in the areas of - information and communication - classification and categorization - translation and localization If you would like to join the international community, please visit www.termnet.org and contribute to our blog. TermNet – International Network for Terminology Vienna, Austria termnet@termnet.org • www.termnet.org 
BRIDGING THE GAP BETWEEN AUTHOR AND TRANSLATOR Combined with MadCap’s industry-leading authoring and multimedia applications, MadCap offers the most powerful integrated authoring and localization workflow available, allowing for maximum translation reuse, reduced project cycles and costs, and vastly improved time to market. - Streamline and manage the entire translation process regardless of the TM tool being used - New Project Packager takes the guesswork out of what needs to be translated. Receive and send all necessary translation candidates (including multimedia elements) in a single ZIP - Statistical reports show detailed information on translation status including what has/has not been translated, how many words/segments translated, etc. - Seamless import of content created in Flare, DITA, Microsoft Word, RTF, TXT, XHTML, HTML and more - Built-in translation memory and full TMX support - Integrated terminology database - And much more... Download your free 30-day trials NOW! www.madcapsoftware.com | +1 (858) 320-0387 This guide is a component of the magazine *MultiLingual*. The ever-growing ease of international access to information, services and goods underscores the importance of language and culture awareness. What issues are involved in reaching an international audience? Are there technologies to help? Who provides services in this area? Where do I start? Savvy people in today’s world use *MultiLingual* to answer these questions and to help them discover what other questions they should be asking. *MultiLingual*’s eight issues a year are filled with news, technical developments and language information for people who are interested in the role of language, technology and translation in our twenty-first-century world. A ninth issue, the Resource Directory and Index, provides listings of companies in the language industry and an index to the previous year’s content. 
Two issues each year include *Getting Started Guides* such as this one, which are primers for moving into new territories both geographically and professionally. The magazine itself covers a multitude of topics. **Translation** How are translation tools changing the art and science of communicating ideas and information between speakers of different languages? Translators are vital to the development of international and localized software. Those who specialize in technical documents, such as manuals for computer hardware and software, industrial equipment and medical products, use sophisticated tools along with professional expertise to translate complex text clearly and precisely. Translators and people who use translation services track new developments through articles and news items in *MultiLingual*. **Language technology** From multiple keyboard layouts and input methods to Unicode-enabled operating systems, language-specific encodings, systems that recognize your handwriting or your speech in any language — language technology is changing day by day. And this technology is also changing the way in which people communicate on a personal level — changing the requirements for international software and changing how business is done all over the world. *MultiLingual* is your source for the best information and insight into these developments and how they will affect you and your business. **Global web** Every website is a global website, and even a site designed for one country may require several languages to be effective. Experienced web professionals explain how to create a site that works for users everywhere, how to attract those users to your site and how to keep the site current. Whether you use the internet and worldwide web for e-mail, for purchasing services, for promoting your business or for conducting fully international e-commerce, you’ll benefit from the information and ideas in each issue of *MultiLingual*. 
**Managing content** How do you track all the words and the changes that occur in a multilingual website? How do you know who’s doing what and where? How do you respond to customers and vendors in a prompt manner and in their own languages? The growing and changing field of content management and global management systems (CMS and GMS), customer relations management (CRM) and other management disciplines is increasingly important as systems become more complex. Leaders in the development of these systems explain how they work and how they work together. **Internationalization** Making software ready for the international market requires more than just a good idea. How does an international developer prepare a product for multiple locales? Will the pictures and colors you select for a user interface in France be suitable for users in Brazil? Elements such as date and currency formats sound like simple components, but developers who ignore the many international variants find that their products may be unusable. You’ll find sound ideas and practical help in every issue. **Localization** How can you make your product look and feel as if it were built in another country for users of that language and culture? How do you choose a localization service vendor? Developers and localizers offer their ideas and relate their experiences with practical advice that will save you time and money in your localization projects. **And there’s much more** Authors with in-depth knowledge summarize changes in the language industry and explain its financial side, describe the challenges of computing in various languages, explain and update encoding schemes, and evaluate software and systems. Other articles focus on particular countries or regions; specific languages; translation and localization training programs; the uses of language technology in specific industries — a wide array of current topics from the world of multilingual computing. 
If you are interested in reaching an international audience in the best way possible, you need to read *MultiLingual*. --- Subscribe to *MultiLingual* at www.multilingual.com/subscribe --- October/November 2009 • www.multilingual.com/gsg
UP202 Designing the Right Portal Infrastructure: Lessons Learned and Examples Olaf Beier, SAP Consulting Thomas Hensel, Product Management Disclaimer This presentation outlines our general product direction and should not be relied on in making a purchase decision. This presentation is not subject to your license agreement or any other agreement with SAP. SAP has no obligation to pursue any course of business outlined in this presentation or to develop or release any functionality mentioned in this presentation. This presentation and SAP's strategy and possible future developments are subject to change and may be changed by SAP at any time for any reason without notice. This document is provided without a warranty of any kind, either express or implied, including but not limited to, the implied warranties of merchantability, fitness for a particular purpose, or non-infringement. SAP assumes no responsibility for errors or omissions in this document, except if such damages were caused by SAP intentionally or grossly negligent. 
Learning Objectives As a result of this workshop, you will be able to: - Understand the fundamental portal implementation scenarios and give examples of them - Name functional and technical factors that have an impact on the target architecture and infrastructure of your portal project - Apply best practices for a portal implementation project The following topics are covered by related TechEd sessions: - UP100, SAP NetWeaver Portal: Roadmap for the Next 12 Months - UP108, Accelerated Application Delivery: Enhancing the Performance of Web Applications - UP110, How SAP Uses the SAP NetWeaver Portal as its Corporate Intranet Site - UP200, SAP NetWeaver UI Strategy and Roadmap - UP263, Changing the Look & Feel of the SAP NetWeaver Portal, Hands-on - LCM102, Running a Sizing Project from Blueprint to Upgrade - LCM219, SAP NetWeaver System Landscapes - LCM224, System Landscape Optimization - LCM263, CTS+: One Transport Management System for Every Purpose - LCM265, Designing a Well-Performing Web Infrastructure for an SAP NetWeaver System 1. Overview 1.1. Portal Implementation Scenarios 1.2. Focus Area Corporate Portals 2. User Productivity Infrastructure 2.1. Portal Deployment Options 2.2. Portal Scaling 3. Building the Portal Infrastructure 3.1. Security Aspects, HA, Scheduled Downtimes 3.2. Sizing, Monitoring, Transporting 3.3. Figures from SAP Corporate Portal 4. Summary 4.1. Summary 4.2. Further Information, Notes, Blogs SAP NetWeaver Portal provides end users with a uniform single point of access to the - Applications - Services and - Information they need for their daily work. Integrating portal services into your business applications and processes provides a significant increase of productivity in your day-to-day work. 
Potential Portal Implementation Scenarios: Collaboration Portal, Project Information Portal, Banking Portal, Trading Portal, Partner Portal, Supplier Portal, Department Portal, Team Portal, Consumer Portal, Application Portal, Corporate Intranet Portal, Community Portal, Corporate Extranet Portal, eCommerce Portal, Self-Service Portal, External-Facing Portal, My Personal Portal, Process Portal. Clearly define a portal strategy and roadmap in order to justify investments and be able to show how to leverage the investments. Main Scenario: Corporate Portal Corporate portals form a centralized technology platform as a basis for different types of content. B2B-Focus: - Content - Application access - Collaboration B2C-Focus: - Content - Personalization - Service access - Performance B2E-Focus: - KM - Collaboration - Content management - Corporate identity The portal provides various tools and best practices to support any kind of combination of internal as well as external-facing business scenarios. Empowering and Connecting People User Productivity Infrastructure Expert User Business User UI Clients & Access Channels - Web Dynpro Islands - Enterprise Search Access - Adobe Forms - NW Business Client - Web Browser - SAP GUI - Duet & Atlantic - Mobile & Voice UI Services - Roles - Navigation - Personalization - Document - Page Building - Collaboration - Search - ... UI Infrastructure - Portal Runtime - Web Dynpro - Design Time Tools Details in UP100 Is There a Need to Run More Than One Portal? 
In complex environments there could be the need to operate more than one portal for different reasons: **Business driven** **Business Autonomy** - Organizational units want to have their own portal (e.g. for testing, sensitivity) - Organizational / legal requirements (e.g. portal per org unit, department, project) - Sharing a portal across multiple customers (service providers) **Geographical Distribution** **Service Level Agreements** - Performance: expected response times - Availability: 24x7 - Risk: critical vs. non-critical applications - Tracking and Reporting **Corporate Governance and Guidelines** **Technology Driven** **Platform & Release** - Release version & lifecycle (SP Update) - Hardware, operating system - System landscape (dev, test, prod) - Connections between systems **Security & Policies** - Storage of data and user information - Access permissions - Administration: Configuration, Operations, Monitoring **Technical dependencies** - Release dependencies between applications and portal (e.g. BI, xApps, CE, XRPM, Collaboration Portal, etc.) Portal Deployment Options (Portal Systems View) <table> <thead> <tr> <th>Approach</th> <th>Benefit</th> </tr> </thead> <tbody> <tr> <td>Single Central Portal (1 portal)</td> <td>- Integrating all applications, services and information into one central portal<br> - Centrally governed and administrated portal<br> - Simple landscape setup</td> </tr> <tr> <td>Federated Portal Network (2 .. n portals)</td> <td>- Using FPN mechanism for sharing certain content between multiple portals<br> - Central access to content via consumer portal<br> - Autonomous sub-portals<br> - Independent administration (e.g. release version)</td> </tr> <tr> <td>Separate Portals (2 .. n portals)</td> <td>- Installation of autonomous portals for dedicated scenarios<br> - Full flexibility in administration (e.g. 
release version)<br> - Avoid any dependencies or impacts (security aspects)</td> </tr> </tbody> </table> Details in UP217 Logical Separation in a Central Portal End User / Runtime - Roles-based access - Themes / Desktops - Access Different Backends: Dynamic System Resolution, Destination Mapping Administration - Delegated Administration - User Administration - Companies: define sets of users for delegated user administration - Content Administration: define permissions on PCD level - Namespace prefix: clearly identify objects and assign them to a certain organizational unit - Security Zones: control access to portal components and services in the portal SAP NetWeaver enables you to flexibly scale your portal system across multiple physical hosts or logical system instances: - **Logical (NW Systems / Installation) – vertical scaling** - Portal System - Portal Instance - Server Process - **Physical (hosts) – horizontal scaling** Release: SAP NetWeaver 7.0 - Benefits: - Full enterprise portal capabilities - Flexible installation options - Reliable and stable platform - Installation Options: Enterprise Portal (EP) - Knowledge Management - Collaboration - Visual Composer - Composite Application Framework - .NET PDK (optional) EP Core (EPC) - Portal - Universal Worklist - Guided Procedures Application Server Java Release: SAP NetWeaver CE 7.1 - Benefits: - Lean portal platform - Latest technology standards - Provides new capabilities - Installation Options: Composition Platform - Portal - Guided Procedures - Visual Composer - Composite Application Framework Application Server Java 1. Overview 1.1. Portal Implementation Scenarios 1.2. Focus Area Corporate Portals 2. User Productivity Infrastructure 2.1. Portal Deployment Options 2.2. Portal Scaling 3. Building the Portal Infrastructure 3.1. Security Aspects, HA, Scheduled Downtimes 3.2. Sizing, Monitoring, Transporting 3.3. Figures from SAP Corporate Portal 4. Summary 4.1. Summary 4.2. 
Further Information, Notes, Blogs Guiding Principles 1. Start small, grow over time 2. Always use a top down approach, refining the details in the next iteration 3. Try to drive portal projects with business acumen and not simply as an infrastructure project 4. Try to organize your projects using a pipeline with short/mid/long term targets 5. Treat building the infrastructure as a long-term approach that needs multiple iterations and variations 6. Document your decisions and configuration for later troubleshooting and QA 7. Obtain the Security department's approval Let's try to use these principles in the following business example. Example Business Scenario The company ITelO offers various services to different user groups: - **Anonymous internet users or customers** - Public information about the products, company news and events - Lightweight internet shop applications for selling products - Subscriptions to newsletters - Download areas for manuals or software - **Registered partners and suppliers** - Detailed information about products and contacts - Availability status of purchase orders - Presenting and processing invoices/bills electronically for direct invoicing - Collaboration project workspaces and other B2B scenarios - **Employees** - Corporate information such as news & events, address book, corporate policies and guidelines, strategy and how-to papers - Employee Self-Services for maintaining personal data, planning trips, … - Reporting and approval workflows - Collaboration rooms for colleagues working in worldwide projects How to approach? 
Start using the top down approach to collect the required information: - Which building blocks already exist - Collect application information - Compile a matrix with release dependencies - Start at board level - Try to get a logical big picture - Refine the big picture - Define the system landscape tracks - Draw a system landscape matrix - Use the tracks as input for your transport and change management processes - Define the HA strategy - Define Firewall/LoadBalancer/ReverseProxy configuration and rules - Create hardware procurement lists - Keep track of SSL certificate ordering - Keep track of needed licenses - Put everything together in one document - Approval process - Standard vs. custom coding Refine the Big Picture Define the System Landscape Tracks [Diagram: system landscape tracks (owner: SDS) for the Portal (internal and external tracks), Biller Direct, IPC and Internet Sales systems, each split into Development, Quality Assurance and Production systems.] Put Everything Together in One Document (Example) Sample Impacts of Technical Decisions SSL termination - We decided to terminate SSL at the first Load Balancer in the datacenter for performance and intrusion detection reasons - This requires that all integrated applications support this setup properly - Issues encountered: - some Load Balancers cannot set the HTTP header ClientProtocol with each request; this had to be worked around in a Web Server filter - 
one old application required changing hardcoded protocol strings in BSPs - the overhead of correct certificate handling and Content Switch configurations was difficult for the project team Release Dependencies - Check SAP Notes and Installation Master Guides to investigate forward and backward dependencies of all used components - Quite often Business Packages, for example, are tightly bound to the release of the ERP system, although many exceptions exist, which need to be checked case by case - Issues encountered: - It was possible to use Biller Direct 6.0 together with a 4.7-based ERP, which was supposed to be upgraded soon Summary and Lessons Learned - It’s a long way to get everything in place - Focus on the big picture, don’t be distracted by inflated small issues - Exchange with experienced colleagues or use the various communities to leverage other projects’ experiences (e.g. SDN Forums) - Don’t get stuck in the architecture decision “loop” - Be pragmatic, you only have limited time and budget to finish - Document everything that is important, and be consistent about this - Plan into the future: if things can also be done in another way, your project might go for this sooner or later Customer Portal Business case: public website for internet users, customers and partners (including an area for registered users with access to special content and applications) Applications and Content: - Mostly static HTML content (e.g. 
facilitated by the Web Page Composer and KM framework) - Lightweight applications such as CRM internet sales (Business Package for SAP CRM) - Integration of third party content can be provided through WSRP (external portal content) Network Infrastructure: - Secure infrastructure (use multiple network zones, use an application gateway to protect the portal and applications) - Only very restricted access to the internal backend systems of the inner DMZ, if needed via RFC Portal Infrastructure: - Switchover cluster needed for a high availability setup - Usage types EP Core and EP to leverage full enterprise portal capabilities including KMC - A highly scalable setup could leverage adaptive computing technology - User management in an LDAP directory available in the DMZ - Load balancer needed for workload distribution between system instances - Usage of an application gateway/reverse proxy for securing internet access CRM: The example illustrates the integration of various CRM applications in a Central Portal scenario using the CRM Business Package. **Self-Services Portal** **Business case:** Self-Services Portal providing access to self-services from the SAP ERP back end system **Applications and Content:** - SAP ERP Business Packages (e.g. Employee or Manager Self-Service) - Other applications (e.g. Web Dynpro based) for ordering equipment, booking rooms or making travel arrangements from within the corporate portal **Network Infrastructure:** - Accessible from the Intranet - Accessible for certain functionality only via RFC/SNC (e.g. 
Web Dynpro applications, ITS-scenarios) from the customer portal **Portal Infrastructure:** - Usage types AS Java + EP Core + relevant business content from ERP - User management → ERP system (which is synchronized with the Directory Server via transaction LDAP) - Integrated into the corporate portal by means of FPN - Keep the content administration within the XSS-Portal - No release dependencies between XSS-scenarios and the corporate portal **Example HCM/HR Integration** **HCM/HR:** Integration of ESS Travel Expense Management exists in two versions. Depending on the version of the Business Package used (Web Dynpro Java vs. Web Dynpro ABAP), different scenarios exist. Composition Environment Portal Business case: Portal for running composite applications Applications and Content: - Composite Applications (using Java EE 5 or Web Dynpro technology) built with the Composition Environment: - Visual Composer based modeling - SAP NetWeaver Developer Studio Network Infrastructure: - Only accessible from the intranet (no internet connection allowed) - Access to the relevant backend systems that provide enterprise services for the composition tools Portal Infrastructure: - Composition Environment: installation option “Composition Tools” including the portal platform (no KM or Collaboration available) - User management → Directory Server (e.g. ADS) - Composite applications integrated into the corporate portal by means of FPN (CE serves as runtime for the composite applications) CE: Integration of e.g. 
Visual Composer based iViews created in SAP NetWeaver Composition Environment are integrated via an FPN scenario into SAP NetWeaver Portal 7.0 **Reporting Portal** **Business case:** Portal for managing and performing reporting activities to avoid high load on the central portal **Applications and Content:** - Business Intelligence web reporting - Information Broadcasting **Network Infrastructure:** - Only accessible from the intranet (no internet connection allowed) **Portal Infrastructure:** - Usage types AS ABAP + AS Java + EPC/EP + BI-Java and BI - User management ABAP: synchronization of user data with ADS (transaction LDAP) - Login only allowed via SSO: end users will not get passwords for the BI system - Integrated into the corporate portal by means of FPN - Keep the content administration within the BI Java Portal - Dependencies between BI-Java and the corporate portal: due to the 1:1 relationship, every additional BI-Java front end needs a separate portal Example BI Integration **BI:** Using BI reports in a BI Portal scenario can be done by accessing the BI Portal directly or by integrating the reports via an FPN scenario in combination with a central SAP NetWeaver Portal 7.0. 
Corporate Portal Business case: Central corporate portal for all employees Applications and Content: - Managed content for corporate news, articles and department sites using Web Page Composer - Document management for providing downloads and services such as subscriptions - Collaboration rooms for sharing knowledge and collaboratively working on documents - Approval workflows via Universal Worklist - SAP transactions for information workers/non-power users - Self-Services scenarios for all employees - Access to BI Web reporting for special user groups such as managers Network Infrastructure: - Accessible only from the Intranet via secured connection or by means of VPN/WTS - Connect remote locations via a Web Accelerator solution: “Application Delivery over WAN” Portal Infrastructure: - AS Java + EPC/EP to get full enterprise portal capabilities (portal, KM, collaboration, UWL) - User management: Directory Server (e.g. ADS) - Authentication for the end user via SPNego (Kerberos) “Windows SSO” - BI-Java and Self-Services integrated via FPN (using Remote Role Assignment) - HA infrastructure with switchover solution and continuous availability concept to reduce planned downtimes during maintenance windows (shadow system: clone-update-rollback) - Load balancer needed for workload distribution between instances - TRex installation shared with the BI reporting portal Building a Portal Infrastructure Where should the following components be located? **Generally** - If there is no direct interaction between a web client and the application server (e.g. 
a SAP Backend that is called via JCO by a portal iView) keep it in the “high security area” - Web applications that are called by the client should reside in the “Inner DMZ” **Database Server** - Located “close” to the respective SAP NetWeaver AS to optimize: - performance - session stability - latency - Install the database in the same network zone as the application server **LDAP directory** - For external users: within the DMZ - For internal users (or in case of unique user persistence): in the backend (since it is used by other applications also; e.g. ADS) Please distinguish Operating System users and Java AS-users Where should the following components be located? - **TRex** - As the TRex only interacts with a server in the DMZ (e.g. Portal/KMC or ISA) it can be considered a backend server and therefore located in the high security area. - **KM-Repositories** - CM-Repository is normally located in the database (e.g. setting “db only”) - Other repositories: depends on the repository type and the access that is provided (could be located in high security area or Inner DMZ) - **ITS / SAPGUI for HTML (aka WebGUI) / BSP-Applications / BEx-Web-Applications** - Likely to be accessed directly by the client (exception BW-FullProxyMode) - Due to backend nature should reside in the high security area - May need additional gateway in the Inner DMZ (e.g. SAP Web Dispatcher, Reverse Proxy) - In case of non integrated ITS WGate should be located in Inner DMZ, AGate in high security area - **Application Gateway / Load balancer** - Load balancing between Application Gateways - Application Gateway to protect the Load Balancer - Depends on scenario specifics (e.g. Sizing of Application Gateway) - Typically Application Gateway protects Load Balancer - **Application-specifics** - Check requirements for additional components that might be needed for the respective business scenario (e.g. CRM-ISA, HR-eRecruitment, LAC etc.) 
Potential Complications Infrastructure complexity increases if - Applications are accessed by employees from various locations - Regional subsidiaries - Global subsidiaries - Kiosk access - Connection via WAN, dial-up, satellite - Applications are accessed by customers, partners, suppliers, … - Dial-up connection - Browser requirements - … Increased complexity may require usage of additional infrastructure components - WAN acceleration products like SAP Application Delivery over WAN - Web Cache/Web Proxy - Terminal Server - Virtual Private Network Hardware Infrastructure Some common hardware infrastructure - **Firewalls** - Security for access control, user authentication, and network and application-level attack protection - **Web Appliances** - Scalable approach to accelerating application performance, increasing WAN capacity, and enabling application prioritization and visibility - **Load Balancers** - Provide means to scale your application infrastructure and facilitate HA and switchover solutions by distributing load to clusters of servers - **Application Gateways** - Protect applications from direct access by clients - Can also provide performance improvements when used in combination with caching General Thoughts on Security How to make this landscape “secure”? - No two companies are the same: answers range from everything must be encrypted to nothing is secured (totally customer specific security requirements) - The usage of different network zones is strongly recommended - The “first line of defense” is likely the most crucial component - Usage of an Application Gateway recommended (could be anything from Apache up to highly sophisticated hardware solution) - Only expose what is really needed for the business application (e.g. opening port, positioning servers in the infrastructure) - Without proper monitoring and operation there is only limited security - “Security by obscurity” is not sufficient (e.g. 
switch between unsecured protocols, usage of “hidden” URLs) - The infrastructure is important – but application layer security is crucial (password policy, security zones, role concept, ACLs) - Establish secure connections via SSL between the different components (e.g. using Login Modules / SSO trust relationships) Portal Security Considerations As the SAP NetWeaver Portal runs on SAP NetWeaver AS Java, make sure you have followed these suggestions: - Modify all access restrictions to allow required but minimal access only - Apply all available and recommended patches regularly for all components used in SAP NetWeaver - Modify the portal permissions for iViews and security zones to provide users with exactly the permissions they require and not more - In a portal installation that will be used productively, remove all iViews that are not required (using reports of the support platform) - Delegate administration tasks among several users - Disable user self-registration if not required - Perform a comprehensive security assessment following your specific secure programming guide (especially for custom-built applications) - Create awareness for secure behavior Ensuring Data Security for Collaboration Rooms Security aspects for external-facing collaboration rooms - External users can have access to user data and collaboration services - Control to which extent user data is accessible and which collaboration services are displayed (via role concepts, permissions, ACLs) - Regularly perform proper monitoring to identify usage / attacks - Allow only certain MIME types for upload (configurable) - Deactivate unnecessary KM and collaboration services Differentiating the display of user data - SAP supplies an extension that can be activated and configured. The configuration settings (e.g. 
People Rendered Profile) apply to the following: - Search for users, groups and roles - Display of users, groups and roles - Access to collaboration services - Sending e-mails from within the portal High Availability (HA) Solutions for SAP NetWeaver AS Depending on the capabilities of the different systems/applications used, the optimum HA setup may differ. SAP NetWeaver AS: Architectural (Potential) Single Points of Failure 1. Central Services 2. Central Database 3. Load balancer and other web infrastructure components Besides these architectural SPOFs, the central file share ("/sapmnt/...") also represents a SPOF from a technical (installation) point of view. DB and SCS, Each in Its Own Switchover Group, CI Outside the Switchover Environment Java CI installed outside the switchover environment ENQ and MSG server are installed separately within their own switchover group Enqueue Replication Server active on the ‘passive’ host The database has its own switchover environment and switchover group Planned/Scheduled Downtime Frequency ranges from weekly to yearly; duration ranges from short (roughly 0.5–2 hours) to long (10–15 hours): - Weekly (high frequency): offline backups with split-mirror, kernel upgrades, profile parameter changes - Monthly: transports - Quarterly: support packages, end of daylight saving time, database reorganizations, release upgrades - Yearly (low frequency): offline backups without split-mirror Portal Sizing & Performance – Overview The portal on SAP NetWeaver AS Java has specific requirements for sizing, performance, scalability across multiple servers and load balancing. In a complex infrastructure there are different components besides the portal that may influence the performance: - Backend systems and databases - Network, firewalls, router / dispatcher, etc. 
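The impact of the SPOFs listed above can be made concrete with a simple serial-availability calculation: a chain of components is only up when every component is up, so any single point of failure caps the whole system. A minimal sketch (the availability figures are hypothetical; real values depend on the concrete hardware and switchover setup):

```python
# Availability of components in series: availabilities multiply, so any
# single point of failure caps the total availability of the system.
def serial_availability(*component_availabilities):
    total = 1.0
    for a in component_availabilities:
        total *= a
    return total

# Hypothetical 99.9% availability each for the three architectural SPOFs:
# Central Services, Central Database, load balancer / web infrastructure.
a = serial_availability(0.999, 0.999, 0.999)          # ~0.997
downtime_hours_per_year = (1 - a) * 8760              # ~26 hours/year
```

This is why the slide pairs each SPOF with its own switchover group: removing any one of the three from the serial chain noticeably shrinks the expected downtime.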
Concrete portal sizing recommendations depend on - Number of users (named / anonymous) - User types (active, concurrent, …) - User activities (navigation steps per time unit) - Amount and structure of (customer-specific) content (HTML, GUI, …) Sizing Guide on SAP Service Marketplace “Sizing SAP NetWeaver Portal” General information – QuickLink /sizing, /benchmark and /performance Example for Sizing Process 1) Initial Sizing with SAP QuickSizer - QuickSizer as tool for initial sizing delivers SAPS number as result and input for hardware vendors (http://service.sap.com/quicksizer) - Providing first suggestions for hardware budgeting & planning 2) Configuration and landscaping - Setup of the infrastructure and configuration of systems / server 3) Expert Sizing: Customer Performance Tests - Stress Tests, Performance Load Tests - Detect about 80% of all larger performance issues in test systems - Recommended before “Going Live” 4) Re-Sizing / Optimization - Re-sizing due to further portal implementations (e.g. new Business Packages and customer-specific development of new applications) Different Times, Different Phases, Different Goals of Sizing Sizing takes place in different phases of a project - Very early to plan hardware expenditures - A few months before live start to verify assumptions - Determine the overall performance requirements - During production stages to ensure operations and verify/adjust estimations made earlier. 
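For step 1 (initial sizing), a back-of-envelope SAPS estimate can frame the hardware budget before running the Quick Sizer. The factor used below, 3 SAPS per 100 dialog steps per hour, is back-derived from the Web Dynpro example in this deck (1,500 scenarios per hour at 10 steps adding 450 SAPS); it is an illustrative assumption, not the official Quick Sizer algorithm:

```python
# Back-of-envelope SAPS estimate for a dialog-step-driven portal load.
# The factor (3 SAPS per 100 dialog steps/hour) is back-derived from the
# deck's Web Dynpro example; the real Quick Sizer applies scenario-specific
# weights per element, user type and think time.
def estimate_saps(scenarios_per_hour, steps_per_scenario):
    dialog_steps_per_hour = scenarios_per_hour * steps_per_scenario
    return dialog_steps_per_hour * 3 / 100

print(estimate_saps(1500, 10))  # 450.0, matching the deck's example
```

Such an estimate only supports hardware budgeting (phase 1); the expert sizing and stress tests in steps 3–4 remain necessary before going live.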
“Trigger events” include: - Upgrade database, operating system, SAP application - Reconfigure system landscape - Change business process - Rollouts: more users or other load ### Possible Definitions for Different Types of Sizing <table> <thead> <tr> <th>Hardware Budget Sizing</th> <th>Advanced Sizing</th> <th>Expert Sizing</th> </tr> </thead> <tbody> <tr> <td>Smaller companies</td> <td>Medium to large companies</td> <td>Large or complex projects</td> </tr> <tr> <td>Very simple algorithms</td> <td>Throughput estimates</td> <td>Additional guidelines</td> </tr> <tr> <td>Assumptions, likelihoods</td> <td>Questionnaires, formulas</td> <td>Custom calculations</td> </tr> <tr> <td>Level setting of project</td> <td>Usage of standard tools</td> <td>Analysis of custom coding</td> </tr> <tr> <td>Risk identification</td> <td>Focus on core business processes</td> <td>Custom sizing guidelines</td> </tr> </tbody> </table> ### Initial Sizings - **Re-Sizing** - All projects - SAP system monitors - Goal: Extend an existing system by load - E.g. by volume 100 additional users who’ll do the same as the current productive ones - **Delta Sizing** - All projects - SAP system monitors - Goal: Extend an existing system by functions - By different functions, e.g. 
you are live with CRM and want to add SCM - **Upgrade Sizing** - All projects - SAP system monitors - SAP Notes - Goal: Upgrade SAP software ### Production Sizings – whenever there is a change in throughput, sizing must be done ### Some Factors That Influence Sizing <table> <thead> <tr> <th>Impacts on sizing</th> <th>HW Platform</th> <th>SAP Software</th> <th>System Settings</th> <th>Customizing</th> </tr> </thead> <tbody> <tr> <td></td> <td>Processor technology</td> <td>Release</td> <td>Parameterization</td> <td>Set up of business processes</td> </tr> <tr> <td></td> <td>Disk technology</td> <td>OLTP or OLAP</td> <td>Interfaces</td> <td>Organizational structures</td> </tr> <tr> <td></td> <td>Network technology</td> <td>Industry solutions</td> <td>Security settings</td> <td>Business process design</td> </tr> <tr> <td></td> <td>System infrastructure</td> <td></td> <td>Unicode</td> <td></td> </tr> <tr> <td></td> <td></td> <td></td> <td>A2A, B2B scenario</td> <td></td> </tr> </tbody> </table> <table> <thead> <tr> <th>Customer profile</th> <th>Custom Coding, 3rd Party</th> <th>Data Volume</th> <th>Disk Growth</th> <th>User Behavior</th> </tr> </thead> <tbody> <tr> <td></td> <td>Performance impact</td> <td>Time frame for high volume processing</td> <td>Avoiding data</td> <td>Concurrency</td> </tr> <tr> <td></td> <td>Scalable</td> <td>Background processing, parallel jobs</td> <td>Archiving strategies</td> <td>LAN/WAN</td> </tr> <tr> <td></td> <td>Business process design</td> <td>Reporting</td> <td>Information Lifecycle Mgmt.</td> <td>Internet/intranet</td> </tr> <tr> <td></td> <td></td> <td>Data distribution</td> <td></td> <td>Activity, e.g.</td> </tr> <tr> <td></td> <td></td> <td></td> <td></td> <td>*-Search</td> </tr> <tr> <td></td> <td></td> <td></td> <td></td> <td>Efficient navigation</td> </tr> </tbody> </table> <table> <thead> <tr> <th>Responsibility of</th> <th>Technology Partner</th> <th>SAP</th> <th>Customer</th> </tr> </thead> </table> © SAP 2008 / SAP TechEd 08 
/ UP202 Page 58 Using the Quick Sizer – Input Parameters [Screenshot: Quick Sizer input screen, project OSE01 (work days: 320, status: in progress). Table 1, Active Users – Enterprise Portal: element NX4WP-ESS (API A, TI S, consider: 125, think time: 300); element NX4WP-NNT (API A, TI S, consider: 1,800, think time: 211, Java iV: 1, URL iV: 2, % RNC: 50; short text: “Information Browser scenario - bandwidth of different use cases calculated”). Table 2, Active Users – Enterprise Portal Logon: element NX4WP-LOGO (API A, TI H, max. no. of logons: 6,800).] **Comment** The load of the Web Dynpro scenario itself has to be calculated as described in the additional guidelines. In this example, 1,500 scenarios per hour averaging 10 data steps will add another 450 SAPS to the results of the pure portal scenario if the Web Dynpro runtime is located on the portal servers. Solution Manager – Tool to Manage the Entire SAP Solution Landscape - SAP Solution Manager is a platform that provides the integrated content, tools, and methodologies that you need to implement, support, operate and monitor your enterprise's solutions from SAP. - With SAP Solution Manager, companies can minimize risk and increase the reliability of their IT solutions. - SAP Solution Manager helps reduce TCO throughout the solution life cycle. - SAP Solution Manager helps companies manage their core business processes and link business processes to the underlying IT infrastructure. 
Solution Manager Diagnostics - Diagnostic capabilities for support of the SAP NetWeaver platform (especially Java components) - Root cause analyses - OS and DB monitoring - Configuration tracking - Component version and software change reporting - HTTP analysis - … Managing Portal Infrastructure SAP Solution Manager Diagnostics - Root cause analysis - End to end exceptions - Viewing recent changes SAP NetWeaver Administrator - Viewing logs and traces - Viewing configuration CA Wily Introscope - Long running DB queries - Memory issues - Backend system connections Details in LCM273 Transport Management for Portal Content Three methods offered by SAP - **Change and Transport System (CTS)** – for ABAP and Java content; CTS+ (enhancements based on SAP NetWeaver 7.0 SPS 13) - provides transport logistics for portal content: par, ear, sca and sda files can be transported and deployed - can easily be used from within existing portal landscapes - CTS+ is THE transport tool at SAP for both worlds, ABAP and Java - **Export/Import Mechanism** – for portal content (epa or XML file) - Transport package contains coding or portal content only - **SAP NetWeaver Development Infrastructure (NWDI)** – for Java content Through use of the tools and a manual process, implement a coherent transport management strategy [Diagram “One Transport Order”: content is checked in from the ABAP Workbench (SE80), the Exchange Infrastructure Integration Builder, the Developer Studio with NWDI, and the Enterprise Portal Content Administrator (open interface for non-ABAP objects: TPZ, SCA, EPA files) into the Change and Transport System, then transported and deployed across the development, quality and production landscapes (quality/production components 1…n).] Transport of: - **Java-based and J2EE-based objects** - Software Component Archives (SCAs) - Software Deployment Archives (SDAs) - Enterprise Application Archives (EARs) - **Portal-based objects** - Enterprise Portal Archives (EPAs) - Portal Application Archives (PARs) - Knowledge Management objects (KM Content and KM Configurations) *(SPS14)* - **PI/XI-based objects** - Integration Builder Objects (TPZs) - **SLD Content** *(SPS13)* - any files (.doc, .xls, .xml, …) Deployment options: - SDM - XI - SLD - FS [Diagram “Transporting Non-ABAP Changes”: the CTS+ ABAP Transport Controller on the SAP NetWeaver Application Server routes transports along the logical route through virtual QAS and PRD systems (new system type: virtual non-ABAP system), while the physical route deploys from Java DEV to Java QAS to Java PRD; transport parameters contain the deploy options. Legend: dashed line – logical transport route of non-ABAP objects; solid line – physical transport route of non-ABAP objects; orange line – check-in/check-out of non-ABAP objects; solid line – transport route of ABAP objects.] 1. Overview 1.1. Portal Implementation Scenarios 1.2. Focus Area Corporate Portals 2. User Productivity Infrastructure 2.1. Portal Deployment Options 2.2. Portal Scaling 3. Building the Portal Infrastructure 3.1. Security Aspects, HA, Scheduled Downtimes 3.2. Sizing, Monitoring, Transporting **3.3. Figures from SAP Corporate Portal** 4. Summary 4.1. Summary 4.2. 
Further Information, Notes, Blogs SAP Corporate Portal Key Facts - 60,000 end users - Available in 70 countries - 500,000 documents in managed content, 1,000,000 documents in collaboration rooms - 35,000 managed web pages - Penetration rate: 99.6% of potential users - 25+ backend systems integrated: - SAP ERP (HR/FI) - Business Suite (CRM/RPM) - NetWeaver BI/XI - Legacy/3rd Party - Process Integration - Sales/Marketing - Manager & Employee Self Services - Executive/Management Reporting - Content publishing environment: - Document upload (KM) - Online web page editing (custom online web editing tool – WCMS) - 80+ workflows - Community tools: - Virtual workspaces for teams & projects - Discussion forums - Wiki - Podcasts … further evidence that SAP runs SAP 6 Tier Design determined to be best practice, allowing: - Rock solid release cycles - Flexibility on continuous improvements - Quality assurance Transports are categorized by analyzing their type and impact on the system. - Transports have a different set of testing criteria based on category. - Transports have a different release schedule based on category. Transport Types: <table> <thead> <tr> <th>Category</th> <th>What is it?</th> <th>Release Schedule</th> <th>Testing Required</th> </tr> </thead> <tbody> <tr> <td>1</td> <td>New applications</td> <td>Scheduled dates 6 x per year</td> <td>✓✓✓</td> </tr> <tr> <td>2</td> <td>Bug fixes / minor application updates; any transport that requires restart</td> <td>Weekly</td> <td>✓✓</td> </tr> <tr> <td>3</td> <td>Application maintenance</td> <td>Weekly</td> <td>✓✓</td> </tr> <tr> <td>4</td> <td>Content only</td> <td>Weekly</td> <td>✓</td> </tr> </tbody> </table> * Restrictions exist based on critical business support (e.g. Quarter End Close). 1. Overview 1.1. 
Portal Implementation Scenarios 1.2. Focus Area Corporate Portals 2. User Productivity Infrastructure 2.1. Portal Deployment Options 2.2. Portal Scaling 3. Building the Portal Infrastructure 3.1. Security Aspects, HA, Scheduled Downtimes 3.2. Sizing, Monitoring, Transporting 3.3. Figures from SAP Corporate Portal 4. Summary 4.1. Summary 4.2. Further Information, Notes, Blogs Summary - **Uniqueness** – Every portal project deals with totally customer-specific scenarios and requirements. - **SAP Support** – SAP provides various capabilities, tools and services to support your specific business scenarios. - **Flexibility** – You can use different building blocks to enhance your infrastructure smoothly, step by step. - **Strategy** – A well-defined platform and portal strategy is the basis for all project activities. - **Project planning** – A proper preparation phase is key to a successful implementation. - **Scope** – Portal projects typically cover real cross-enterprise topics that might span departments and functional roles. - **Alignment** – You need to talk to various people to align all the different topics and requirements. Summary - **Content** – Content is what matters to the users – not technology. - **User** – Your users have special preferences in terms of intuitive navigation, usability and the content they expect to find in the portal. - **Value** – A portal project can only be successful if it delivers significant value to the end users. That is the reason why a portal is much more than just a trendy GUI. - **Subsequent projects** – A portal often acts as an originator for a number of other use cases. SAP NetWeaver projects trigger subsequent implementation projects. - **Knowledge** – An SAP NetWeaver project requires a wide range of skills and knowledge within and outside of the project team. - **Business Case** – A solid business case is vital to any SAP NetWeaver project! 
Further Information SAP Public Web: SAP Developer Network (SDN): http://www.sdn.sap.com/ General Portal Information: https://www.sdn.sap.com/irj/sdn/netweaver Portal on SDN: http://www.sdn.sap.com/irj/sdn/nw-portalandcollaboration Search for SAP Notes: http://service.sap.com/notes Product Availability Matrix http://service.sap.com/pam SP Stack Schedule http://service.sap.com/sp-stacks CTS+: https://www.sdn.sap.com/irj/sdn/cts http://www.sap.com/platform/netweaver/itpractices/userproductivity.epx Related SAP Education and Certification Opportunities http://www.sap.com/education/ Further Information Technical Documentation: - High Availability: http://service.sap.com/HA - Minimizing Effects of Planned Downtime - How To guide "Optimizing Network Traffic in EP 6.0" available in http://service.sap.com/nw-howtouides - SAP NW Support Platform http://help.sap.com/saphelp_nw70/helpdata/en/43/0f55d0a1c52ba8e10000000a1553f6/frameset.htm - Portal Finetuning: http://service.sap.com/~sapidb/011000358700001480992005E.PDF - KMC in EFP https://www.sdn.sap.com/irj/irj/doc/library/uuid/60fd3fc2-e0a4-2910-80bc-a45987574922 - SAP Best Practices for Portals V1.70 http://help.sap.com/content/bestpractices/crossindustry/bestp_based_netweaver.htm - SAP Web Dispatcher and SSL - Release Notes http://help.sap.com/saphelp_nw70/helpdata/en/57/a21f407b402402e10000000a1550b0/frameset.htm - ASAP Methodology https://service.sap.com/roadmaps - Restriction notes: 853509, 916545 Notes - 916545 - Central Note for External-Facing Portal (NW04s) - 877188 - Central Note for External-Facing Portal (NW04) - 837898 - CM >= NW04 SPS12: How to configure anonymous CM access - 913367 - Anonymous users unable to open specific pages - 870247 - Using named anonymous users - 933452 - External-Facing Portal and Search Engine Indexing - 893855 - EFP - hotfix for support of Quick Links for anonymous users Blogs - Nuts and Bolts of the External Facing Portal - EFP: Navigation 
and Framework Tag Libraries - EFP: Layout Tag Library - EFP: Navigation Caching - EFP: Short URLs - EFP: Quick Links - Short(ening) Portal URLs - Changes in the Navigation Cache - Multilingual External Facing Portal with Different Contents Thank you! Fuel your Career with SAP Certification What the industry is saying - “Teams with certified architects and developers deliver projects on specification, on time, and on budget more often than other teams.” 2008 IDC Certification Analysis - “82% of hiring managers use certification as a hiring criteria.” 2008 SAP Client Survey - “SAP Certified Application Professional status is proof of quality, and that’s what matters most to customers.”* Conny Dahlgren, SAP Certified Professional Take advantage of the enhanced, expanded and multi tier certifications from SAP today! Please complete your session evaluation. Be courteous — deposit your trash, and do not take the handouts for the following session. Thank You!
MICA: A Holistic Approach to Fast In-Memory Key-Value Storage Hyeontaek Lim,1 Dongsu Han,2 David G. Andersen,1 Michael Kaminsky3 1Carnegie Mellon University, 2KAIST, 3Intel Labs Abstract MICA is a scalable in-memory key-value store that handles 65.6 to 76.9 million key-value operations per second using a single general-purpose multi-core system. MICA is over 4–13.5x faster than current state-of-the-art systems, while providing consistently high throughput over a variety of mixed read and write workloads. MICA takes a holistic approach that encompasses all aspects of request handling, including parallel data access, network request handling, and data structure design, but makes unconventional choices in each of the three domains. First, MICA optimizes for multi-core architectures by enabling parallel access to partitioned data. Second, for efficient parallel data access, MICA maps client requests directly to specific CPU cores at the server NIC level by using client-supplied information and adopts a light-weight networking stack that bypasses the kernel. Finally, MICA's new data structures—circular logs, lossy concurrent hash indexes, and bulk chaining—handle both read- and write-intensive workloads at low overhead. 1 Introduction In-memory key-value storage is a crucial building block for many systems, including popular social networking sites (e.g., Facebook) [36]. These storage systems must provide high performance when serving many small objects, whose total volume can grow to TBs and more [5]. While much prior work focuses on high performance for read-mostly workloads [15, 30, 32, 37], in-memory key-value storage today must also handle write-intensive workloads, e.g., to store frequently-changing objects [2, 5, 36]. Systems optimized only for reads often waste resources when faced with significant write traffic; their inefficiencies include lock contention [32], expensive updates to data structures [15, 30], and complex memory management [15, 32, 36]. 
In-memory key-value storage also requires low-overhead network communication between clients and servers. Key-value workloads often include a large number of small key-value items [5] that require key-value storage to handle short messages efficiently. Systems using standard socket I/O, optimized for bulk communication, incur high network stack overhead at both kernel- and user-level. Current systems attempt to batch requests at the client to amortize this overhead, but batching increases latency, and large batches are unrealistic in large cluster key-value stores because it is more difficult to accumulate multiple requests being sent to the same server from a single client [36]. MICA (Memory-store with Intelligent Concurrent Access) is an in-memory key-value store that achieves high throughput across a wide range of workloads. MICA can provide either store semantics (no existing items can be removed without an explicit client request) or cache semantics (existing items may be removed to reclaim space for new items). Under write-intensive workloads with a skewed key popularity, a single MICA node serves 70.4 million small key-value items per second (Mops), which is 10.8x faster than the next fastest system. For skewed, read-intensive workloads, MICA’s 65.6 Mops is at least 4x faster than other systems even after modifying them to use our kernel bypass. MICA achieves 75.5–76.9 Mops under workloads with a uniform key popularity. MICA achieves this through the following techniques: Fast and scalable parallel data access: MICA’s data access is fast and scalable, using data partitioning and exploiting CPU parallelism within and between cores. Its EREW mode (Exclusive Read Exclusive Write) minimizes costly inter-core communication, and its CREW mode (Concurrent Read Exclusive Write) allows multiple cores to serve popular data. MICA’s techniques achieve consistently high throughput even under skewed workloads, one weakness of prior partitioned stores. 
Network stack for efficient request processing: MICA interfaces with NICs directly, bypassing the kernel, and uses client software and server hardware to direct remote key-value requests to appropriate cores where the requests can be processed most efficiently. The network stack achieves zero-copy packet I/O and request processing. New data structures for key-value storage: New memory allocation and indexing in MICA, optimized for store and cache separately, exploit properties of key-value workloads to accelerate write performance with simplified memory management. 2 System Goals In this section, we first clarify the non-goals and then discuss the goals of MICA. Non-Goals: We do not change the cluster architecture. It can still shard data and balance load across nodes, and perform replication and failure recovery. We do not aim to handle large items that span multiple packets. Most key-value items will fit comfortably in a single packet [5]. Clients can store a large item in a traditional key-value system and put a pointer to that system in MICA. This only marginally increases total latency; one extra round-trip time for indirection is smaller than the transfer time of a large item sent in multiple packets. We do not strive for durability: all data is stored in DRAM. If durability is required, log-based mechanisms such as those in RAMCloud [37] could be used to allow data to persist across power failures or reboots. MICA instead strives to achieve the following goals: High single-node throughput: Sites such as Facebook replicate some key-value nodes purely to handle load [36]. Faster nodes may reduce cost by requiring fewer of them overall, reducing the cost and overhead of replication and invalidation. High-speed nodes are also more able to handle load spikes and popularity hot spots. Importantly, using fewer nodes can also reduce job latency by reducing the number of servers touched by client requests.
A single user request can create more than 500 key-value requests [36], and when these requests go to many nodes, the time until all replies arrive increases, delaying completion of the user request [10]. Having fewer nodes reduces fan-out, and thus, can improve job completion time. Low end-to-end latency: The end-to-end latency of a remote key-value request greatly affects performance when a client must send back-to-back requests (e.g., when subsequent requests are dependent). The system should minimize both local key-value processing latency and the number of round-trips between the client and server. Consistent performance across workloads: Real workloads often have a Zipf-distributed key popularity [5], and it is crucial to provide fast key-value operations regardless of skew. Recent uses of in-memory key-value storage also demand fast processing for write-intensive workloads [2, 36]. Handle small, variable-length key-value items: Most key-value items are small [5]. Thus, it is important to process requests for them efficiently. Ideally, key-value request processing over the network should be as fast as packet processing in software routers—40 to 80 Gbps [12, 19]. Variable-length items require careful memory management to reduce fragmentation that can waste substantial space [5]. Key-value storage interface and semantics: The system must support standard single-key requests (e.g., GET(key), PUT(key, value), DELETE(key)) that are common in systems such as Memcached. In cache mode, the system performs automatic cache management that may evict stored items at its discretion (e.g., LRU); in store mode, the system must not remove any stored items without clients’ permission while striving to achieve good memory utilization. Commodity hardware: Using general-purpose hardware reduces the cost of development, equipment, and operation. 
Today’s server hardware can provide high-speed I/O [12, 22], comparable to that of specialized hardware such as FPGAs and RDMA-enabled NICs. Although recent studies tried to achieve some of these goals, none of their solutions comprehensively address them. Some systems achieve high throughput by supporting only small fixed-length keys [33]; many rely on client-based request batching [15, 30, 33, 36] to amortize high network I/O overhead, which is less effective in a large installation of key-value stores [14]; some use specialized hardware, often with multiple client-server round-trips and/or no support for item eviction (e.g., FPGAs [7, 29], RDMA-enabled NICs [35]); and others do not specifically address remote request processing [45]. Many focus on uniform and/or read-intensive workloads; several systems lack evaluation for skewed workloads [7, 33, 35], and some systems have lower throughput for write-intensive workloads than read-intensive workloads [30]. Several systems attempt to handle memory fragmentation explicitly [36], but there are scenarios where the system never reclaims fragmented free memory, as we describe in the next section. The fast packet processing achieved by software routers and low-overhead network stacks [12, 19, 20, 41, 43] sets a bar for how fast a key-value system might operate on general-purpose hardware, but does not teach how their techniques apply to the higher-level processing of key-value requests. 3 Key Design Choices Achieving our goals requires rethinking how we design parallel data access, the network stack, and key-value data structures. We make an unconventional choice for each; we discuss how we overcome its potential drawbacks to achieve our goals. Figure 1 depicts how these components fit together. 3.1 Parallel Data Access Exploiting the parallelism of modern multi-core systems is crucial for high performance.
The most common access models are concurrent access and exclusive access: Concurrent access is used by most key-value systems [15, 30, 36]. As in Figure 2 (a), multiple CPU cores can access the shared data. The integrity of the data structure must be maintained using mutexes [36], optimistic locking [15, 30], or lock-free data structures [34]. Unfortunately, concurrent writes scale poorly: they incur frequent cache line transfer between cores, because only one core can hold the cache line of the same memory location for writing at the same time. **Figure 1:** Components of in-memory key-value stores. MICA’s key design choices in §3 and their details in §4. **Figure 2:** Parallel data access models. **Exclusive access** has been explored less often for key-value storage [6, 25, 33]. Only one core can access part of the data, as in Figure 2 (b). By partitioning the data (“sharding”), each core exclusively accesses its own partition in parallel without inter-core communication. Prior work observed that partitioning can have the best throughput and scalability [30, 45], but cautions that it lowers performance when the load between partitions is imbalanced, as happens under skewed key popularity [15, 30, 45]. Furthermore, because each core can access only data within its own partition, request direction is needed to forward requests to the appropriate CPU core. **MICA’s parallel data access:** MICA partitions data and mainly uses exclusive access to the partitions. MICA exploits CPU caches and packet burst I/O to disproportionately speed more loaded partitions, nearly eliminating the penalty from skewed workloads. MICA can fall back to concurrent reads if the load is extremely skewed, but avoids concurrent writes, which are always slower than exclusive writes. Section 4.1 describes our data access models and partitioning scheme. ### 3.2 Network Stack This section discusses how MICA avoids network stack overhead and directs packets to individual cores. 
#### 3.2.1 Network I/O Network I/O is one of the most expensive processing steps for in-memory key-value storage. TCP processing alone may consume 70% of CPU time on a many-core optimized key-value store [33]. The **socket I/O** used by most in-memory key-value stores [15, 30, 33, 45] provides portability and ease of development. However, it underperforms in packets per second because it has high per-`read()` overhead. Many systems therefore have clients include a batch of requests in a single larger packet to amortize I/O overhead. **Direct NIC access** is common in software routers to achieve line-rate packet processing [12, 19]. This raw access to NIC hardware bypasses the kernel to minimize the packet I/O overhead. It delivers packets in bursts to efficiently use CPU cycles and the PCIe bus connecting NICs and CPUs. Direct access, however, precludes useful TCP features such as retransmission, flow control, and congestion control. **MICA’s network I/O** uses direct NIC access. By targeting only small key-value items, it needs fewer transport-layer features. Clients are responsible for retransmitting packets if needed. Section 4.2 describes such issues and our design in more detail. #### 3.2.2 Request Direction Request direction delivers client requests to CPU cores for processing. Modern NICs can deliver packets to specific cores for load balancing or core affinity using hardware-based packet classification and multi-queue support. **Flow-level core affinity** is available using two methods: Receive-Side Scaling (RSS) [12, 19] sends packets to cores by hashing the packet header 5-tuple to identify the RX queue to target. Flow Director (FDir) [41] can more flexibly use different parts of the packet header plus a user-supplied table to map header values to RX queues. Efficient network stacks use affinity to reduce inter-core contention for TCP control blocks [20, 41].
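To make the flow-level mechanism concrete, here is a minimal sketch of RSS-style queue selection: hash the packet header 5-tuple and use the result to pick an RX queue (and thus a core). Real NICs use the Toeplitz hash with a configurable key, so the FNV-1a hash and queue count below are stand-ins for illustration only.

```python
import struct

NUM_RX_QUEUES = 8  # illustrative queue/core count (assumption)

def fnv1a(data: bytes) -> int:
    # FNV-1a as a stand-in for the NIC's Toeplitz hash (illustration only).
    h = 0xcbf29ce484222325
    for b in data:
        h = ((h ^ b) * 0x100000001b3) & 0xFFFFFFFFFFFFFFFF
    return h

def rss_queue(src_ip, dst_ip, src_port, dst_port, proto=17):
    # Hash the 5-tuple; the result indexes the RX queue, so all packets
    # of one flow land on the same core.
    five_tuple = struct.pack("!IIHHB", src_ip, dst_ip, src_port, dst_port, proto)
    return fnv1a(five_tuple) % NUM_RX_QUEUES

q = rss_queue(0x0A000001, 0x0A000002, 12345, 11211)
assert 0 <= q < NUM_RX_QUEUES
# The same flow always maps to the same queue:
assert q == rss_queue(0x0A000001, 0x0A000002, 12345, 11211)
```

Note that this mapping knows nothing about keys, which is exactly the limitation the text goes on to discuss.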
Flow affinity reduces only transport layer contention, not application-level contention [20], because a single transport flow can contain requests for any objects (Figure 3 (a)). Even for datagrams, the benefit of flow affinity is small due to a lack of locality across datagrams [36]. **Object-level core affinity** distributes requests to cores based upon the application’s partitioning. For example, requests sharing the same key would all go to the core handling that key’s partition (Figure 3 (b)). --- ¹Because we target small key-value requests, we will use requests and packets interchangeably. --- Systems using exclusive access require object-level core affinity, but commodity NIC hardware cannot directly parse and understand application-level semantics. Software request redirection (e.g., message passing [33]) incurs inter-core communication, which the exclusive access model is designed to avoid. **MICA’s request direction** uses Flow Director [23, 31]. Its clients then encode object-level affinity information in a way Flow Director can understand. Servers, in turn, inform clients about the object-to-partition mapping. Section 4.2 describes how this mechanism works. ### 3.3 Key-Value Data Structures This section describes MICA’s choice for two main data structures: allocators that manage memory space for storing key-value items and indexes to find items quickly. #### 3.3.1 Memory Allocator A **dynamic object allocator** is a common choice for storing variable-length key-value items (Figure 4 (a)). Systems such as Memcached typically use a slab approach: they divide object sizes into classes (e.g., 48-byte, 56-byte, ..., 1-MiB) and maintain separate (“segregated”) memory pools for these classes [15, 36]. Because the amount of space that each class uses typically varies over time, the systems use a global memory manager that allocates large memory blocks (e.g., 1 MiB) to the pools and dynamically rebalances allocations between classes.
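As a rough illustration of the slab approach just described, the sketch below picks the smallest size class that fits an item; the base size, growth factor, and alignment are assumptions for illustration, not Memcached's actual parameters.

```python
# Hypothetical size classes: start at 48 bytes and grow by ~1.25x up to 1 MiB,
# mimicking Memcached-style segregated pools (parameters are assumptions).
def size_classes(base=48, factor=1.25, limit=1 << 20):
    classes, size = [], base
    while size < limit:
        classes.append(size)
        size = int(size * factor) // 8 * 8  # keep 8-byte alignment
    classes.append(limit)
    return classes

CLASSES = size_classes()

def class_for(item_size):
    # Choose the smallest class that fits; the gap between item_size and
    # the class size is internal fragmentation.
    for c in CLASSES:
        if item_size <= c:
            return c
    raise ValueError("item too large for slab allocation")

assert class_for(48) == 48   # exact fit
assert class_for(49) == 56   # rounds up to the next class
```

Because demand per class shifts over time, a real allocator must also rebalance whole blocks between these pools, which is where the fragmentation problems discussed next arise.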
The major challenge for dynamic allocation is the memory fragmentation caused when blocks are not fully filled. There may be no free blocks or free objects for some size classes while blocks from other classes are partly empty after deletions. Defragmentation packs the objects of each size class tightly to make free blocks, which involves expensive memory copy. This process is even more complex if the memory manager performs rebalancing concurrently with threads accessing the memory for other reads and writes. **Append-only log structures** are write-friendly, placing new data items at the end of a linear data structure called a “log” (Figure 4 (b)). To update an item, it simply inserts a new item to the log that supersedes the previous value. Inserts and updates thus access memory sequentially, incurring fewer cache and TLB misses, making logs particularly suited for bulk data writes. This approach is common in flash memory stores due to the high cost of random flash writes [3, 4, 28], but has been used in only a few in-memory key-value systems [37]. Garbage collection is crucial to space efficiency. It reclaims space occupied by overwritten and deleted objects by moving live objects to a new log and removing the old log. Unfortunately, garbage collection is costly and often reduces performance because of the large amount of data it must copy, trading memory efficiency against request processing speed.
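To make the garbage-collection cost concrete, here is a toy sketch of a copying collector for an append-only log, assuming a simple key-to-offset index; the names and structure are illustrative, not MICA's.

```python
def append(log, index, key, value):
    # Append-only update: the newest entry for a key supersedes older ones.
    index[key] = len(log)          # index maps key -> offset of live entry
    log.append((key, value))

def collect(log, index):
    # Copying GC: move only live entries to a fresh log and rebuild the index.
    # The cost is proportional to the amount of live data copied.
    new_log, new_index = [], {}
    for key, offset in index.items():
        k, v = log[offset]
        new_index[key] = len(new_log)
        new_log.append((k, v))
    return new_log, new_index

log, index = [], {}
append(log, index, "a", 1)
append(log, index, "a", 2)   # supersedes the first entry for "a"
append(log, index, "b", 3)
log, index = collect(log, index)
assert len(log) == 2                   # the dead entry for "a" was reclaimed
assert log[index["a"]] == ("a", 2)
```

The copy in `collect` is exactly the overhead the text describes: reclaiming space requires touching every live object, trading memory efficiency against request processing speed.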
**MICA’s memory allocator**: MICA uses separate memory allocators for cache and store semantics. Its cache mode uses a log structure with inexpensive garbage collection and in-place update support (Section 4.3.1). MICA’s allocator provides fast inserts and updates, and exploits cache semantics to eliminate log garbage collection and drastically simplify free space defragmentation. Its store mode uses segregated fits [42, 47] that share a unified memory space to avoid rebalancing size classes (Section 4.3.3). #### 3.3.2 Indexing: Read-oriented vs. Write-friendly **Read-oriented index**: Common choices for indexing are hash tables [15, 33, 36] or tree-like structures [30]. However, conventional data structures are much slower for writes compared to reads; hash tables examine many slots before finding a space for the new item [15], and trees may require multiple operations to maintain structural invariants [30]. **Write-friendly index**: Hash tables using chaining [33, 36] can insert new items without accessing many memory locations, but they suffer a time-space tradeoff: with long chains (few hash buckets), an item lookup must follow a long chain of items, thus requiring multiple random dependent memory accesses; with short chains (many hash buckets), the memory overhead of storing chaining pointers increases. **Lossy data structures** are rather unusual in in-memory key-value storage and have been studied only in limited contexts [7], but they are the standard design in hardware indexes such as CPU caches [21]. **MICA’s index**: MICA uses new index data structures to offer both high-speed reads and writes. In cache mode, MICA’s lossy index also leverages the cache semantics to achieve high insertion speed; it evicts an old item in the hash table when a hash collision occurs instead of spending system resources to resolve the collision. By using the memory allocator’s eviction support, the MICA lossy index can avoid evicting recently-used items (Section 4.3.2).
The MICA lossless index uses bulk chaining, which allocates cache line-aligned space to a bucket for each chain segment. This keeps the chain length short and space efficiency high (Section 4.3.3). 4 MICA Design This section describes each component in MICA and discusses how they operate together to achieve its goals. 4.1 Parallel Data Access This section explains how CPU cores access data in MICA, but assumes that cores process only the requests for which they are responsible. Later in Section 4.2, we discuss how MICA assigns remote requests to CPU cores. 4.1.1 Keyhash-Based Partitioning MICA creates one or more partitions per CPU core and stores key-value items in a partition determined by their key. Such horizontal partitioning is often used to shard across nodes [4, 11], but some key-value storage systems also use it across cores within a node [6, 25, 33]. MICA uses a keyhash to determine each item’s partition. A keyhash is the 64-bit hash of an item’s key calculated by the client and used throughout key-value processing in MICA. MICA uses the first few high order bits of the keyhash to obtain the partition index for the item. Keyhash partitioning uniformly maps keys to partitions, reducing the request distribution imbalance. For example, in a Zipf-distributed population of size $192 \times 2^{30}$ (192 Mi) with skewness 0.99 as used by YCSB [9], the most popular key is $9.3 \times 10^{6}$ times more frequently accessed than the average; after partitioning keys into 16 partitions, however, the most popular partition is only 53% more frequently requested than the average. MICA retains high throughput under this remaining partition-level skew because it can process requests in “hot” partitions more efficiently, for two reasons. First, a partition is popular because it contains “hot” items; these hot items naturally create locality in data access. With high locality, MICA experiences fewer CPU cache misses when accessing items. 
Second, the skew causes packet I/O to be more efficient for popular partitions (described in Section 4.2.1). As a result, throughput for the Zipf-distributed workload is 86% of the uniformly-distributed workload, making MICA’s partitioned design practical even under skewed workloads. 4.1.2 Operation Modes MICA can operate in EREW (Exclusive Read Exclusive Write) or CREW (Concurrent Read Exclusive Write). EREW assigns a single CPU core to each partition for all operations. No concurrent access to partitions eliminates synchronization and inter-core communication, making MICA scale linearly with CPU cores. CREW allows any core to read partitions, but only a single core can write. This combines the benefit of concurrent read and exclusive write; the former allows all cores to process read requests, while the latter still reduces expensive cache line transfer. CREW handles reads efficiently under highly skewed load, at the cost of managing read-write conflicts. MICA minimizes the synchronization cost with efficient optimistic locking [48] (Section 4.3.2). Supporting cache semantics in CREW, however, raises a challenge for read (GET) requests: During a GET, the cache may need to update cache management information. For example, policies such as LRU use bookkeeping to remember recently used items, which can cause conflicts and cache-line bouncing among cores. This, in turn, defeats the purpose of using exclusive writes. To address this problem, we choose an approximate approach: MICA counts reads only from the exclusive-write core. Clients round-robin CREW reads across all cores in a NUMA domain, so this is effectively a sampling-based approximation to, e.g., LRU replacement as used in MICA’s item eviction support (Section 4.3.1). To show performance benefits of EREW and CREW, our MICA prototype also provides the CRCW (Concurrent Read Concurrent Write) mode, in which MICA allows multiple cores to read and write any partition. 
This effectively models concurrent access to the shared data in non-partitioned key-value systems. 4.2 Network Stack The network stack in MICA provides network I/O to transfer packet data between NICs and the server software, and request direction to route requests to an appropriate CPU core to make subsequent key-value processing efficient. Exploiting the small key-value items that MICA targets, request and response packets use UDP. Despite clients not benefiting from TCP’s packet loss recovery and flow/congestion control, UDP has been used widely for read requests (e.g., GET) in large-scale deployments of in-memory key-value storage systems [36] for low latency and low overhead. Our protocol includes sequence numbers in packets, and our application relies on the idempotency of GET and PUT operations for simple and stateless application-driven loss recovery, if needed: some queries may not be useful past a deadline, and in many cases, the network is provisioned well, making retransmission rare and congestion control less crucial [36]. 4.2.1 Direct NIC Access MICA uses Intel’s DPDK [22] instead of standard socket I/O. This allows our user-level server software to control NICs and transfer packet data with minimal overhead. MICA differs from general network processing [12, 19, 41] that has used direct NIC access in that MICA is an application that processes high-level key-value requests. In NUMA (non-uniform memory access) systems with multiple CPUs, NICs may have different affinities to CPUs. For example, our evaluation hardware has two CPUs, and MICA ensures that a CPU and NIC access only packet buffers stored in their respective NUMA domains. MICA uses NIC multi-queue support to allocate a dedicated RX and TX queue to each core. --- ³The $i$-th key constitutes $1/(i^{0.99}H_{n,0.99})$ of total requests, where $H_{n,0.99} = \sum_{i=1}^{n}(1/i^{0.99})$ and $n$ is the total number of keys. ---
Cores exclusively access their own queues without synchronization in a similar way to EREW data access. By directing a packet to an RX queue, the packet can be processed by a specific core, as we discuss in Section 4.2.2. **Burst packet I/O:** MICA uses the DPDK’s burst packet I/O to transfer multiple packets (up to 32 in our implementation) each time it requests packets from RX queues or transmits them to TX queues. Burst I/O reduces the per-packet cost of accessing and modifying the queue, while adding only trivial delay to request processing because the burst size is small compared to the packet processing rate. Importantly, burst I/O helps handle skewed workloads. A core processing popular partitions spends more time processing requests, and therefore performs packet I/O less frequently. The lower I/O frequency increases the burst size, reducing the per-packet I/O cost (Section 5.2). Therefore, popular partitions have more CPU available for key-value processing. A core serving unpopular partitions performs packet I/O more frequently with smaller bursts, paying a higher per-packet I/O cost, but it also has fewer requests to handle. **Zero-copy processing:** MICA avoids packet data copy throughout RX/TX and request processing. MICA uses MTU-sized packet buffers for RX even if incoming requests are small. Upon receiving a request, MICA avoids memory allocation and copying by reusing the request packet to construct a response: it flips the source and destination addresses and ports in the header and updates only the part of the packet payload that differs between the request and response. ### 4.2.2 Client-Assisted Hardware Request Direction Modern NICs help scale packet processing by directing packets to different RX queues using hardware features such as Receive-Side Scaling (RSS) and Flow Director (FDir) [12, 19, 41] based on the packet header.
Because each MICA key-value request is an individual packet, we wish to use hardware packet direction to directly send packets to the appropriate queue based upon the key. Doing so is much more efficient than redirecting packets in software. Unfortunately, the NIC alone cannot provide key-based request direction: RSS and FDir cannot classify based on the packet payload, and cannot examine variable length fields such as request keys. **Client assistance:** We instead take advantage of the opportunity to co-design the client and server. The client caches information from a server directory about the operation mode (EREW or CREW), the number of cores, NUMA domains, and NICs, and the number of partitions. The server programs FDir to use the UDP destination port, without hashing (“perfect match filter” [23]), as an index into a table mapping UDP port numbers to a destination RX queue. Key hashing only slightly burdens clients. Using fast string hash functions such as CityHash [8], a single client machine equipped with dual 6-core CPUs on our testbed can generate over 40 M requests/second with client-side key hashing. Clients include the keyhash in requests, and servers reuse the embedded keyhash when they need a keyhash during the request processing to benefit from offloaded hash computation. Client-assisted request direction using NIC hardware allows efficient request processing. Our results in Section 5.5 show that an optimized software-based request direction that receives packets from any core and distributes them to appropriate cores is significantly slower than MICA’s hardware-based approach. ### 4.3 Data Structures MICA, in cache mode, uses circular logs to manage memory for key-value items and lossy concurrent hash indexes to index the stored items. Both data structures exploit cache semantics to provide fast writes and simple memory management. Each MICA partition consists of a single circular log and lossy concurrent hash index.
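Putting the pieces together, the sketch below shows one plausible way a client could derive the partition, bucket, and tag from a 64-bit keyhash and encode the partition in the UDP destination port for FDir. The field widths, partition count, and port base are assumptions for illustration; the text states only that the fields do not overlap and that different UDP port ranges distinguish partitions from cores.

```python
PARTITION_BITS = 4           # top bits -> 16 partitions (assumption)
NUM_BUCKETS = 1 << 20        # per-partition hash buckets (assumption)
TAG_BITS = 16                # tag width (assumption)
UDP_PORT_BASE = 10000        # server-advertised base port (assumption)

def split_keyhash(keyhash: int):
    # Non-overlapping fields of the 64-bit keyhash: high bits select the
    # partition, middle bits the bucket, low bits the tag.
    partition = keyhash >> (64 - PARTITION_BITS)
    bucket = (keyhash >> TAG_BITS) & (NUM_BUCKETS - 1)
    tag = keyhash & ((1 << TAG_BITS) - 1)
    return partition, bucket, tag or 1   # zero tag means "empty", so remap to 1

def dest_udp_port(keyhash: int) -> int:
    # Client side: FDir on the server maps this port to the partition's
    # RX queue, so the request lands on the right core without hashing.
    return UDP_PORT_BASE + (keyhash >> (64 - PARTITION_BITS))

p, b, t = split_keyhash(0xF000_0000_0000_0000)
assert p == 15 and t == 1            # zero tag remapped to one
assert dest_udp_port(0xF000_0000_0000_0000) == UDP_PORT_BASE + 15
```

Because the client computes the keyhash (e.g., with CityHash) and embeds it in the request, the server never needs to hash the key itself.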
MICA provides a store mode with straightforward extensions using segregated fits to allocate memory for key-value items and bulk chaining to convert the lossy concurrent hash indexes into lossless ones. #### 4.3.1 Circular Log MICA stores items in its circular log by appending them to the *tail* of the log (Figure 5). This results in a space-efficient packing. It updates items in-place as long as the new size of the key+value does not exceed the size of the item when it was first inserted. --- ⁴To avoid confusion between partition indices and the core indices, we use different ranges of UDP ports; a partition may be mapped to a core whose index differs from the partition index. --- The size of the circular log is bounded and does not change, so to add a new item to a full log, MICA evicts the oldest item(s) at the head of the log to make space. Each entry includes the key and value length, key, and value. To locate the next item in the log and support item resizing, the entry contains the initial item size, and for fast lookup, it stores the keyhash of the item. The entry has an expire time set by the client to ignore stale data. **Garbage collection and defragmentation:** The circular log eliminates the expensive garbage collection and free space defragmentation that are required in conventional log structures and dynamic memory allocators. Previously deleted items in the log are automatically collected and removed when new items enter the log. Almost all free space remains contiguous between the tail and head. **Exploiting the eviction of live items:** Items evicted at the head are not reinserted into the log even if they have not yet expired. In other words, the log may delete items without clients knowing it. This behavior is valid in cache workloads; a key-value store must evict items when it becomes full. For example, Memcached [32] uses LRU to remove items and reserve space for new items.
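The circular log's FIFO eviction can be sketched with a toy model in which each item occupies one slot; real entries are variable-length and carry a header, keyhash, and expire time, so this is illustrative only, not MICA's layout.

```python
from collections import deque

class CircularLog:
    # Toy model of the circular log: appends go to the tail; when the log
    # is full, the oldest item(s) at the head are evicted to make space.
    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = deque()          # head = left end, tail = right end

    def append(self, key, value):
        while len(self.entries) >= self.capacity:
            self.entries.popleft()      # evict oldest item(s) at the head

        self.entries.append((key, value))

log = CircularLog(capacity=2)
log.append("a", 1)
log.append("b", 2)
log.append("c", 3)                      # log is full: evicts "a" (FIFO)
assert [k for k, _ in log.entries] == ["b", "c"]
```

This "natural" FIFO behavior is the baseline; the reinsertion tricks described next layer LRU-like policies on top of it.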
MICA uses this item eviction to implement common eviction schemes at low cost. Its “natural” eviction is FIFO. MICA can provide LRU by reinserting any requested items at the tail because only the least recently used items are evicted at the head. MICA can approximate LRU by reinserting requested items selectively—by ignoring items recently (re)inserted and close to the tail; this approximation offers eviction similar to LRU without frequent reinserts, because recently accessed items remain close to the tail and far from the head. A second challenge for conventional logs is that any reference to an evicted item becomes dangling. MICA does not store back pointers in the log entry to discover all references to the entry; instead, it provides detection, and removes dangling pointers incrementally (Section 4.3.2). **Low-level memory management:** MICA uses hugepages and NUMA-aware allocation. Hugepages (2 MiB in x86-64) use fewer TLB entries for the same amount of memory, which significantly reduces TLB misses during request processing. Like the network stack, MICA allocates memory for circular logs such that cores access only local memory. Without explicit range checking, accessing an entry that wraps around the end of the log (e.g., an entry beginning at offset $2^{34} - 8$ in a 16 GiB log) could cause an invalid read or segmentation fault by reading off the end of the range. To avoid such errors without range checking, MICA manually maps the virtual memory addresses right after the end of the log to the same physical page as the first page of the log, making the entire log appear contiguous to the software. #### 4.3.2 Lossy Concurrent Hash Index MICA’s hash index locates key-value items in the log using a set-associative cache similar to that used in CPU caches. As shown in Figure 6, a hash index consists of multiple buckets (configurable for the workload), and each bucket has a fixed number of index entries (configurable in the source code; 15 in our prototype to occupy exactly two cache lines).
MICA uses a portion of the keyhashes to determine an item’s bucket; the item can occupy any index entry of the bucket unless there is a duplicate. Each index entry contains partial information for the item: a tag and the item offset within the log. A tag is another portion of the indexed item’s keyhash used for filtering lookup keys that do not match: it can tell whether the indexed item will never match against the lookup key by comparing the stored tag and the tag from the lookup keyhash. We avoid a zero tag value by remapping it to one, because we use the zero value to indicate an empty index entry. Items are deleted by writing zero values to the index entry; the entry in the log will be automatically garbage collected. Note that the parts of keyhashes used for the partition index, the bucket number, and the tag do not overlap. Our prototype uses 64-bit keyhashes to provide sufficient bits. **Lossiness:** The hash index is lossy. When indexing a new key-value item into a full bucket of the hash index, the index evicts an index entry to accommodate the new item. The item to evict is determined by its age: the item whose offset is furthest behind the tail of the log is the oldest (or the least recently used if the log is using LRU), and the associated index entry of that item is reclaimed. This lossy property allows fast insertion. It avoids the expensive resolution of hash collisions that lossless indexes of other key-value stores require [15, 33]. As a result, MICA’s insert speed is comparable to its lookup speed. **Handling dangling pointers:** When an item is evicted from the log, MICA does not delete its index entry. Although it is possible to store back pointers in the log entry, updating the hash index would require a random memory write and is complicated by locking if the index is being accessed concurrently, so MICA does not. As a result, index pointers can “dangle,” pointing to invalid entries.
To address this problem, MICA uses large pointers for head/tail and item offsets. As depicted in Figure 7, MICA’s index stores log offsets that are wider than needed to address the full size of the log (e.g., 48-bit offsets versus the 34 bits needed for a 16 GiB log). MICA detects a dangling pointer before using it by checking if the difference between the log tail and the item offset is larger than the actual log size.$^5$ If the tail wraps around the 48-bit size, however, a dangling pointer may appear valid again, so MICA scans the index incrementally to remove stale pointers. This scanning merely needs to complete a full cycle of the index before the tail wraps around in its wide offset space. The speed at which the tail wraps is determined by its increment rate and the width of the item offset. In practice, full scanning is infrequent even if writes occur very frequently. For example, with 48-bit offsets and writes occurring at $2^{30}$ bytes/second (millions of operations/second), the tail wraps every $2^{48-30}$ seconds. If the index has $2^{24}$ buckets, MICA must scan only $2^{6}$ buckets per second, which adds negligible overhead. **Supporting concurrent access:** MICA’s hash index must behave correctly if the system permits concurrent operations (e.g., CREW). For this, each bucket contains a 32-bit version number. MICA performs reads optimistically using this version counter to avoid generating memory writes while satisfying GET requests [15, 30, 48]. When accessing an item, MICA checks if the initial state of the bucket’s version number is even, and upon completing the data fetch from the index and log, it reads the version number again to check that the final version number equals the initial one. If either check fails, it repeats the read request processing from the beginning. For writes, MICA increments the version number by one before beginning, and increments it by one again after finishing all writes.
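The dangling-pointer check above can be written directly in the 48-bit wrap-around offset space. This is a minimal sketch, not MICA's code; the function names are ours, and the constants follow the 16 GiB ($2^{34}$-byte) log example from the text:

```c
#include <stdbool.h>
#include <stdint.h>

#define OFFSET_BITS 48
#define OFFSET_MASK ((UINT64_C(1) << OFFSET_BITS) - 1)

/* Footnote-5 check: a pointer dangles if the item offset has fallen
 * more than one log size behind the tail, computed modulo 2^48
 * (masking with OFFSET_MASK is equivalent to "+ 2^48, mod 2^48"). */
static bool is_dangling(uint64_t tail, uint64_t item_offset,
                        uint64_t log_size) {
    return ((tail - item_offset) & OFFSET_MASK) > log_size;
}

/* Scan-rate arithmetic from the text: with writes at 2^30 bytes/s,
 * the 48-bit tail wraps every 2^(48-30) = 2^18 seconds, so scanning
 * 2^24 buckets over that period needs only 2^(24-18) = 64 buckets/s. */
```

The third assertion below exercises the wrap-around case: an item offset near $2^{48}$ remains valid after the tail has wrapped past zero, because the subtraction is done modulo $2^{48}$.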
In CRCW mode, which allows multiple writers to access the same bucket, a writer also spins until the initial version number is even (i.e., no other writers to this bucket) using a compare-and-swap instruction. Our MICA prototype uses different code to optimize locking. It uses conventional instructions to manipulate version numbers to exploit memory access ordering on the x86 architecture [48] in CREW mode, where there is only one writer. EREW mode does not require synchronization between cores, so MICA ignores version numbers. Because of such hard-coded optimizations, the current prototype lacks support for runtime switching between the operation modes. **Multi-stage prefetching:** To retrieve or update an item, MICA must perform request parsing, hash index lookup, and log entry retrieval. These stages cause random memory accesses that can significantly lower system performance if cores stall due to CPU cache and TLB misses. MICA uses multi-stage prefetching to interleave computation and memory access. MICA applies memory prefetching for the random memory accesses done at each processing stage in sequence. For example, when a burst of 8 RX packets arrives, MICA fetches packets 0 and 1 and prefetches packets 2 and 3. It decodes the requests in packets 0 and 1, and prefetches buckets of the hash index that these requests will access. MICA continues packet payload prefetching for packets 4 and 5. It then prefetches log entries that may be accessed by the requests of packets 0 and 1 while prefetching the hash index buckets for packets 2 and 3, and the payloads of packets 6 and 7. MICA continues this pipeline until all requests are processed.

### 4.3.3 Store Mode

The store mode of MICA uses segregated fits [42, 47] similar to fast malloc implementations [27], instead of the circular log. Figure 8 depicts this approach.
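Before moving on, the multi-stage prefetching described above can be sketched in miniature. This toy version (the names and the two-level bucket-then-log indirection are invented for illustration) prefetches the data needed by stage $s+1$ while running stage $s$ over a batch; it uses the GCC/Clang `__builtin_prefetch` hint, which is only a hint, so the result is identical with or without it:

```c
#include <stdint.h>

#define BURST 8

/* Toy request: a bucket id, which maps to a log offset,
 * which maps to a log entry (mimicking index -> log lookup). */
struct request { uint32_t bucket_id; };

static uint64_t process_burst(const struct request req[BURST],
                              const uint32_t bucket_off[],  /* bucket id -> log offset */
                              const uint64_t log_entries[]) {
    uint64_t sum = 0;
    uint32_t off[BURST];

    /* Stage 1: prefetch every bucket this burst will touch. */
    for (int i = 0; i < BURST; i++)
        __builtin_prefetch(&bucket_off[req[i].bucket_id], 0, 1);

    /* Stage 2: read the buckets (now likely cached) and prefetch
     * the log entries they point to. */
    for (int i = 0; i < BURST; i++) {
        off[i] = bucket_off[req[i].bucket_id];
        __builtin_prefetch(&log_entries[off[i]], 0, 1);
    }

    /* Stage 3: read the log entries, which are likely in cache by now. */
    for (int i = 0; i < BURST; i++)
        sum += log_entries[off[i]];
    return sum;
}
```

MICA's real pipeline additionally interleaves different packets at different stages within one burst; the staged shape above is the essential idea.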
MICA defines multiple size classes incrementing by 8 bytes covering all supported item sizes, and maintains a freelist for each size class (a linked list of pointers referencing unoccupied memory regions that are at least as large as the size class). When a new item is inserted, MICA chooses the smallest size class that is at least as large as the item size and has any free space. It stores the item in the free space, and inserts any unused region of the free space into a freelist that matches that region’s size. When an item is deleted, MICA coalesces any adjacent free regions using boundary tags [26] to recreate a large free region.

$^5$ (Tail − ItemOffset + $2^{48}$) mod $2^{48}$ > LogSize.

MICA’s segregated fits differ from the simple segregated storage used in Memcached [15, 32]. MICA maintains a unified space for all size classes; in contrast, Memcached’s SLAB allocator dynamically assigns memory blocks to size classes, which effectively partitions the memory space according to size classes. Unlike simple segregated storage, the unified space of MICA eliminates the need to rebalance size classes. Using segregated fits also makes better use of memory because MICA already has partitioning done with keyhashes; a SLAB allocator introducing another partitioning would likely waste memory by allocating a whole block for only a few items, resulting in low memory occupancy. MICA converts its lossy concurrent hash index into a lossless hash index by using bulk chaining. Bulk chaining is similar to the traditional chaining method in hash tables; it adds more memory space to the buckets that contain an excessive number of items. Figure 9 shows the design of the lossless hash index. MICA uses the lossy concurrent hash index as the main buckets and allocates space for separate spare buckets that are fewer than the main buckets.
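Returning to the size classes described above: choosing a class under segregated fits reduces to rounding the item size up to the next multiple of the 8-byte class increment. A minimal sketch, assuming the smallest class is 8 bytes and items are at least 1 byte (the function names are ours):

```c
#include <stdint.h>

#define CLASS_INCREMENT 8  /* size classes step by 8 bytes */

/* Smallest size class at least as large as item_size:
 * round up to the next multiple of CLASS_INCREMENT.
 * Assumes item_size >= 1. */
static uint64_t size_class_of(uint64_t item_size) {
    return (item_size + CLASS_INCREMENT - 1) / CLASS_INCREMENT * CLASS_INCREMENT;
}

/* Index of that class in a freelist array (class 8 -> index 0, ...). */
static uint64_t class_index_of(uint64_t item_size) {
    return size_class_of(item_size) / CLASS_INCREMENT - 1;
}
```

An insertion would consult the freelist at `class_index_of(item_size)` and, if empty, the next larger classes, matching "the smallest size class that is at least as large as the item size and has any free space."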
When a bucket experiences an overflow, whether it is a main bucket or spare bucket, MICA adds an unused spare bucket to the full bucket to form a bucket chain. If there are no more spare buckets available, MICA rejects the new item and returns an out-of-space error to the client. This data structure is friendly to memory access. The main buckets store most of the items (about 95%), keeping the number of random memory reads for an index lookup close to 1; as a comparison, cuckoo hashing [39] used in improved Memcached systems [15] would require 1.5 random memory accesses per index lookup in expectation. MICA also achieves good memory efficiency: because the spare buckets store only overflow items, making the number of spare buckets 10% of the main buckets allows the system to store the entire dataset of 192 Mi items in our experiments (Section 5).

5 Evaluation

We answer four questions about MICA in this section:

- Does it perform well under diverse workloads?
- Does it provide good latency?
- How does it scale with more cores and NIC ports?
- How does each component affect performance?

Our results show that MICA has consistently high throughput and low latency under a variety of workloads. It scales nearly linearly, using CPU cores and NIC ports efficiently. Each component of MICA is needed. MICA achieves 65.6–76.9 million operations/second (Mops), which is over 4–13x faster than the next fastest system; the gap widens as the fraction of write requests increases. MICA is written in 12 K lines of C and runs on x86-64 GNU/Linux. Packet I/O uses the Intel DPDK 1.4.1 [22]. **Compared systems:** We use custom versions of open-source Memcached [32], MemC3 [15], Masstree [30], and RAMCloud [37]. The revisions of the original code we used are: Memcached: 87e2f36; MemC3: an internal version; Masstree: 4fb946; RAMCloud: a06889. Note that some of the compared systems offer additional capabilities that the others lack.
For example, Masstree can handle range queries, and RAMCloud offers low-latency processing on InfiniBand; on the other hand, these key-value stores do not support automatic item eviction as Memcached systems do. Our evaluation focuses on the performance of the standard features (e.g., single-key queries) common to all the compared systems, rather than highlighting the potential performance impact of these semantic differences. **Modifications to compared systems:** We modify the compared systems to use our lightweight network stack to avoid using expensive socket I/O or special hardware (e.g., InfiniBand). When measuring Memcached’s baseline latency, we use its original network stack using the kernel to obtain the latency distribution that typical Memcached deployments would experience. Our experiments do not use any client-side request batching. We also modified these systems to invoke memory allocation functions through our framework if they use hugepages, because the DPDK requests all hugepages from the OS at initialization and would make the unmodified systems inoperable if they request hugepages from the OS; we kept other memory allocations that use no hugepages as-is. Finally, while running experiments, we found that statistics collection in RAMCloud caused lock contention, so we disabled it for better multi-core performance.

5.1 Evaluation Setup

**Server/client configuration:** The MICA server runs on a machine equipped with dual 8-core CPUs (Intel Xeon E5-2680 @2.70 GHz), 64 GiB of total system memory, and eight 10-Gb Ethernet ports (four Intel X520-T2’s). Each CPU has 20 MiB of L3 cache. We disabled logical processor support (“Hyper-Threading”). Each CPU accesses the 32 GiB of the system memory that resides in its local NUMA domain over a quad-channel DDR3-1600 bus. Each CPU socket is directly connected to two NICs using PCIe gen2. Access to hardware resources in the remote NUMA domain uses an interconnect between the two CPUs (Intel QuickPath).
We reserved half of the memory (16 GiB in each NUMA domain) for hugepages regardless of how MICA and the compared systems use hugepages. MICA allocates 16 partitions in the server, and these partitions are assigned to different cores. We configured the cache version of MICA to use approximate LRU to evict items; MICA reinserts any recently accessed item at the tail if the item is closer to the head than to the tail of the circular log. Two client machines with dual 6-core CPUs (Intel Xeon L5640 @2.27 GHz) and two Intel X520-T2’s generate workloads. The server and clients are directly connected without a switch. Each client is connected to the NICs from both NUMA domains of the server, allowing a client to send a request to any server CPU. **Workloads:** We explore different aspects of the systems by varying the item size, skew, and read-write ratio. We use three datasets as shown in the following table: <table> <thead> <tr> <th>Dataset</th> <th>Key Size (B)</th> <th>Value Size (B)</th> <th>Count</th> </tr> </thead> <tbody> <tr> <td>Tiny</td> <td>8</td> <td>8</td> <td>192 Mi</td> </tr> <tr> <td>Small</td> <td>16</td> <td>64</td> <td>128 Mi</td> </tr> <tr> <td>Large</td> <td>128</td> <td>1024</td> <td>8 Mi</td> </tr> </tbody> </table> We use two workload types: uniform and skewed. Uniform workloads use the same key popularity for all requests; skewed workloads use a non-uniform key popularity that follows a Zipf distribution of skewness 0.99, which is the same as YCSB’s [9]. Workloads vary the ratio between GET and PUT: 50% GET (50% PUT) workloads are write-intensive, and 95% GET (5% PUT) workloads are read-intensive. They correspond to YCSB’s A and B workloads, respectively. **Workload generation:** We use our custom key-value request generator, which uses techniques similar to our lightweight network stack to send more than 40 Mops of key-value requests per machine to saturate the link.
It uses approximation techniques for Zipf distribution generation [17, 38] for fast skewed workload generation. To find the maximum meaningful throughput of a system, we adjust the workload generation rate to allow only marginal packet losses (< 1% at any NIC port). We could generate requests at the highest rate to cause best-effort request processing (which can boost measured throughput by more than 10%), as is commonly done in throughput measurement of software routers [12, 19], but we avoid this method because we expect that real deployments of in-memory key-value stores would not tolerate excessive packet losses, and such flooding can distort the intended skew in the workload by causing biased packet losses at different cores. The workload generator does not receive every response from the server. On our client machines, receiving packets whose size is not a multiple of 64 bytes is substantially slower due to an issue in the PCIe bus [18]. The workload generator works around this slow RX by sampling responses to perform fewer packet RX transfers from NIC to CPU. It uses its real source MAC addresses for only a fraction of requests, causing its NIC to drop the responses to the other requests. By looking at the sampled responses, the workload generator can validate that the server has correctly processed the requests. Our server is unaffected by this issue and performs full packet RX.

### 5.2 System Throughput

We first compare the full-system throughput. MICA uses EREW with all 16 cores. However, we use a different number of cores for the other systems to obtain their best throughput, because some of them (Memcached, MemC3, and RAMCloud) achieve higher throughput with fewer cores (Section 5.4). The throughput numbers are calculated from the actual number of responses sent to the clients after processing the requests at the server. We denote the cache version of MICA by MICA-c and the store version by MICA-s.
Figure 10 (top) plots the experiment results using tiny key-value items. MICA performs best, regardless of the skew or the GET ratio. MICA’s throughput reaches 75.5–76.9 Mops for uniform workloads and 65.6–70.5 Mops for skewed ones; its parallel data access does not incur more than a 14% penalty for skewed workloads. MICA uses 54.9–66.4 Gbps of network bandwidth at this processing speed, which is very close to the 66.6 Gbps that our network stack can handle when doing packet I/O only. The next best system is Masstree at 16.5 Mops, while the others are below 6.1 Mops. All systems except MICA suffer noticeably under write-intensive 50% GET. Small key-value items show similar results in Figure 10 (middle). However, the gap between MICA and the other systems shrinks because MICA becomes network bottlenecked while the other systems never saturate the network bandwidth in our experiments. Large key-value items, shown in Figure 10 (bottom), exacerbate the network bandwidth bottleneck, further limiting MICA’s throughput. MICA achieves 12.6–14.6 Mops for 50% GET and 8.6–9.4 Mops for 95% GET; note that MICA shows higher throughput with lower GET ratios, which require less network bandwidth as the server can omit the key and value from the responses.$^6$ Unlike MICA, however, all other systems achieve higher throughput under 95% GET than under 50% GET because these systems are bottlenecked locally, not by the network bandwidth. In these measurements, MICA’s cache and store modes show only minor differences in performance. We will refer to the cache version of MICA as MICA in the rest of the evaluation for simplicity.

$^6$ MICA clients are still allowed to use standard socket I/O in cases where the socket overhead on the client machines is acceptable, because the MICA server and clients use the plain UDP protocol.

**Skew resistance:** Figure 11 compares the per-core throughput under uniform and skewed workloads of 50% GET with tiny items. MICA uses EREW.
Several cores process more requests under the skewed workload than under the uniform workload because they process requests more efficiently. The skew in the workload increases the RX burst size of the most loaded core from 10.2 packets per I/O to 17.3 packets per I/O, reducing its per-packet I/O cost, and the higher data locality caused by the workload skew improves the average cache hit ratio of all cores from 67.8% to 77.8%. A local benchmark in Figure 12 (without network processing) also shows that skewed workloads yield good throughput for local key-value processing due to the data locality. These results further justify the partitioned design of MICA and explain why MICA retains high throughput under skewed workloads. **Summary:** MICA’s throughput reaches 76.9 Mops, at least 4x faster than the next best system. MICA delivers consistent performance across different levels of skew, write-intensiveness, and key-value sizes.

**5.3 Latency**

To show that MICA achieves comparably low latency while providing high throughput, we compare MICA’s latency with that of the original Memcached implementation that uses the kernel network stack. To measure the end-to-end latency, clients tag each request packet with the current timestamp. When receiving responses, clients compare the current timestamp and the timestamp echoed back in the responses. We use uniform 50% GET workloads on tiny items. MICA uses EREW. The client varies the request rate to observe the relationship between throughput and latency. Figure 13 plots the end-to-end latency as a function of throughput; the error bars indicate 5th- and 95th-percentile latency. The original Memcached exhibits almost flat latency up to a certain throughput, whereas MICA shows varied latency depending on the throughput it serves. MICA’s latency lies between 24–52 µs. At a similar latency level of 40 µs, MICA shows 69 Mops, more than two orders of magnitude faster than Memcached.
Because MICA uses a single round trip per request, unlike RDMA-based systems [35], we believe that MICA provides best-in-class low-latency key-value operations. **Summary:** MICA achieves both high throughput and latency near the network minimum.

5.4 Scalability

**CPU scalability:** We now vary the number of CPU cores and compare the end-to-end throughput. We allocate cores evenly to both NUMA domains so that cores can efficiently access NICs connected to their CPU socket. We use skewed workloads on tiny items because it is generally more difficult for partitioned stores to handle skewed workloads. MICA uses EREW. Figure 14 (upper) compares the core scalability of the systems with 50% GET. Only MICA and Masstree perform better with more cores. Memcached, MemC3, and RAMCloud scale poorly, achieving their best throughput at 2 cores. The trend continues for 95% GET requests in Figure 14 (lower); MICA and Masstree scale well as before. The rest also achieve higher throughput, but still do not scale. Note that some systems scale differently from their original papers. For example, MemC3 achieves 5.7 Mops at 4 cores, while the original paper shows 4.4 Mops at 16 cores [15]. This is because using our network stack instead of their network stack reduces I/O cost, which may expose a different bottleneck (e.g., key-value data structures) that can change the optimal number of cores for the best throughput. **Network scalability:** We also change the available network bandwidth by varying the number of NIC ports we use for request processing. Figure 15 shows that MICA again scales well with high network bandwidth, because MICA can use almost all available network bandwidth for request processing. The GET ratio does not affect the result for MICA significantly. This result suggests that MICA can possibly scale further with higher network bandwidth (e.g., multiple 40 Gbps NICs).
MICA and Masstree achieve similar performance under the 95% GET workload when using 2 ports, but Masstree and the other systems do not scale well with more ports. **Summary:** MICA scales well with more CPU cores and more network bandwidth, even under write-intensive workloads where other systems tend to scale worse.

5.5 Necessity of the Holistic Approach

In this section, we demonstrate how each component of MICA contributes to its performance. Because MICA is a coherent system that exploits the synergy between its components, we compare different approaches for one component while keeping the other components the same. **Parallel data access:** We use end-to-end experiments to measure how different data access modes affect system performance. We use tiny items only. Figure 16 shows the end-to-end results. EREW shows consistently good performance. CREW achieves slightly higher throughput than EREW with high GET ratios on skewed workloads (white bars at 95% GET) because, despite the overheads of bucket version management, CREW can use multiple cores to read popular items without incurring excessive inter-core communication. While CRCW performs better than any of the other compared systems (Section 5.2), CRCW offers no benefit over EREW and CREW; this suggests that we should avoid CRCW. **Network stack:** As shown in Section 5.2, switching Masstree to our network stack resulted in much higher throughput (16.5 Mops without request batching) than the throughput from the original paper (8.9 Mops with request batching [30]); this indicates that our network stack provides efficient I/O for key-value processing. The next question is how important it is to use hardware to direct requests for exclusive access in MICA.
To compare with MICA's client-assisted hardware request direction, we implemented software-only request direction: clients send requests to any server core in a round-robin way, and the server cores direct the received requests to the appropriate cores for EREW data access. We use Intel DPDK's queue to implement message queues between cores. We use 50% GET on tiny items. Table 1 shows that software request direction achieves only 40.0–44.1% of MICA's throughput. This is due to the inter-core communication overhead of software request direction. Thus, MICA's hardware request direction is crucial for realizing the benefit of exclusive access. **Key-value data structures:** MICA's circular logs, lossy concurrent hash indexes, and bulk chaining permit high-speed read and write operations with simple memory management. Even CRCW, the slowest data access mode of MICA, outperforms the second best system, Masstree (Section 5.2). We also demonstrate that simply partitioning existing data structures does not grant MICA's high performance. For this, we compare MICA with "partitioned" Masstree, which uses one Masstree instance per core, with its support for concurrent access disabled in the source code. This is similar to MICA's EREW. We also use the same partitioning and request direction scheme. Table 2 shows the result with skewed workloads on tiny items. Partitioned Masstree achieves only 8.2–27.3% of MICA's performance, with the throughput for 50% GET even lower than non-partitioned Masstree's (Section 5.2). This indicates that to make the best use of MICA's parallel data access and network stack, it is important to use key-value data structures that support high-speed writes and remain efficient when partitioned. **In conclusion, the holistic approach is essential:** any missing component significantly degrades performance.
### 6 Related Work

Most DRAM stores are not partitioned: Memcached [32], RAMCloud [37], MemC3 [15], Masstree [30], and Silo [45] all have a single partition for each server node. Masstree and Silo show that partitioning can be efficient under some workloads but is slow under workloads with a skewed key popularity and many cross-partition transactions. MICA exploits burst I/O and locality so that even in its exclusive EREW mode, loaded partitions run faster. It can do so because the simple key-value requests that it targets do not cross partitions. Partitioned systems are fast with well-partitioned data. Memcached on Tilera [6], CPHash [33], and Chronos [25] are partitioned in-memory key-value systems that exclusively access partitioned hash tables to minimize lock contention and cache movement, similar to MICA’s EREW partitions. These systems lack support for other access modes such as MICA’s CREW, which can provide higher throughput under read-intensive skewed workloads. H-Store [44] and VoltDB [46] use single-threaded execution engines that access their own partition exclusively, avoiding expensive concurrency control. Because workload skew can reduce system throughput, they require careful data partitioning, even using machine learning methods [40], and dynamic load balancing [25]. MICA achieves similar throughput under both uniform and skewed workloads without extensive partitioning and load balancing effort, because MICA’s keyhash-based partitioning mitigates the skew and its request processing for popular partitions exploits burst packet I/O and cache-friendly memory access. Several in-memory key-value systems focus on low-latency request processing. RAMCloud achieves 4.9–15.3 µs end-to-end latency for small objects [1], and Chronos exhibits an average latency of 10 µs and a 99th-percentile latency of 30 µs, on low-latency networks such as InfiniBand and Myrinet. Pilaf [35] serves read requests using one-sided RDMA reads on a low-latency network.
Our MICA prototype currently runs on 10-Gb Ethernet NICs whose base latency is much higher [16]; we plan to evaluate MICA on a low-latency network. Prior work studies providing a high-performance reliable transport service on top of low-level unreliable datagram services. The Memcached UDP protocol relies on application-level packet loss recovery [36]. Low-overhead user-level TCP implementations such as mTCP [24] can offer reliable communication to Memcached applications without incurring high performance penalties. Low-latency networks such as InfiniBand often implement hardware-level reliable datagrams [35]. Affinity-Accept [41] uses Flow Director on commodity NIC hardware to load balance TCP connections across multiple CPU cores. Chronos [25] directs remote requests to server cores using client-supplied information, similar to MICA; however, Chronos uses software-based packet classification whose throughput for small key-value requests is significantly lower than MICA’s hardware-based classification. Strict or complex item eviction schemes in key-value stores can be costly enough to reduce system throughput significantly. MemC3 [15] replaces Memcached [32]’s original LRU with a CLOCK-based approximation to avoid contention caused by LRU list management. MICA’s circular log and lossy concurrent hash index use their lossy property to support common eviction schemes at low cost; the lossy concurrent hash index is easily extended to support lossless operations by using bulk chaining. A worthwhile area of future work is applying MICA’s techniques to semantically richer systems, such as those that are durable [37], or provide range queries [13, 30] or multi-key transactions [45]. Our results show that existing systems such as Masstree can benefit considerably simply by moving to a lightweight network stack; nevertheless, because operations in these systems may cross partitions, it remains to be seen how best to harness the speed of exclusively accessed partitions.
7 Conclusion

MICA is an in-memory key-value store that provides high-performance, scalable key-value storage. It provides consistently high throughput and low latency for both read- and write-intensive workloads with uniform or skewed key popularity. We demonstrated high-speed request processing with MICA’s parallel data access to partitioned data, an efficient network stack that delivers remote requests to appropriate CPU cores, and new lossy and lossless data structures that exploit properties of key-value workloads to provide high-speed write operations without complicating memory management.

Acknowledgments

This work was supported by funding from the National Science Foundation under awards CCF-0964474 and CNS-1040801, Intel via the Intel Science and Technology Center for Cloud Computing (ISTC-CC), and the Basic Science Research Program through the National Research Foundation of Korea funded by MSIP (NRF-2013R1A1A1076024). Hyeontaek Lim was supported in part by the Facebook Fellowship. We would like to thank Nick Feamster, John Ousterhout, Dong Zhou, Yandong Mao, Wyatt Lloyd, and our NSDI reviewers for their valuable feedback, and Prabal Dutta for shepherding this paper.

References
Efficiency

Readings: None

The primary goal of this section is to be able to analyze the efficiency of an algorithm.

Algorithms

An *algorithm* is a step-by-step description of *how* to solve a "problem". *Algorithms* are not restricted to computing. For example, every day you might use an algorithm to select which clothes to wear. For most of this course, the "problems" are function descriptions (*interfaces*) and we work with *implementations* of algorithms that solve those problems. The word *algorithm* is named after Muḥammad ibn Mūsā al-Khwārizmī (≈ 800 A.D.).

There are many objective and subjective methods for comparing algorithms:

- How easy is it to understand?
- How easy is it to implement?
- How accurate is it?
- How robust is it? (Can it handle errors well?)
- How adaptable is it? (Can it be used to solve similar problems?)
- How fast (efficient) is it?

In this course, we use efficiency to objectively compare algorithms.

Efficiency

The most common measure of efficiency is *time efficiency*, or how long it takes an algorithm to solve a problem. Unless we specify otherwise, we always mean *time efficiency*. Another efficiency measure is *space efficiency*, or how much space (memory) an algorithm requires to solve a problem. We briefly discuss space efficiency at the end of this module. The *efficiency* of an algorithm may depend on its *implementation*. To avoid any confusion, we always measure the efficiency of a *specific implementation* of an algorithm.

Running time

To quantify efficiency, we are interested in measuring the running time of an algorithm. What unit of measure should we use? Seconds? "My algorithm can sort one billion integers in 9.037 seconds."

- What year did you make this statement?
- What machine & model did you use? (With how much RAM?)
- What computer language & operating system did you use?
- Was that the actual CPU time, or the total time elapsed?
- How accurate is the time measurement? Is the 0.037 relevant?
Measuring *running times* in seconds can be problematic. What are the alternatives? Typically, we measure the number of *elementary operations* required to solve the problem. - In C, we can count the number of operations, or in other words, the number of *operators* executed. - In Racket, we can count the total number of (substitution) *steps* required, although that can be deceiving for built-in functions†. † We revisit the issue of built-in functions later. You are not expected to count the exact number of operations. We only count operations in these notes for illustrative purposes. We introduce some simplification shortcuts soon. Data size What is the number of operations executed for this implementation? ```c int sum_array(const int a[], int len) { int sum = 0; int i = 0; while (i < len) { sum = sum + a[i]; i = i + 1; } return sum; } ``` The running time **depends on the length** of the array. If there are \( n \) items in the array, it requires \( 7n + 3 \) operations. We are always interested in the running time **with respect to** the size of the data. Traditionally, the variable $n$ is used to represent the **size** (or **length**) of the data. $m$ and $k$ are also popular when there is more than one parameter. Often, $n$ is obvious from the context, but if there is any ambiguity you should clearly state what $n$ represents. For example, with lists of strings, $n$ may represent the number of strings in the list, or it may represent the length of all of the strings in the list. The **running time** of an implementation is a **function** of $n$ and is written as $T(n)$. There may also be another **attribute** of the data that is also important. For example, with *trees*, we use $n$ to represent the number of nodes in the tree and $h$ to represent the *height* of the tree. In advanced algorithm analysis, $n$ may represent the number of *bits* required to represent the data, or the length of the *string* necessary to describe the data.
Algorithm Comparison Problem: Write a function to determine if an array of positive integers contains at least \( e \) even numbers and \( o \) odd numbers. ```c // check_array(a, len, e, o) determines if array a // contains at least e even numbers and // at least o odd numbers // requires: len > 0 // elements of a > 0 // e, o >= 0 ``` Homer and Bart are debating the best algorithm (strategy) for implementing `check_array`. Bart just wants to count the total number of odd numbers in the entire array. ```c bool bart(const int a[], int len, int e, int o) { int odd_count = 0; for (int i = 0; i < len; i = i + 1) { odd_count = odd_count + (a[i] % 2); } return (odd_count >= o) && (len - odd_count >= e); } ``` If there are \( n \) elements in the array, \( T(n) = 8n + 7 \). Remember, you are not expected to calculate this precisely. Homer is lazy, and he doesn’t want to check all of the elements in the array if he doesn’t have to. ```c bool homer(const int a[], int len, int e, int o) { // only loop while it's still possible while (len > 0 && e + o <= len) { if (a[len - 1] % 2 == 0) { // even case: if (e > 0) { e = e - 1; // only decrement e if e > 0 } } else if (o > 0) { o = o - 1; } if (e == 0 && o == 0) { return true; } len = len - 1; } return false; } ``` The problem with analyzing Homer’s code is that it depends not just on the length of the array, but on the contents of the array and the parameters e and o. ```c int a[10] = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10}; // these are fast: bool fast1 = homer(a, 10, 0, 11); // false bool fast2 = homer(a, 10, 1, 0); // true // these are slower: bool slow1 = homer(a, 10, 5, 5); // true bool slow2 = homer(a, 10, 6, 4); // false ``` For Homer’s code, the best case is when it can return immediately, and the worst case is when all of the array elements are visited. For Bart’s code, the best case is the same as the worst case.
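The results claimed for the four example calls can be checked mechanically. The sketch below repeats the two implementations from above and wraps the examples in a hypothetical `check_examples` helper (the helper name is ours, not part of the course code):

```c
#include <stdbool.h>

// bart: count all odd numbers, then compare against e and o
bool bart(const int a[], int len, int e, int o) {
  int odd_count = 0;
  for (int i = 0; i < len; i = i + 1) {
    odd_count = odd_count + (a[i] % 2);
  }
  return (odd_count >= o) && (len - odd_count >= e);
}

// homer: scan from the back, stopping as soon as the answer is known
bool homer(const int a[], int len, int e, int o) {
  while (len > 0 && e + o <= len) {
    if (a[len - 1] % 2 == 0) {
      if (e > 0) {
        e = e - 1;
      }
    } else if (o > 0) {
      o = o - 1;
    }
    if (e == 0 && o == 0) {
      return true;
    }
    len = len - 1;
  }
  return false;
}

// check_examples succeeds iff both strategies agree with the
// results claimed for the four example calls above
bool check_examples(void) {
  int a[10] = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10};
  bool ok = true;
  ok = ok && (homer(a, 10, 0, 11) == false) && (bart(a, 10, 0, 11) == false);
  ok = ok && (homer(a, 10, 1, 0) == true)  && (bart(a, 10, 1, 0) == true);
  ok = ok && (homer(a, 10, 5, 5) == true)  && (bart(a, 10, 5, 5) == true);
  ok = ok && (homer(a, 10, 6, 4) == false) && (bart(a, 10, 6, 4) == false);
  return ok;
}
```

Note that the two strategies always agree on the answer; they differ only in how much of the array they inspect.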
\[ \begin{align*} \text{homer} & \quad T(n) = 4 \quad \text{(best case)} \\ & \quad T(n) = 17n + 1 \quad \text{(worst case)} \\ \text{bart} & \quad T(n) = 8n + 7 \quad \text{(all cases)} \end{align*} \] Which implementation is more efficient? Is it more “fair” to compare against the best case or the worst case? Worst case running time Typically, we want to be conservative (pessimistic) and use the worst case. Unless otherwise specified, the running time of an algorithm is the worst case running time. Comparing the worst case, Bart’s implementation ($8n + 7$) is more efficient than Homer’s ($17n + 1$). We may also be interested in the average case running time, but that analysis is typically much more complicated. Big O notation In practice, we are not concerned with the difference between the running times \((8n + 7)\) and \((17n + 1)\). We are interested in the order of a running time. The order is the “dominant” term in the running time without any constant coefficients. The dominant term in both \((8n + 7)\) and \((17n + 1)\) is \(n\), and so they are both “order \(n\)”. To represent orders, we use **Big O notation**. Instead of “order \(n\)”, we use \(O(n)\). We define Big O notation more formally later. The “dominant” term is the term that grows the largest when $n$ is very large ($n \to \infty$). The order is also known as the “growth rate”. In this course, we encounter only a few orders (arranged from smallest to largest): $O(1)$ $O(\log n)$ $O(n)$ $O(n \log n)$ $O(n^2)$ $O(n^3)$ $O(2^n)$ **example: orders** - $2016 = O(1)$ - $100000 + n = O(n)$ - $n + n \log n = O(n \log n)$ - $999n + 0.01n^2 = O(n^2)$ - $\frac{n(n+1)(2n+1)}{6} = O(n^3)$ - $n^3 + 2^n = O(2^n)$ When comparing algorithms, the most efficient algorithm is the one with the lowest order. For example, an $O(n \log n)$ algorithm is more efficient than an $O(n^2)$ algorithm. If two algorithms have the same order, they are considered equivalent. 
Both Homer’s and Bart’s implementations are $O(n)$, so they are equivalent. Big O arithmetic When *adding* two orders, the result is the largest of the two orders. - $O(\log n) + O(n) = O(n)$ - $O(1) + O(1) = O(1)$ When *multiplying* two orders, the result is the product of the two orders. - $O(\log n) \times O(n) = O(n \log n)$ - $O(1) \times O(n) = O(n)$ There is no “universally accepted” Big O notation. In many textbooks, and in this introductory course, the notation $$T(n) = 1 + 2n + 3n^2 = O(1) + O(n) + O(n^2) = O(n^2)$$ is acceptable. In other textbooks, and in other courses, this notation may be too informal. In CS 240 and CS 341 you will study orders and Big O notation much more rigorously. Algorithm analysis An important skill in Computer Science is the ability to analyze a function and determine the order of the running time. In this course, our goal is to give you experience and work toward building your intuition: ```c int sum_array(const int a[], int len) { int sum = 0; for (int i = 0; i < len; ++i) { sum += a[i]; } return sum; } ``` “Clearly, each element is visited once, so the running time of `sum_array` is $O(n)$”. Contract update You should include the **time** (efficiency) of each function that is not $O(1)$ and is not *obviously* $O(1)$. If there is any ambiguity as to how $n$ is measured, it should be specified. ```c // sum_array(const int a[], int len) sums the elements // of array a // time: O(n), n is the len of a ``` Analyzing simple functions First, consider simple functions (without recursion or iteration). ```c int max(int a, int b) { if (a > b) return a; return b; } ``` If no other functions are called, there must be a fixed number of operators. Each operator is \(O(1)\), so the running time is: \[ O(1) + O(1) + \cdots + O(1) = O(1) \] If a simple function calls other functions, its running time depends on those functions. Built-in functions Consider the following two implementations.

```
(define (a-length-two? lst)
  (= 2 (length lst)))

(define (b-length-two? lst)
  (and (cons? lst) (cons? (rest lst))
       (empty? (rest (rest lst)))))
```

The running time of `a-length-two?` is \(O(n)\), while the running time of `b-length-two?` is \(O(1)\). When using a function that is built-in or provided by a module (library) you should always be aware of the running time. Racket running times (numeric) When working with small integers (i.e., valid C integers), the Racket numeric functions are $O(1)$. However, because Racket can handle arbitrarily large numbers it is more complicated. For example, the running time to add two large positive integers is $O(\log n)$, where $n$ is the larger number. Racket running times (lists) Elementary list functions are $O(1)$: cons cons? empty empty? rest first second tenth List functions that process the full list are typically $O(n)$: length last reverse append Abstract list functions (e.g., map, filter) depend on the consumed function, but are $O(n)$ for straightforward $O(1)$ functions. The exception is Racket’s sort, which is $O(n \log n)$. Racket running times (equality) We can assume = (numeric equality) is $O(1)$. `symbol=?` is $O(1)$, but `string=?` is $O(n)$, where $n$ is the length of the shorter string†. Racket’s generic `equal?` is deceiving: its running time is $O(n)$, where $n$ is the “size” of the smaller argument. Because `(member e lst)` depends on `equal?`, its running time is $O(nm)$ where $n$ is the length of the `lst` and $m$ is the size of `e`. † This highlights another difference between symbols & strings. Array efficiency One of the significant differences between arrays and lists is that any element of an array can be accessed in constant time regardless of the index or the length of the array. To access the $i$-th element in an array (e.g., $a[i]$) is always $O(1)$. To access the $i$-th element in a list (e.g., list-ref) is $O(i)$.
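The $O(1)$ versus $O(i)$ contrast can be made concrete in C. The sketch below uses a hypothetical `struct node` linked list with a `list_ref` that counts the links it traverses; none of these names come from the course notes:

```c
#include <stddef.h>
#include <stdlib.h>

// Hypothetical singly linked list node (for illustration only).
struct node {
  int val;
  struct node *next;
};

// build_list(n) builds the list 0, 1, ..., n - 1.
struct node *build_list(int n) {
  struct node *lst = NULL;
  for (int i = n - 1; i >= 0; --i) {
    struct node *nd = malloc(sizeof(struct node));
    nd->val = i;
    nd->next = lst;
    lst = nd;
  }
  return lst;
}

// list_ref(lst, i, steps) returns element i of lst, recording in
// *steps the number of links traversed: O(i) work, unlike a[i],
// which is a single O(1) address computation.
int list_ref(const struct node *lst, int i, int *steps) {
  while (i > 0) {
    lst = lst->next;
    ++*steps;
    --i;
  }
  return lst->val;
}
```

Retrieving element 750 of a 1000-element list traverses 750 links, while `a[750]` on a 1000-element array costs the same as `a[0]`.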
Racket has a vector data type that is very similar to arrays in C.

```
(define v (vector 4 8 15 16 23 42))
```

Like C’s arrays, any element of a vector can be accessed by the vector-ref function in $O(1)$ time. Iterative analysis Iterative analysis uses summations. ```c for (i = 1; i <= n; ++i) { printf("* "); } ``` \[ T(n) = \sum_{i=1}^{n} O(1) = O(1) + \cdots + O(1) = n \times O(1) = O(n) \] Because we are primarily interested in orders, \[ \sum_{i=0}^{n-1} O(x), \quad \sum_{i=1}^{10n} O(x), \quad \text{or } \sum_{i=1}^{n/2} O(x) \] are equivalent* to \(\sum_{i=1}^{n} O(x)\). * unless \(x\) is exponential (e.g., \(O(2^i)\)). Procedure for iteration 1. Work from the *innermost* loop to the *outermost* loop 2. Determine the number of iterations in the loop (in the worst case) in relation to the size of the data \( n \) or an outer loop counter 3. Determine the running time per iteration 4. Write the summation(s) and simplify the expression

```c
sum = 0;
for (i = 0; i < n; ++i) {
  sum += i;
}
```

\[ \sum_{i=1}^{n} O(1) = O(n) \] Common summations $$\sum_{i=1}^{\log n} O(1) = O(\log n)$$ $$\sum_{i=1}^{n} O(1) = O(n)$$ $$\sum_{i=1}^{n} O(n) = O(n^2)$$ $$\sum_{i=1}^{n} O(i) = O(n^2)$$ $$\sum_{i=1}^{n} O(i^2) = O(n^3)$$ The summation index should reflect the *number of iterations* in relation to the *size of the data* and does not necessarily reflect the actual loop counter values. ```c k = n; // n is size of the data while (k > 0) { printf("* "); k -= 10; } ``` There are $n/10$ iterations. Because we are only interested in the *order*, $n/10$ and $n$ are equivalent. \[ \sum_{i=1}^{n/10} O(1) = O(n) \] When the loop counter changes geometrically, the number of iterations is often logarithmic.

```c
k = n; // n is size of the data
while (k > 0) {
  printf("*");
  k /= 10;
}
```

There are \( \log_{10} n \) iterations.
\[ \sum_{i=1}^{\log n} O(1) = O(\log n) \] When working with nested loops, evaluate the *innermost* loop first. ```c for (i = 0; i < n; ++i) { for (j = 0; j < i; ++j) { printf("*"); } printf("\n"); } ``` **Inner loop:** \[ \sum_{j=0}^{i-1} O(1) = O(i) \] **Outer loop:** \[ \sum_{i=0}^{n-1} (O(1) + O(i)) = O(n^2) \] Recurrence relations To determine the running time of a recursive function we must determine the *recurrence relation*. For example, \[ T(n) = O(n) + T(n - 1) \] We can then look up the recurrence relation in a table to determine the *closed-form* (non-recursive) running time. \[ T(n) = O(n) + T(n - 1) = O(n^2) \] In later courses, you *derive* the closed-form solutions and *prove* their correctness. The recurrence relations we encounter in this course are: \[ T(n) = O(1) + T(n - k_1) = O(n) \] \[ T(n) = O(n) + T(n - k_1) = O(n^2) \] \[ T(n) = O(n^2) + T(n - k_1) = O(n^3) \] \[ T(n) = O(1) + T\left(\frac{n}{k_2}\right) = O(\log n) \] \[ T(n) = O(1) + k_2 \cdot T\left(\frac{n}{k_2}\right) = O(n) \] \[ T(n) = O(n) + k_2 \cdot T\left(\frac{n}{k_2}\right) = O(n \log n) \] \[ T(n) = O(1) + T(n - k_1) + T(n - k_1') = O(2^n) \] where \(k_1, k_1' \geq 1\) and \(k_2 > 1\) This table will be provided on exams. Procedure for recursive functions 1. Identify the order of the function *excluding* any recursion 2. Determine the size of the data for the next recursive call(s) 3. Write the full *recurrence relation* (combine step 1 & 2) 4. Look up the closed-form solution in a table ``` (define (sum lon) (cond [(empty? lon) 0] [else (+ (first lon) (sum (rest lon)))])) ``` 1. non-recursive functions: $O(1)$ (*empty?, first, rest*) 2. size of the recursion: $n - 1$ (*rest lon*) 3. $T(n) = O(1) + T(n - 1)$ (combine 1 & 2) 4. $T(n) = O(n)$ (table lookup) Revisiting sorting algorithms No introduction to efficiency is complete without a discussion of sorting algorithms. For simplicity, we only consider sorting numbers. 
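The same procedure can be checked against a recursive C version of the array sum. In the sketch below (the `calls` counter and `sum_rec` are illustrative, not course-provided code), each call does $O(1)$ work and recurses on size $n - 1$, so $T(n) = O(1) + T(n - 1) = O(n)$; an array of length $n$ produces exactly $n + 1$ calls:

```c
// calls counts how many times sum_rec is invoked,
// so we can observe the linear growth directly.
static int calls = 0;

// sum_rec(a, len) sums a[0..len-1] recursively.
// Recurrence: T(n) = O(1) + T(n - 1) = O(n).
int sum_rec(const int a[], int len) {
  ++calls;
  if (len == 0) {
    return 0;          // base case: O(1)
  }
  return a[len - 1] + sum_rec(a, len - 1);  // recurse on size len - 1
}
```

For `len == 5` there are 6 calls (one per suffix length 5, 4, 3, 2, 1, 0), matching the closed form $O(n)$.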
When sorting strings or large data structures, you must also include the time to compare each element. When analyzing sorting algorithms, one measure of running time is the number of comparisons. Selection sort Recall our C implementation of selection sort: ```c void selection_sort(int a[], int len) { int pos = 0; for (int i = 0; i < len - 1; ++i) { pos = i; for (int j = i + 1; j < len; ++j) { if (a[j] < a[pos]) { pos = j; } } swap(&a[i], &a[pos]); // see Section 05 } } ``` \[ T(n) = \sum_{i=1}^{n} \sum_{j=i}^{n} O(1) = O(n^2) \] Insertion sort The analysis for the worst case of insertion sort is also $O(n^2)$. ```c void insertion_sort(int a[], int len) { for (int i = 1; i < len; ++i) { for (int j = i; j > 0 && a[j - 1] > a[j]; --j) { swap(&a[j], &a[j - 1]); } } } ``` $$T(n) = \sum_{i=1}^{n} \sum_{j=1}^{i} O(1) = O(n^2)$$ However, in the best case, the array is already sorted, and the inner loop terminates immediately. This best case running time is $O(n)$. Quick sort In our C implementation of quick sort, we: 1. select the first element of the array as our “pivot”. $O(1)$ 2. move all elements that are larger than the pivot to the back of the array. $O(n)$. 3. move (“swap”) the pivot into the correct position. $O(1)$. 4. recursively sort the “smaller than” sub-array and the “larger than” sub-array. $T(?)$ The analysis of step 4 is a little trickier. When the pivot is in “the middle” it splits the sublists equally, so \[ T(n) = O(n) + 2T\left(\frac{n}{2}\right) = O(n \log n) \] But that is the best case. In the worst case, the “pivot” is the smallest (or largest element), so one of the sublists is empty and the other is of size \((n - 1)\). \[ T(n) = O(n) + T(n - 1) = O(n^2) \] Despite its worst case behaviour, quick sort is still popular and in widespread use. The average case behaviour is quite good and there are straightforward methods that can be used to improve the selection of the pivot. It is part of the C standard library (see Section 12). 
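The four quick sort steps above can be sketched in C as follows. This is one plausible first-element-pivot partition, written for illustration; it is not necessarily the exact implementation the course presents:

```c
static void swap(int *x, int *y) {
  int t = *x;
  *x = *y;
  *y = t;
}

// quick_sort_range sorts a[first..last] in place.
void quick_sort_range(int a[], int first, int last) {
  if (last <= first) return;           // length 0 or 1: already sorted
  int pivot = a[first];                // step 1: pick the pivot, O(1)
  int pos = last;
  for (int i = last; i > first; --i) { // step 2: move larger elements
    if (a[i] > pivot) {                //         to the back, O(n)
      swap(&a[pos], &a[i]);
      --pos;
    }
  }
  swap(&a[first], &a[pos]);            // step 3: pivot into place, O(1)
  quick_sort_range(a, first, pos - 1); // step 4: recurse on both
  quick_sort_range(a, pos + 1, last);  //         sub-arrays
}

void quick_sort(int a[], int len) {
  quick_sort_range(a, 0, len - 1);
}
```

When the pivot lands near the middle, the two recursive calls each see about half the elements, giving the $O(n \log n)$ best case; when the array is already sorted, the first element is always the smallest, giving the $O(n^2)$ worst case.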
Sorting summary <table> <thead> <tr> <th>Algorithm</th> <th>best case</th> <th>worst case</th> </tr> </thead> <tbody> <tr> <td>selection sort</td> <td>$O(n^2)$</td> <td>$O(n^2)$</td> </tr> <tr> <td>insertion sort</td> <td>$O(n)$</td> <td>$O(n^2)$</td> </tr> <tr> <td>quick sort</td> <td>$O(n \log n)$</td> <td>$O(n^2)$</td> </tr> </tbody> </table> From this table, it might appear that insertion sort is the best choice. However, as mentioned with quick sort, the “typical” or “average” case for quick sort is much better than insertion sort. In Section 10, we will see merge sort, which is $O(n \log n)$ in the worst case. Binary search In Section 07, we implemented binary search on a sorted array. ```c int find_sorted(int item, const int a[], int len) { // ... while (low <= high) { mid = (low + high) / 2; // ... if (a[mid] < item) { low = mid + 1; } else { high = mid - 1; } //... } ``` In each iteration, the size of the search range \( n = \text{high} - \text{low} \) was halved, so the running time is: \[ T(n) = \sum_{i=1}^{\log_2 n} O(1) = O(\log n) \] Algorithm Design In this introductory course, the algorithms we develop are mostly straightforward. To provide some insight into *algorithm design*, we introduce a problem that is simple to describe, but hard to solve efficiently. We present four different algorithms to solve this problem, each with a different running time. The maximum subarray problem Problem: Given an array of integers, find the **maximum sum** of any **contiguous** sequence (subarray) of elements. For example, for the following array: | 31 | -41 | 59 | 26 | -53 | 58 | 97 | -93 | -23 | 84 | the maximum sum is 187: | 31 | -41 | 59 | 26 | -53 | 58 | 97 | -93 | -23 | 84 | This problem has many applications, including **pattern recognition** in **artificial intelligence**. 
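Filling in the elided parts of `find_sorted` gives one possible complete version. This is a sketch: the return convention of -1 for "not found" and the overflow-safe midpoint computation are our assumptions, not necessarily the Section 07 code:

```c
// find_sorted(item, a, len) returns an index i with a[i] == item,
// or -1 if item does not occur in the sorted array a.
// time: O(log n), n is the len of a
int find_sorted(int item, const int a[], int len) {
  int low = 0;
  int high = len - 1;
  while (low <= high) {
    int mid = low + (high - low) / 2;  // avoids overflow of low + high
    if (a[mid] == item) {
      return mid;
    } else if (a[mid] < item) {
      low = mid + 1;                   // discard the lower half
    } else {
      high = mid - 1;                  // discard the upper half
    }
  }
  return -1;  // search range is empty: item not present
}
```

Each iteration halves the range `high - low`, which is exactly why the summation above has $\log_2 n$ terms.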
Solution A: $O(n^3)$

```c
// for every start position i and ending position j,
// loop between them (k) summing elements
int max_subarray(const int a[], int len) {
  int maxsofar = 0;
  int sum = 0;
  for (int i = 0; i < len; ++i) {
    for (int j = i; j < len; ++j) {
      sum = 0;
      for (int k = i; k <= j; ++k) {
        sum += a[k];
      }
      maxsofar = max(maxsofar, sum);
    }
  }
  return maxsofar;
}
```

$T(n) = \sum_{i=1}^{n} \sum_{j=i}^{n} \sum_{k=i}^{j} O(1) = O(n^3)$

Solution B: $O(n^2)$

```c
// for every start position i,
// check if the sum from i...j is the max
int max_subarray(const int a[], int len) {
  int maxsofar = 0;
  int sum = 0;
  for (int i = 0; i < len; ++i) {
    sum = 0;
    for (int j = i; j < len; ++j) {
      sum += a[j];
      maxsofar = max(maxsofar, sum);
    }
  }
  return maxsofar;
}
```

\[ T(n) = \sum_{i=1}^{n} \sum_{j=i}^{n} O(1) = O(n^2) \]

Solution C: $O(n \log n)$ We only describe this recursive *divide and conquer* approach. 1. Find the midpoint position $m$. $O(1)$ 2. Find (a) the maximum subarray from $(0...m-1)$, and (b) the maximum subarray from $(m+1...\text{len}-1)$. $2T(n/2)$ 3. Find (c) the maximum subarray that includes $m$. $O(n)$ 4. Find the maximum of (a), (b) and (c). $O(1)$ $T(n) = O(n) + 2T(n/2) = O(n \log n)$

Solution D: $O(n)$

```c
// for each position i, keep track of
// the maximum subarray ending at i
int max_subarray(const int a[], int len) {
  int maxsofar = 0;
  int maxendhere = 0;
  for (int i = 0; i < len; ++i) {
    maxendhere = max(maxendhere + a[i], 0);
    maxsofar = max(maxsofar, maxendhere);
  }
  return maxsofar;
}
```

Space complexity The *space complexity* of an algorithm is the amount of additional memory that the algorithm requires to solve the problem. While we are mostly interested in *time complexity*, there are circumstances where space is more important. If two algorithms have the same time complexity but different space complexity, it is likely that the one with the lower space complexity is faster. Consider the following two Racket implementations of a function to sum a list of numbers.
```
(define (sum lst)
  (cond [(empty? lst) 0]
        [else (+ (first lst) (sum (rest lst)))]))

(define (asum lst)
  (define (asum/acc lst sofar)
    (cond [(empty? lst) sofar]
          [else (asum/acc (rest lst) (+ (first lst) sofar))]))
  (asum/acc lst 0))
```

Both functions return the same result and both functions have a time complexity $T(n) = O(n)$. The significant difference is that `asum` uses accumulative recursion. If we examine the substitution steps of `sum` and `asum`, we get some insight into their differences. ``` (sum '(1 2 3)) => (+ 1 (sum '(2 3))) => (+ 1 (+ 2 (sum '(3)))) => (+ 1 (+ 2 (+ 3 (sum empty)))) => (+ 1 (+ 2 (+ 3 0))) => (+ 1 (+ 2 3)) => (+ 1 5) => 6 ``` ``` (asum '(1 2 3)) => (asum/acc '(1 2 3) 0) => (asum/acc '(2 3) 1) => (asum/acc '(3) 3) => (asum/acc empty 6) => 6 ``` The `sum` expression “grows” to $O(n)$ pending additions, but the `asum` expression does not use any additional space. The measured run-time of `asum` is *significantly* faster than `sum` (in an experiment with a list of one million 1’s, over 40 times faster). `sum` uses $O(n)$ space, whereas `asum` uses $O(1)$ space. But **both** functions make the **same** number of recursive calls; how is this explained? The difference is that `asum` uses **tail recursion**. A function is **tail recursive** if the recursive call is always the **last expression** to be evaluated (the “tail”). Typically, this is achieved by using accumulative recursion and providing a partial result as one of the parameters. With tail recursion, the previous stack frame can be **reused** for the next recursion (or the previous frame can be discarded before the new stack frame is created). Tail recursion is more space efficient and avoids stack overflow. Many modern C compilers detect and take advantage of tail recursion. Big O revisited We now revisit *Big O notation* and define it more formally. \[ O(g(n)) \text{ is the set of all functions whose “order” is less than or equal to } g(n).
\] \[ \begin{align*} n^2 & \in O(n^{100}) \\ n^3 & \in O(2^n) \end{align*} \] While you can say that \( n^2 \) is in the set \( O(n^{100}) \), it’s not very useful information. In this course, we always want the **most appropriate** order, or in other words, the **smallest** correct order. Big O describes the *asymptotic* behaviour of a function. This is **different** from describing the **worst case** behaviour of an algorithm. Many confuse these two topics but they are completely **separate concepts**. You can asymptotically define the best case and the worst case behaviour of an algorithm. For example, the best case of insertion sort is $O(n)$, while the worst case is $O(n^2)$. A slightly more formal definition of Big O is \[ f(n) \in O(g(n)) \iff f(n) \leq c \cdot g(n) \] for large \( n \) and some positive number \( c \). This definition makes it clear why we “ignore” constant coefficients. For example, \[ 9n \in O(n) \quad \text{for } c = 10, \quad 9n \leq 10n, \text{ and} \] \[ 0.01n^3 + 1000n^2 \in O(n^3) \quad \text{for } c = 1001, \quad 0.01n^3 + 1000n^2 \leq 1001n^3 \] The full definition of Big O is \[ f(n) \in O(g(n)) \iff \exists c, n_0 > 0, \forall n \geq n_0, f(n) \leq c \cdot g(n) \] \( f(n) \) is in \( O(g(n)) \) if there exists a positive \( c \) and \( n_0 \) such that for any value of \( n \geq n_0 \), \( f(n) \leq c \cdot g(n) \). In later CS courses, you will use the formal definition of Big O to prove algorithm behaviour more rigorously. There are other asymptotic functions in addition to Big O.
(for each of the following, \( \exists n_0 > 0, \forall n \geq n_0 \ldots \)) \[ \begin{align*} f(n) \in \omega(g(n)) &\iff \forall c > 0, c \cdot g(n) \leq f(n) \\ f(n) \in \Omega(g(n)) &\iff \exists c > 0, c \cdot g(n) \leq f(n) \\ f(n) \in \Theta(g(n)) &\iff \exists c_1, c_2 > 0, c_1 \cdot g(n) \leq f(n) \leq c_2 \cdot g(n) \\ f(n) \in O(g(n)) &\iff \exists c > 0, f(n) \leq c \cdot g(n) \\ f(n) \in o(g(n)) &\iff \forall c > 0, f(n) \leq c \cdot g(n) \end{align*} \] \(O(g(n))\) is often used when \( \Theta(g(n)) \) is more appropriate. Goals of this Section At the end of this section, you should be able to: - use the new terminology introduced (e.g., algorithm, time efficiency, running time, order) - compute the order of an expression - explain and demonstrate the use of Big O notation and how \( n \) is used to represent the size of the data - determine the “worst case” running time for a given implementation - deduce the running time for many built-in functions - analyze a recursive function, determine its recurrence relation and look up its closed-form running time in a provided lookup table - analyze an iterative function and determine its running time - explain and demonstrate the use of the four sorting algorithms presented - analyze your own code to ensure it achieves a desired running time - describe the formal definition of Big O notation and its asymptotic behaviour - explain space complexity, and how it relates to tail recursion - use running times in your contracts
Effective Interactive Proofs for Higher-Order Imperative Programs The Harvard community has made this article openly available. | Published Version | http://portal.acm.org/citation.cfm?id=1596550.1596565&coll=GUIDE&dl=GUIDE&type=series&idx=SERIES824&part=series&WantType=Proceedings&title=ICFP&CFID=83733044&CFTOKEN=93852205 | | Accessed | November 28, 2017 10:01:27 PM EST | | Citable Link | http://nrs.harvard.edu/urn-3:HUL.InstRepos:4686803 | | Terms of Use | This article was downloaded from Harvard University's DASH repository, and is made available under the terms and conditions applicable to Open Access Policy Articles, as set forth at http://nrs.harvard.edu/urn-3:HUL.InstRepos:dash.current.terms-of-use#OAP | Effective Interactive Proofs for Higher-Order Imperative Programs Adam Chlipala Gregory Malecha Greg Morrisett Avraham Shinnar Ryan Wisnesky Harvard University, Cambridge, MA, USA {adamc, gmalecha, greg, shinnar, ryan}@cs.harvard.edu Abstract We present a new approach for constructing and verifying higher-order, imperative programs using the Coq proof assistant. We build on the past work on the Ynot system, which is based on Hoare Type Theory. That original system was a proof of concept, where every program verification was accomplished via laborious manual proofs, with much code devoted to uninteresting low-level details. In this paper, we present a re-implementation of Ynot which makes it possible to implement fully-verified, higher-order imperative programs with reasonable proof burden. At the same time, our new system is implemented entirely in Coq source files, showcasing the versatility of that proof assistant as a platform for research on language design and verification. Both versions of the system have been evaluated with case studies in the verification of imperative data structures, such as hash tables with higher-order iterators.
The verification burden in our new system is reduced by at least an order of magnitude compared to the old system, by replacing manual proof with automation. The core of the automation is a simplification procedure for implications in higher-order separation logic, with hooks that allow programmers to add domain-specific simplification rules. We argue for the effectiveness of our infrastructure by verifying a number of data structures and a packrat parser, and we compare to similar efforts within other projects. Compared to competing approaches to data structure verification, our system includes much less code that must be trusted; namely, about a hundred lines of Coq code defining a program logic. All of our theorems and decision procedures have or build machine-checkable correctness proofs from first principles, removing opportunities for tool bugs to create faulty verifications.

Categories and Subject Descriptors F.3.1 [Logics and meanings of programs]: Mechanical verification; D.2.4 [Software Engineering]: Correctness proofs, formal methods, reliability

General Terms Languages, Verification

Keywords functional programming, interactive proof assistants, dependent types, separation logic

1. Introduction

A key goal of type systems is to prevent "bad states" from arising in the execution of programs. However, today's widely-used type systems lack the expressiveness needed to catch language-level errors, such as a null-pointer dereference or an out-of-bounds array index, let alone library- and application-specific errors such as removing an element from an empty queue, failing to maintain the invariants of a balanced tree, or forgetting to release a critical resource such as a database connection.
For safety- and security-critical code, a type system should ideally let the programmer assign types to libraries such that client code cannot suffer from these problems, and, in the limit, the type system should make it possible for programmers to verify that their code is correct. There are many recent attempts to extend the scope of type systems to address a wider range of safety properties. Representative examples include ESC/Java (Flanagan et al. 2002), Epigram (McBride and McKinna 2004), Spec# (Barnett et al. 2004), ATS (Chen and Xi 2005), Concoqtion (Pasalic et al. 2007), Sage (Gronski et al. 2006), Agda (Norell 2007), and Ynot (Nanevski et al. 2008). Each of these systems integrates some form of specification logic into the type system in order to rule out a wider range of truly bad states. However, in the case of ESC/Java, Spec#, and Sage, the program logic is too weak to support full verification because these systems rely completely upon provers to discharge verification conditions automatically. While there have been great advances in the performance of automated provers, in practice, they can only handle relatively shallow fragments of first-order logic. Thus, programmers are frustrated when correct code is rejected by the type-checker. For example, none of these systems is able to prove that an array index is in bounds when the constraints step outside quantifier-free linear arithmetic. In contrast, Agda, ATS, Concoqtion, Epigram, and Ynot use powerful, higher-order logics that support a much wider range of policies including (partial) correctness. Furthermore, in the case of Ynot, programmers can define and use connectives in the style of separation logic (Reynolds 2002) to achieve simple, modular specifications of higher-order imperative programs. For example, a recent paper (Nanevski et al.
2008) coauthored by some of the present authors describes how Ynot was used to construct fully-verified implementations of data structures such as queues, hash tables, and splay trees, including support for higher-order iterators that take effectful functions as arguments. The price paid for these more powerful type systems is that, in general, programmers must provide explicit proofs to convince the type-checker that code is correct. Unfortunately, explicit proofs can be quite large when compared to the code. For example, in the Ynot code implementing dequeue for imperative queues, only 7 lines of program code are required, whereas the proof of correctness is about 70 lines. This paper reports our experience re-designing and re-implementing Ynot to dramatically reduce the burden of writing and maintaining the necessary proofs for full verification. Like the original Ynot, our system is based on the ideas of Hoare Type Theory (Nanevski et al. 2006) and is realized as an axiomatic extension of the Coq proof assistant (Bertot and Castéran 2004). This allows us to inherit the full power of Coq’s dependent types for writing code, specifications, and proofs, and it allows us to use Coq’s facility for extraction to executable ML code. However, unlike in the previous version, we have taken advantage of Coq’s tactic language, Ltac, to implement a set of parameterized procedures for automatically discharging, or at least simplifying, the separation logic-style verification conditions. The careful design of these procedures makes it possible for programmers to teach the prover about new domains as they arise. We describe this new implementation of Ynot and report on our experience implementing and verifying various imperative data structures including stacks, queues, hash tables, binomial trees, and binary search trees. When compared with the previous version of Ynot, we observe roughly an order of magnitude reduction in proof size.
In most cases, to realize automation, programmers need only prove key lemmas regarding the abstractions used in their interfaces and plug these lemmas into our extensible tactics. Additionally, we show that the tactics used to generate the proofs are robust to small changes in the code or specifications. In the next section, we introduce the new Ynot in tutorial style. Next, we describe the automation tactics that we built, report on further evaluation of our system via case studies, compare with related work, and conclude.

1.1 Coq as an Extensible Automated Theorem Prover

Almost everyone familiar with Coq associates it with a particular style of proof development, which might be called the “video game” approach, after a comment by Xavier Leroy. A theorem is proved in many steps of manual interaction, where Coq tells the user which goals remain to be proved, the user enters a short command that simplifies the current goal somewhat, and the process repeats until no goals remain. One of our ancillary aims in this paper is to expose a broad audience to a more effective proof style. Coq provides very good support for fully automatic proving, via its domain-specific programming language Ltac (Delahaye 2000). This support can be mixed-and-matched with more manual proving, and it is usually the case that a well-written development starts out more manual and gradually transforms to a final form where no sequential proof steps are spelled out beyond which induction principle to use. Proof scripts of that kind often adapt without change to alterations in specifications and implementations. We believe that awareness of this style is one of the crucial missing pieces blocking widespread use of proof assistants. We hope that the reader will agree that some of the examples that follow provide evidence that, for programmers with a few years of training using proof assistants, imperative programming with correctness verification need not be much harder than programming in Haskell.
2. The Ynot Programming Environment

To a first approximation, Coq can be thought of as a functional programming language like Haskell or ML, but with support for dependent types. For instance, one can have operations with types such as:

\[ \text{div} : \text{nat} \rightarrow \text{forall } n : \text{nat},\ n \neq 0 \rightarrow \text{nat} \]

which uses dependency to capture the fact that div can only be called when a proof can be supplied that the second argument is non-zero. One can also write functions such as:

Definition avg (x : list nat) : nat :=
  let sum := fold plus 0 x in
  let len := length x in
  match eq_nat_dec len 0 with
  | inl (pf1 : len = 0) => 0
  | inr (pf2 : len <> 0) => div sum len pf2
  end.

This function averages the values in a list of natural numbers. It has a normal type like you might find in ML, and its implementation begins in an ML-like way, using a higher-order fold function. The interesting part is the match expression. We match on the result of a call to eq_nat_dec, a dependently-typed natural number comparison function. This function returns a sum type with an equality proof in one branch and an inequality proof in the other. We bind a name for each proof explicitly in the pattern for each match case. The proof that len is not zero is passed to div to justify the safety of the operation. All Coq functions have to be pure—terminating without side effects. This is necessary to ensure that proofs really are proofs, with no spurious invalid “proofs by infinite loop.” Ynot extends Coq with support for side-effecting computations. Similarly to Haskell, we introduce a monadic type constructor ST: a computation of type ST T might diverge and might have side effects, but, if it does return, it returns a value of type T. The ST type family provides a safe way to keep the effectful computations separate from the pure computations.
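For concreteness, here is one way a function with div's dependent type could be realized. This sketch is ours, not from the paper; it simply ignores the proof argument and delegates to the standard library's Nat.div (available in recent Coq versions), though a richer definition could make real use of the proof:

```coq
(* Hypothetical sketch: a definition matching the dependent type of div.
   The non-zero proof pf is accepted but not inspected here. *)
Definition div (m : nat) (n : nat) (pf : n <> 0) : nat :=
  Nat.div m n.
```

Callers such as avg must still produce the proof, so the type discipline is enforced at every call site even though this particular body discards it.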
Unlike Haskell’s IO monad, the ST type family is parameterized by a pre- and post-condition, which can be used to describe the effects of the computation on a mutable store. Alternatively, one can think of the axiomatic base of Ynot as a fairly standard Hoare logic. The main difference of our logic from usual presentations is that it is designed to integrate well with Coq’s functional programming language. Therefore, instead of defining a language of commands, we formalize a language of expressions in the style of the IO monad. A program derivation is of the form \(\{P\}\ e\ \{Q\}\), where \(P\) is a pre-condition predicate over heaps, and \(Q\) is a post-condition predicate over an initial heap, the value that results from evaluating \(e\), and a final heap. For instance, where we write \(\text{sel}\) and \(\text{upd}\) for the heap selection and update operations used in the ESC tools (Flanagan et al. 2002), we can derive the following facts, where \(p_1\) and \(p_2\) are pointer variables bound outside of the commands that we are verifying.

\[ \{\lambda h.\ \top\}\ \text{return}(1)\ \{\lambda h, v, h'.\ h' = h \land v = 1\} \]

and

\[ \{\lambda h.\ \text{sel}(h, p_1) = p_2\}\quad x \leftarrow\ !p_1;\ x := 1\quad \{\lambda h, v, h'.\ \text{sel}(h, p_1) = p_2 \land h' = \text{upd}(h, p_2, 1)\} \]

Unlike other systems, Ynot does not distinguish between programs and derivations. Rather, the two are combined into one dependent type family, whose indices give the specifications of programs. For instance, the type of the “return” example would be:

\[ \text{ST nat } (\text{fun } h \Rightarrow \text{True})\ (\text{fun } h\ v\ h' \Rightarrow h' = h \land v = 1) \]

Heaps are represented as functions from pointers to dynamically-typed packages, which are easy to implement in Coq with an inductive type definition. The pointer read rule enforces that the heap value being read has the type that the code expects. The original Ynot paper (Nanevski et al.
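The way a result type, a pre-condition, and a binary post-condition combine in one ST type can be illustrated with a small sketch. This is ours, with illustrative names, and it glosses over the dynamic typing of heap cells by treating sel as returning a nat:

```coq
(* Sketch: a computation that increments the nat stored at pointer p.
   The first ST argument is the result type; the second is the
   precondition over the initial heap; the third is the binary
   postcondition over initial heap, result value, and final heap. *)
Parameter incr : forall p : ptr,
  ST unit
     (fun h => exists n : nat, sel h p = n)
     (fun h (_ : unit) h' => h' = upd h p (S (sel h p))).
```

A client may only run such a computation in a state satisfying the precondition, and may assume the postcondition afterward, exactly as in a Hoare triple.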
2008) contains further details of the base program logic.

2.1 A Derived Separation Logic

Direct reasoning about heaps leads to very cumbersome proof obligations, with many sub-proofs that pairs of pointers are not equal. Separation logic (Reynolds 2002) is the standard tool for reducing that complexity. The previous Ynot system built a separation logic on top of the axiomatic foundation, and we do the same here. We introduce no new inductive type of separation logic formulas. Instead, we define functions that operate on arbitrary predicates over heaps, with the intention that we will only apply these functions on separation-style formulas. Nonetheless, it can be helpful to think of our assertion language as defined by:

\[ P ::= [\phi] \mid x \mapsto y \mid P * P \mid \exists x.\, P \]

For any pure Coq proposition \(\phi\), \([\phi]\) is the heap predicate that asserts that \(\phi\) is true and the heap is empty. We write \(\text{emp}\) as an abbreviation for \([\text{True}]\), which asserts only that the heap is empty. \(x \mapsto y\) asserts that the heap contains only a mapping from \(x\) to \(y\). \(P_1 * P_2\) asserts that the heap can be broken into two heaps \(h_1\) and \(h_2\) with disjoint domains, such that \(h_1\) satisfies \(P_1\) and \(h_2\) satisfies \(P_2\). The final clause provides existential quantification. The embedding in Coq provides much more expressive formulas than in most systems based on separation logic. Not only can any pure proposition be injected with \([\cdot]\), but we can also use arbitrary Coq computation to build impure assertions. For instance, we can model deterministic disjunction with pattern-matching on values of algebraic datatypes, and we can include calls to custom recursive functions that return assertions.
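A plausible rendering of these connectives as ordinary Coq functions on heap predicates might look as follows. This is our sketch, assuming primitive types heap, ptr, and dynamic with operations empty, singleton, disjoint, and join, none of which are named in the text:

```coq
(* Sketch: assertions are just predicates over heaps. *)
Definition hprop := heap -> Prop.

Definition emp : hprop := fun h => h = empty.

(* [P] : the pure proposition P holds and the heap is empty. *)
Definition pure (P : Prop) : hprop := fun h => P /\ h = empty.

(* p |-> v : the heap contains exactly the mapping from p to v. *)
Definition ptsto (p : ptr) (v : dynamic) : hprop :=
  fun h => h = singleton p v.

(* P * Q : the heap splits into disjoint parts satisfying P and Q. *)
Definition star (P Q : hprop) : hprop :=
  fun h => exists h1, exists h2,
    disjoint h1 h2 /\ h = join h1 h2 /\ P h1 /\ Q h2.
```

Because each connective is an ordinary definition rather than a constructor of an inductive syntax, any Coq computation that produces an hprop is automatically a legal "formula."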
We need no special support in the assertion language to accommodate this, and Coq’s theorem-proving support for reasoning about pattern-matching recursive functions can be used without modification. If we had defined an inductive type of specifications, we would need to encode most of the relevant Coq features explicitly. For instance, to allow pattern matching that produces specifications, our inductive type would need a constructor standing for dependent pattern matching, which is quite a tall order on its own. Perhaps surprisingly, we have met with general success in implementing realistic examples using just these connectives. Standard uses of other connectives can often be replaced by uses of higher-order features, and the connectives that we use are particularly amenable to automation. In Section 2.2, we try to give a flavor of how to encode disjunction, in the context of a particular example. Fully-automated systems like Smallfoot (Berdine et al. 2005) build in restrictions similar to ours, but it surprised us that we needed little more to do full correctness verification.

2.1.1 The Importance of Computational Irrelevance

What we have described so far is the same as in the original Ynot work. The primary departure of our new system is that we use a more standard separation logic. The old Ynot separation logic used binary post-conditions that may refer to both the initial and final heaps. (In both systems, specifications may refer to computation result values, so we avoid counting those in distinguishing between “unary” and “binary” post-conditions.) This is in stark contrast to traditional separation logics, where all assertions are separation formulas over a single heap, and all verification proof obligations are implications between such assertions. The utility of this formalism has been borne out in the wealth of tools that have used separation logic for automated verification.
In contrast, proofs of the binary post-conditions in the old Ynot tended to involve at least tens of steps of manual proof per line of program code. Today, even pencil-and-paper proofs about relationships between multiple heaps can draw on no logical formalism that comes close to separation logic in crispness or extent of empirical validation. While binary post-conditions are strictly more expressive than unary post-conditions, the separation logic community has developed standard techniques for mitigating the problem. To make up for this lost expressiveness, we need, in effect, to move to a richer base logic. The key addition that lets us use a more standard formulation is the feature of computationally-irrelevant variables, which correspond to specification variables (also known as “ghost variables”) in standard separation logic. Such variables may be mentioned in assertions and proofs only, and an implementation must enforce that they are not used in actual computation. Coq*, a system based on the Implicit Calculus of Constructions (Barras and Bernardo 2008), supports this feature natively. From a theoretical standpoint, it would be cleanest to implement Ynot as a Coq* library. However, in implementing the original Ynot system, we hesitated to switch to this nonstandard branch of the Coq development tree. In designing the new system, we felt the same trepidation, since we might encounter difficulties using libraries written for the standard Coq system, and the users of our library would need to install an unusual version of Coq. We hope that, in the long term, the new Coq* features will become part of the standard Coq distribution. For now, we use an encoding of computationally-irrelevant variables that is effective in standard Coq, modulo some caveats that we discuss below.
Our reimplementation employs the trick of representing specification variables in types that are marked as “proofs” instead of “programs,” such that we can take advantage of Coq’s standard restrictions on “information flow” from proofs to programs. Concretely, the Coq standard library has for some time contained a type family called inhabited, defined by:

Inductive inhabited (A : Type) : Prop :=
  inhabits : A -> inhabited A.

This code demonstrates Coq’s standard syntax for inductive type definitions, which is quite similar to the syntax for algebraic datatype definitions in ML and Haskell. This type family has one parameter A of type Type, which can be thought of as the type of all types. The constructor inhabits lets us inject any value into inhabited. While the original value may have an arbitrary type, the inhabited package has a type in the universe Prop, the universe of logical propositions. Terms whose types are in this universe are considered to be proofs and are erased by program extraction. We will see in the following examples how this encoding necessitates some mildly cumbersome notation around uses of irrelevant variables. Further, to reason effectively about irrelevant variables, we need to assert without proof an axiom stating that the constructor inhabits is injective.

Axiom pack_injective : forall (T : Set) (x y : T),
  inhabits x = inhabits y -> x = y.

Our library additionally assumes the standard axiom of function extensionality (“functions are equal if they agree at all inputs”) and the very technical “unicity of equality proofs” axiom that is included in Coq’s standard library. This pair of axioms has been proved consistent for Coq’s logic, and we could avoid appealing to extensionality at the cost of more proving work in the library, by formalizing heaps as lists instead of functions.
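The erasure story can be seen in miniature: a value hidden with inhabits cannot be recovered by a program, but a proof may unpack it freely, since Coq allows a Prop to be eliminated only when the goal is itself a Prop. The following sketch is ours, not from the paper:

```coq
(* Any value can be hidden in the Prop universe, where extraction
   will erase it. *)
Definition hide (T : Type) (v : T) : inhabited T := inhabits v.

(* A proof (but not a program) may look inside the package, because
   the goal below lives in Prop, so destructing i is permitted. *)
Lemma unpack_in_proof : forall (T : Type) (i : inhabited T),
  exists v : T, i = inhabits v.
Proof.
  intros T i; destruct i as [v]; exists v; reflexivity.
Qed.
```

Attempting the same destruct in a definition whose result lives in Type or Set is rejected by the type-checker, which is precisely the "information flow" restriction the encoding relies on.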
Such a change would be invisible to most users of the library, who only need to use standard theorems proved about the heap model. However, the pack injectivity axiom contradicts the axiom of proof irrelevance (which we do not use in any of our developments, but which is popular among Coq users), and it is an open question in the Coq community whether this axiom is consistent with Coq’s logic even by itself. Past work built a denotational model for Ynot minus this feature (Petersen et al. 2008), and the architects of that model are now considering how to add irrelevance, which would complete the foundational story for the framework that we use in this paper. We hope that the experiences we report here can help to justify the inclusion of irrelevance as a core Coq feature.

2.1.2 The Rules of the Separation Logic

Figure 1 presents the main rules of our separation logic. The notable divergence from common formulations is in the use of existential quantifiers in the rules for freeing, reading, and writing. These differences make sense because Ynot is implemented within a constructive logic. Coq’s constructivity is inspired by the Curry-Howard isomorphism, where programs and proofs can be encoded in the same syntactic class. A more standard, classical separation logic would probably require that, in the rule for free, the value \(v\) pointed to by \(p\) be provided as an argument to the proof rule. In constructive logic, such a value can only be produced when it can be computed by an algorithm, just as a functional program may only refer to a value that it has said how to compute. Additionally, we would not be able to use any facts implied by the current heap assertion to build one of these rule witnesses, and perhaps the witness can only be proved to exist using such facts. The explicit existential quantifier frees us to reason inside the assertion language in finding the witness.
Because it uses quantification in this way, the “read” rule must also take a kind of explicit framing condition. This condition is parameterized by the value being read from the heap, making it a kind of description of the neighborhood around that value in the heap. More standard separation logics force the exact value being read to be presented as an argument to the proof rule, but here we want to allow verification of programs where the exact value to read cannot be computed from the pieces of pure data that are in scope.

---
Module Type STACK.
  Parameter t : Set -> Set.
  Parameter rep : forall T : Set, t T -> list T -> hprop.
  Parameter new : forall T : Set,
    Cmd emp (fun s : t T => rep s nil).
  Parameter free : forall (T : Set) (s : t T),
    Cmd (rep s nil) (fun _ : unit => emp).
  Parameter push : forall (T : Set) (s : t T) (x : T) (ls : [list T]),
    Cmd (ls ~~ rep s ls) (fun _ : unit => ls ~~ rep s (x :: ls)).
  Parameter pop : forall (T : Set) (s : t T) (ls : [list T]),
    Cmd (ls ~~ rep s ls)
        (fun xo : option T => ls ~~ match xo with
           | None => [ls = nil] * rep s ls
           | Some x => Exists ls' :@ list T,
               [ls = x :: ls'] * rep s ls'
         end).
End STACK.

Figure 2. The signature of an imperative stack module
---

We want to emphasize that the changes we have made in the Ynot separation logic have no effect on the theory behind the systems. In both the old and new systems, a separation logic is defined on top of the base Hoare logic with binary post-conditions, introducing no new axioms. Here, we use the same base logic as in the past work, so the past investigations into its metatheory (Petersen et al. 2008) continue to apply. The sole metatheoretical wrinkle is the one which we discussed above, involving computational irrelevance, which is orthogonal to program logic rules.
In the rest of this section, we will introduce the Ynot programming environment more concretely, via several examples of verified data structure implementations.

2.2 Verifying an Implementation of Imperative Stacks

Figure 2 shows the signature of a Ynot implementation of the stack ADT. The signature is expressed in Coq’s ML-like module system. Each implementation contains a type family t, where, for any type T, a value of t T represents a stack storing elements of T. The rep component of the interface relates an imperative stack s to a functional list ls in a particular state. Thus, rep s ls is a predicate on heaps (hprop) which can be read as “s represents the list ls” in the current state. Just as abstraction over the type family t allows an implementation to choose different data structures to encode the stack, abstraction over the assertion rep allows an implementation to choose different invariants connecting the concrete representation to an idealized model. In Section 2.1, we gave a grammar for our “specification language.” In contrast to most work on separation logic, our real implementation has no such specification language. Rather, we define the type hprop as heap -> Prop, so that specifications and invariants are arbitrary predicates over heaps. In Figure 2, we see notations involving emp, asserting that the heap is empty; [...], for injecting pure propositions; *, for the standard separating conjunction; and Exists, for standard typed existential quantification. Not shown in this figure is the binary “points-to” operator -->. The relative parsing precedences of the operators place --> highest, followed by * and Exists.
Our library defines hprop-valued functions implementing these usual separation logic connectives, but users can define their own “connectives” just as easily. For example, here is how we define Exists:

Definition hprop_ex (T : Type) (p : T -> hprop) : hprop :=
  fun h : heap => exists v : T, p v h.

Here is how we add a syntax extension (or “macro”) that lets us write existential quantification in the way seen in Figure 2:

Notation "Exists v :@ T , p" := (hprop_ex T (fun v : T => p)).

By reading the types of the methods exposed in the STACK signature, we can determine the contract that each method adheres to. The Cmd type family is our parameterized monad of computations with separation logic specifications; the two arguments to Cmd give preconditions and postconditions. Cmd is defined in terms of the more primitive ST parameterized monad, in the same way as in our past work (Nanevski et al. 2008). Our specifications follow the algebraic approach to proofs about data abstraction (as in Liskov and Zilles 1975), where an abstract notion of state is related to concrete states. Each operation needs a proof that it preserves the relation properly. In Ynot developments, abstract states are manipulated by standard, purely-functional Coq programs, and method specifications include explicit calls to these state transformation functions.
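Users can define further connectives in the same way as hprop_ex; for instance, the deterministic disjunction mentioned in Section 2.1 can be encoded by ordinary pattern matching. The following is our sketch, not a definition from the paper:

```coq
(* Sketch: a deterministic disjunction. Which disjunct is meant is
   decided by the boolean b, so automation never has to guess which
   branch of a classical disjunction to pursue. *)
Definition hprop_if (b : bool) (P Q : hprop) : hprop :=
  if b then P else Q.
```

Because hprop_if computes to P or Q as soon as b is known, Coq's ordinary simplification machinery resolves the "disjunction" with no extra proof rules.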
Each post-condition requires that the new concrete, imperative state be related to the abstract state obtained by transforming the initial abstract state. The type of the new operation tells us that it expects an empty heap on input, and on output the heap contains just whatever mappings are needed to satisfy the representation invariant between the function return value and the empty list. The free operation takes a stack s as an argument, and it expects the heap to satisfy rep on s and the empty list. The post state shows that all heap values associated with s are freed. The specification for push says that it expects any valid stack as input and modifies the heap so that the same stack that stood for some list ls beforehand now stands for the list x :: ls, where x is the appropriate function argument. We see an argument ls with type [list T] rather than list T. The brackets are a notation defined by the Ynot library, standing for computational irrelevance. The syntax [T] expands to inhabited T. To review our discussion from Section 2.1.1, this means that the type-checker should enforce that the value of ls is not needed to execute the function. Rather, such values may only be used in stating specifications and discharging proof obligations. We use Coq’s notation scope mechanism to overload brackets for writing irrelevant types and lifted pure propositions. For an assertion P that mentions the irrelevant variable v, the notation v ~~ P must be used to unpack v explicitly. The type of the unpack operation is such that it may only be applied to assertions and may not be used to allow an irrelevant variable’s value to leak into the computational part of a program.
Unpacking has no “logical” meaning; it is only used to satisfy the type-checker in the absence of native support for irrelevance. The notation is defined by this equation, where we write P{v'/v} informally to denote the substitution of variable v' for variable v in a Coq term.

\[ v \mathrel{\sim\sim} P \;=\; \text{Exists } v' :@\ T,\ [v = \text{inhabits } v'] * P\{v'/v\} \]

(The derived monad Cmd is called “STsep” in that past work.)

Before implementing the stack methods, we define a default tactic for discharging proof obligations:

Ltac tac := sep fail auto.

We will explain each of the two parameters to sep as we find a use for it. We implement each stack method by stating its type as a proof search goal, using tactics to realize the goal step by step. The first method to implement is new, and we do so using the syntax New for the new() operation from Figure 1.

Definition new : Cmd emp (fun s : stack => rep s nil).
  refine {{New None}}; tac.

A simple two-step proof script should suffice. We first use the refine tactic to provide a template for the implementation. The template may have holes in it, and each hole is added as a subgoal. We chain our tac tactic with the semicolon operator, so that tac is applied to each subgoal generated from a hole. Here, we see no proof holes to be filled in, but some are nonetheless there, hidden by the notation {{...}}, which we define in a Coq source file in our library, using Coq’s syntax extension mechanism:

Notation "{{ x }}" := (SepWeaken _ (SepStrengthen _ x _) _).

This rule requests that every use of the double braces be expanded using the template on the right-hand side, leaving four holes to be filled.
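One plausible definition of the unpacking operation behind v ~~ P, consistent with the equation in the text, is the following sketch of ours (the notation precedence level is a guess):

```coq
(* Sketch: v ~~ P asserts that the package v hides some value v',
   and that P, read with v' in place of v, holds of the current heap. *)
Definition hprop_unpack (T : Type) (v : inhabited T)
    (p : T -> hprop) : hprop :=
  fun h => exists v' : T, v = inhabits v' /\ p v' h.

Notation "v ~~ p" := (hprop_unpack v (fun v => p)) (at level 91).
```

Note how the notation rebinds the same name v inside the function argument, so that occurrences of v in P silently refer to the unpacked value, which is what makes the unpacking invisible in specifications.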
The SepWeaken and SepStrengthen functions are for weakening post-conditions and strengthening pre-conditions, and the four holes, in order, are to be filled by a new post-condition, a new pre-condition, a proof that the new pre-condition implies the old, and a proof that the old post-condition implies the new. In the case of the new method, the new specifications are determined by standard type inference, while the two proofs must be added as new goals. With the proof script we have used so far, one proof goal remains in the definition of new and is shown to the user:

v --> None ==> rep v nil

The syntax ==> is for implication between heap assertions, and it has lower parsing precedence than any of the other operators that we use. We see that it is important to unfold the definition of the representation predicate, so we modify our tactic definition, and now the proof completes automatically.

Ltac tac := unfold rep; sep fail auto.

The definitions of free and push are not much more complicated. We use some new notations, including a Haskell-inspired monadic bind syntax, and all are defined in our library with “Coq macros,” as in the example of double braces above.

Definition free : forall s : stack,
  Cmd (rep s nil) (fun _ : unit => emp).
  refine (fun s => {{Free s}}); tac.
Qed.

Definition push : forall (s : stack) (x : T) (ls : [list T]),
  Cmd (ls ~~ rep s ls) (fun _ : unit => ls ~~ rep s (x :: ls)).
  refine (fun s x ls =>
    hd <- !s;
    nd <- New (Node x hd);
    {{s ::= Some nd}}); tac.
Qed.

The final method, pop, is where the interesting proof effort lies. Its body reads the head pointer, returns None for an empty stack, and otherwise frees the head node and returns its data.

Definition pop : forall (s : stack) (ls : [list T]),
  Cmd (ls ~~ rep s ls)
      (fun xo : option T => ls ~~ match xo with
         | None => [ls = nil] * rep s ls
         | Some x => Exists ls' :@ list T,
             [ls = x :: ls'] * rep s ls'
       end).
  refine (fun s ls =>
    hd <- !s;
    IfNull hd
    Then {{Return None}}
    Else nd <- !hd;
      Free hd;;
      s ::= next nd;;
      {{Return (Some (data nd))}}); tac.

Several unproved subgoals are returned, this one among them, containing a unification variable ?1960:

s --> Some hd0 * listRep x (Some hd0) ==> hd0 --> ?1960 * hd0 --> ?1960

We can tell that something has probably gone wrong, since the conclusion of the implication contains an unsatisfiable separation formula that mentions the same pointer twice.
Our automated separation simplification is quite aggressive and often simplifies satisfiable formulas to unsatisfiable forms, but the results of this process tend to provide hints about which facts would have been useful. In this case, we see a use of listRep where the pointer is known to be non-null. We can prove a lemma that helps simplify such formulas.

Theorem listRep_Some : forall (ls : list T) (hd : ptr),
  listRep ls (Some hd) ==>
  Exists h :@ T, Exists t :@ list T, Exists p :@ option ptr,
    [ls = h :: t] * hd --> Node h p * listRep t p.
  destruct ls; sep fail ltac:(try discriminate).
Qed.

We prove that a functional list related to a non-null pointer decomposes in the expected way. All it takes is for us to request a case analysis on the variable ls, followed by a call to the separation solver. Here we put to use the second parameter to sep, which gives a tactic to try applying throughout proof search. The discriminate tactic solves goals whose premises include inconsistent equalities over values of datatypes, like nil = x :: ls; adding try in front prevents discriminate from signaling an error if no such equality exists. We can modify our tac tactic to take listRep_Some into account. First, we define another procedure for simplifying an implication.

Ltac simp_prem :=
  simpl_IfNull;
  simpl_prem ltac:(apply listRep_Some).

Our tactic first calls a simplification procedure associated with the IfNull syntax extension. Next, it calls the tactic simpl_prem from the Ynot library, which simplifies premises of implications. The argument to simpl_prem gives a procedure to attempt on each premise, until no further progress can be made. We can redefine tac to use simp_prem, by passing that new procedure as the first argument to sep. That first argument is used by sep to simplify a goal before beginning the main proof search.
let pop s =
  sepBind (sepStrengthen (sepRead s)) (fun hd ->
    match hd with
    | Some v ->
        sepBind (sepStrengthen (sepRead v)) (fun nd ->
          sepSeq (sepStrengthen (sepFrame (sepFree v)))
            (sepSeq (sepStrengthen (sepFrame (sepWrite s (next nd))))
              (sepWeaken (sepStrengthen (sepFrame
                (sepReturn (Some (data nd))))))))
    | None -> sepWeaken (sepStrengthen (sepFrame (sepReturn None))))

Figure 3. Part of the OCaml code extracted for the pop method

We also suggest to sep that try discriminate may be useful throughout proof search.

Ltac tac := unfold rep; sep simp_prem ltac:(try discriminate).

When we rerun the definition of pop, we have made progress. Only one goal remains to prove:

emp ==> [x = nil]

We see that this goal probably has to do with a case where we know the list being modeled is nil. We were successful at using simp_prem to deal with the case where we know the list is non-nil, and we can continue with that strategy by proving another lemma.

Theorem listRep_None : forall ls : list T,
  listRep ls None ==> [ls = nil].
  destruct ls; sep fail idtac.
Qed.

Now our verification of pop completes, after we modify the definition of simp_prem:

Ltac simp_prem :=
  simpl_IfNull;
  simpl_prem ltac:(apply listRep_None || apply listRep_Some).

We complete the implementation of the stack ADT with a trivial definition of the type family t, relying on the representation invariant to ensure proper use.

Definition t (_ : Set) := stack.

For our modest efforts, we can now extract an executable OCaml version of our module. Figure 3 shows part of the result of running Coq's automatic extraction command on our Stack module. In the implementation of pop, we see invocations of functions whose names begin with sep. These come from the Ynot library, and we must provide their OCaml implementations. Any Ynot program that returns a type T may be represented as unit -> T in OCaml, regardless of the specification appearing in the original Coq type. This makes it easy to implement the basic functions, in the spirit of how the Haskell IO monad is implemented.
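These runtime primitives are simple to supply. The following OCaml sketch shows one plausible realization of the sep* functions under the unit -> T thunk representation just described, with heap cells realized as OCaml ref cells; this is our illustration of the idea, not the distribution's actual runtime code.

```ocaml
(* Computations are thunks; specifications were erased by extraction. *)
type 'a st = unit -> 'a

let sepReturn (x : 'a) : 'a st = fun () -> x
let sepBind (c : 'a st) (f : 'a -> 'b st) : 'b st = fun () -> f (c ()) ()
let sepSeq (c : 'a st) (c' : 'b st) : 'b st = fun () -> ignore (c ()); c' ()

(* The specification-adjustment rules carry no computational content,
   so they are identities that an optimizer can erase. *)
let sepWeaken (c : 'a st) : 'a st = c
let sepStrengthen (c : 'a st) : 'a st = c
let sepFrame (c : 'a st) : 'a st = c

(* Heap operations, realized with OCaml references. *)
let sepNew (v : 'a) : 'a ref st = fun () -> ref v
let sepRead (r : 'a ref) : 'a st = fun () -> !r
let sepWrite (r : 'a ref) (v : 'a) : unit st = fun () -> r := v
let sepFree (_ : 'a ref) : unit st = fun () -> ()  (* GC reclaims storage *)
```

With these definitions, running an extracted program means applying the resulting thunk to ().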
We see calls to explicit weakening, strengthening, and framing rules in the extracted code. In OCaml, these can be implemented as no-ops and erased by an optimizer. Notice that all specification variables and proofs are eliminated automatically by the Coq extractor. With the erasure of weakening and related operations, we arrive at exactly the kind of monadic code that is standard fare for Haskell, such that the compilation techniques developed for Haskell can be put to immediate use in creating an efficient compilation pipeline for Ynot.

It is also worth pointing out that the sort of tactic construction effort demonstrated here is generally per data structure, not per program. We can verify a wide variety of other list-manipulating programs using the same tactic that we developed here. Usually, the tactic work for a new data structure centers on identifying the kind of unfolding lemmas that we proved above.

2.3 Verifying Imperative Queues

It is not much harder to implement and verify a queue structure. We define an alternate list representation, parameterized by head and tail pointers.

Fixpoint listRep (ls : list T) (hd tl : ptr) {struct ls} : hprop :=
  match ls with
    | nil => [hd = tl]
    | h :: t => Exists p :@ ptr, hd --> Node h (Some p) * listRep t p tl
  end.

Record queue : Set := Queue {
  front : ptr;
  back : ptr
}.

Definition rep' (ls : list T) (fr ba : option ptr) :=
  match fr, ba with
    | None, None => [ls = nil]
    | Some fr, Some ba => Exists ls' :@ list T, Exists x :@ T,
        [ls = ls' ++ x :: nil] * listRep ls' fr ba * ba --> Node x None
    | _, _ => [False]
  end.

Definition rep (q : queue) (ls : list T) :=
  Exists fr :@ option ptr, Exists ba :@ option ptr,
    front q --> fr * back q --> ba * rep' ls fr ba.

For this representation, we prove similar unfolding lemmas to those we proved for stacks, with comparable effort. We also need a new lemma for unfolding a queue from the back.
Lemma rep'_back : forall (ls : list T) (fr ba : ptr),
  rep' ls (Some fr) (Some ba) ==>
  Exists ls' :@ list T, Exists x :@ T,
    [ls = ls' ++ x :: nil] * listRep ls' fr ba * ba --> Node x None.

The proof of the lemma relies on some lemmas about pure functional lists. With those available, we prove rep'_back in under 20 lines. When we plug this and the two other unfolding lemmas into the sep procedure, we arrive at quite a robust proof procedure for separation assertions about lists that may be modified at either end.

Again, in our final queue implementation, every proof obligation is proved by a tactic built from sep. We write under 10 lines of new tactic hints to be applied during proof search, and we must prove one key lemma by induction. We discover the importance of this lemma while trying to verify an implementation of enqueueing.

Definition enqueue (q : queue) (x : T) (ls : [list T])
  : Cmd (ls ~~ rep q ls) (fun _ : unit => ls ~~ rep q (ls ++ x :: nil)).
  refine (fun q x ls =>
    ba <- !back q;
    nd <- New (Node x None);
    back q ::= Some nd;;
    IfNull ba Then
      {{front q ::= Some nd}}
    Else
      ban <- !ba;
      ba ::= Node (data ban) (Some nd);;
      {{Return tt}}); tac.

Coq returns a single unproved goal:

p --> Node v3 (Some nd) * listRep v2 v4 p
==> listRep (v2 ++ v3 :: nil) v4 nd

Considering this goal, we see that it can only be proved by induction. In general, we must be explicit about induction everywhere we need it, so we need to prove a lemma about this case. The lemma itself is quite easy to automate, once we add one hint from the Ynot library about the commutativity of separating conjunction.
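Once specifications are erased, the computational content of enqueue is ordinary destructive linked-list code. A hypothetical OCaml rendering, with mutable record fields standing in for the heap cells of the verified version (a representation chosen only for illustration):

```ocaml
(* Nodes carry a datum and an optional link to the next node. *)
type 'a node = { data : 'a; mutable next : 'a node option }

(* A queue holds optional pointers to its first and last nodes. *)
type 'a queue = { mutable front : 'a node option;
                  mutable back  : 'a node option }

let enqueue (q : 'a queue) (x : 'a) : unit =
  let nd = { data = x; next = None } in
  (match q.back with
   | None -> q.front <- Some nd       (* empty queue: new node is the front *)
   | Some ba -> ba.next <- Some nd);  (* link the old back node to the new one *)
  q.back <- Some nd
```

The verified Coq enqueue performs exactly these reads and writes; what it adds is a machine-checked proof that the rep' invariant relating the pointer structure to a functional list is preserved.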
Lemma push_listRep : forall (ba : ptr) (x : T) (nd : ptr)
  (ls : list T) (fr : ptr),
  ba --> Node x (Some nd) * listRep ls fr ba
  ==> listRep (ls ++ x :: nil) fr nd.
  Hint Resolve himp_comm_prem.
  induction ls; tac.
Qed.

To get the original verification to go through, we only need to add this lemma to the hint database, using a built-in Coq command.

Hint Immediate push_listRep.

2.4 Loops

As with many semi-automated verification systems, we require annotations that are equivalent to loop invariants. Since Coq's programming language is functional, it is more natural to write loops as recursive functions, and the loop invariants become the pre- and post-conditions of these functions. We support general recursion with a primitive fixpoint operator in the base program logic, and it is easy to build a separation logic version on top of that. We can also build multiple-argument recursive and mutually-recursive function forms on top of the single-argument form, without needing to introduce new primitive combinators.

An example is a getElements function, defined in terms of the list invariant that we wrote for the stack example. This operation returns the functional equivalent of an imperative list. The task is not as trivial as it may look at first, because the computational irrelevance of the function's second argument prohibits its use to influence the return value. This means that we are not allowed to name the irrelevant argument as one that decreases on each recursive call, which prevents us from using Coq's native recursive function definitions, where every function must be proved to terminate using simple syntactic criteria. Nonetheless, the definition is easy using the general recursion combinators supported by Ynot.
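To make the operation concrete, here is what the computational content of such a getElements function looks like after erasure, sketched in OCaml over an immutable rendering of the node type (names and representation are illustrative only):

```ocaml
type 'a node = { data : 'a; next : 'a node option }

(* Walk the links, collecting each node's data field in order. The Coq
   version additionally carries the computationally irrelevant [list T]
   argument relating the heap to this result; it is erased here, which is
   exactly why it cannot serve as the termination measure. *)
let rec get_elements (hd : 'a node option) : 'a list =
  match hd with
  | None -> []
  | Some nd -> nd.data :: get_elements nd.next
```

In the Coq version, termination instead follows from the listRep invariant, via the general recursion combinators.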
Definition getElements (hd : option ptr) (ls : [list T])
  : Cmd (ls ~~ listRep ls hd)
        (fun res : list T => ls ~~ [res = ls] * listRep ls hd).

Figure 4. Signature of a memoization module

2.5 A Dependently-Typed Memoizing Function

As far as we have been able to determine, all previous tools for data structure verification lack either aggressive automation or support for higher-order features. The original Ynot supported easy integration of higher-order functions and dependent types, but the very manual proof style became even more onerous for such uses. Our reimplemented Ynot maintains the original's higher-order features, and our proof automation integrates very naturally with them. This is a defining advantage of our new framework over all alternatives.

For instance, it is easy to define a module supporting memoization of imperative functions. Figure 4 gives the signature of our implementation, which is actually an ML-style functor that produces implementations of this signature when passed appropriate input modules. The type T is the domain of memoizable functions, and types like t serve as memo tables. The argument inv gives an invariant that the memoized function maintains, and the pure assertion post gives a relation between inputs and outputs of the function. The rep predicate captures the heap invariants associated with a memo table. The create function produces a memo table when passed an imperative function with the proper specification. Finally, the funcOf function maps a memo table to a function that consults the table to avoid recomputation.
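The behavior this signature describes can be mimicked in plain OCaml, which also highlights what the verified version adds. This sketch (our own names, not those of Figure 4) caches only the most recent input-output pair; unlike the Coq functor, nothing here stops the memoized function from having side effects.

```ocaml
(* A memo table pairs the function with its most recent input-output pair.
   In the verified version, the inv and post parameters turn the informal
   assumption that f behaves like a pure function into checked obligations. *)
type ('a, 'b) memo = {
  func : 'a -> 'b;
  mutable last : ('a * 'b) option;
}

let create (f : 'a -> 'b) : ('a, 'b) memo = { func = f; last = None }

(* The analogue of funcOf: consult the table before recomputing. *)
let func_of (t : ('a, 'b) memo) (x : 'a) : 'b =
  match t.last with
  | Some (x', y) when x' = x -> y   (* cache hit: skip the call *)
  | _ ->
      let y = t.func x in
      t.last <- Some (x, y);
      y
```

An ML function could easily thwart this memoizer by performing effects on each call, which is precisely the failure mode the dependently-typed signature rules out.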
We can implement a MEMO functor in 50 lines when we use a memo table that only caches the most recent input-output pair. As in the previous examples, we build a specialized memoization procedure with a one-line instantiation of library tactics. We give a 7-line definition of rep, give a one-line proof of a lemma to use in proof search, and include two lines of annotations within the definition of funcOf. All of the rest of the development is no longer or more complicated than in ML. Compared to ML, we have the great benefit of using types to control the behavior of functions to be memoized. A function could easily thwart an ML memoizer by producing unexpected computational effects.

3. Tactic Support

The examples from the last two sections show how much of the gory detail of proofs can be hidden from programmers. In actuality, every command triggers the addition of one or more proof obligations that cannot be discharged effectively by any of the built-in Coq automation tactics. Not only is it hard to prove the obligations, but it is also hard to infer the right intermediate specifications. Our separation logic formulas range well outside the propositional fragment that automated tools tend to handle; specification inference and proving must deal with higher-order features. Here is an example of the proof obligations generated for the code we gave earlier for the stack push method. Numbers prefixed with question marks are unification variables, whose values the sep tactic must infer.
ls ~~ rep s ls ==> Exists v :@ option ptr, s --> v * ?200 v
forall v : option ptr, s --> v * ?200 v ==> ?192 v
?192 hd ==> ?217 * emp
forall v : ptr, ?217 * v --> Node x hd ==> ?206 v
?206 nd ==> ?234 * (Exists v' :@ ?231, s --> v')
?234 * s --> Some nd ==> ls ~~ rep s (x :: ls)

We can see that each goal, compared to the previous goals, has at most one new unification variable standing for a specification; of the two new variables appearing in the second-to-last line, one stands for a type, which will be easy to infer by standard unification, once the values of prior variables are known. Also, each new specification variable has its value determined by the value of the new variable from the previous goal. This is no accident; we designed our combinators and notations to have this property.

The effective range of specifications is too large to be solvable by any particular "magic bullet" tactic. Nonetheless, we have found that, in practice, a specific parameterized proof strategy can discharge most obligations. In contrast to the situation with classical verification tools that are backed by automated first-order theorem provers, when any proof strategy fails in Coq, the user can always program his own new strategy or even move to mostly-manual proof. However, our experience suggests that most goals about data structures can be solved by the procedure that we present in this section. That procedure is implemented as the sep tactic that we used in our examples. We do not have space to include the literal Coq code implementing it; we will outline the basic procedure instead.
The implementation is in Coq's Ltac language (Delahaye 2000), a domain-specific, dynamically-typed language for writing proof-generating proof search procedures. All of the proof scripts we have seen so far are really Ltac programs. The full language includes recursive function definitions, which, along with pattern-matching on proof goals, makes it possible to code a wide variety of proof manipulation procedures.

As our examples have illustrated, sep takes two arguments, which we will call unfolder and solver. The task of unfolder is to simplify goals before specification inference, usually by unfolding definitions of recursive predicates, based on known facts about their arguments. The task of solver is to solve all of the goals that remain after generic separation logic reasoning is applied.

Coq comes with the standard tactic tauto, for proving propositional tautologies. There is a more general version of tauto, called intuition, which will apply a user-supplied tactic to finish off sub-proofs, while taking responsibility for handling propositional structure on its own. The intuition tactic also exhibits the helpful behavior of leaving for the user any subgoals that it could not establish. sep is meant to be an analogue of intuition for separation logic. We also want it to handle easy instantiation of existential quantifiers, since they appear so often in our specifications.

We can divide the operation of sep into five main phases. We will sketch the workings of each phase separately.

3.1 Simple Constraint Solving

It is trivial to determine the proper value for any unification variable appearing alone on one side of the implication. For instance, given the goal

p --> x * q --> y ==> ?123

we simply set ?123 to p --> x * q --> y.
Given the slightly more complicated goal

p --> x * q --> y ==> ?123 x

we abstract over x in the premise to produce fun x' => p --> x' * q --> y.

3.2 Intermediate Constraint Solving

When the trivial unification rules are not sufficient, we need to do more work. We introduce names for all existential quantifiers and computationally-irrelevant variables in the premise. For instance, starting with

m ~~ Exists v :@ option T, p --> v * rep m v
==> ?123 * (Exists x :@ option T, p --> x)

we introduce names to simplify the premise, leading to this goal:

p --> v' * rep m' v'
==> ?123 * (Exists x :@ option T, p --> x)

Now we run the user's unfolder tactic, which might simplify some use of a definition. Let us assume that no such simplification occurs for this example. We notice that the points-to fact on the right mentions the same pointer as a fact on the left, so these two facts may be unified, implying x = v'. Canceling this known information, we are left with

rep m' v' ==> ?123

which is resolvable almost trivially. We cannot give ?123 a value that mentions the variables m' and v', since we introduced them with elimination rules within our proof. These variables are not in scope at the point in the original program where the specification must be inserted. Instead, we remember how each local variable was introduced and re-quantify at the end, like this:

m ~~ Exists v :@ option T, rep m v ==> ?123

Now the trivial unification is valid. The crucial part of this process was the matching of the two points-to facts. We have special-case rules for matching conclusion facts under quantifiers, for conclusions that match the pre-conditions of the read, write, and free rules.
Beyond that, we apply cancellation of identical terms on the two sides of the implication, when those terms do not fall under the scopes of quantifiers. These simple rules seem to serve well in practice.

3.3 Premise Simplification

After specification inference, the next step is to simplify the premise of the implication. Any emp in the premise may be removed, and any lifted pure formula [P] may be removed from the implication and added instead to the normal proof context. We also remove existential quantifiers and irrelevant variable unpackings in the same way as in the previous phase.

3.4 Conclusion Simplification

The main sep loop is focused on dealing with parts of the conclusion. We remove occurrences of emp, and we remove any pure formula [P] that the user's solver tactic is able to prove. An existential formula Exists x :@ T, P(x) in the conclusion is replaced by P(?756), for a fresh unification variable ?756. When no more of these rules apply, we look for a pair of unifiable subformulas on the two sides of the implication. All such pairs are unified and crossed out. This may determine the value of a variable introduced for an existential quantifier. For instance, say we begin with this goal.

[m < 17] * p --> m ==> Exists x :@ nat, p --> x * [x < 42]

Premise simplification would move the initial pure fact into the normal proof context, leaving us with this.

p --> m ==> Exists x :@ nat, p --> x * [x < 42]

Conclusion simplification would introduce a unification variable for the existentially-bound variable.

p --> m ==> p --> ?789 * [?789 < 42]

Next, conclusion simplification would match the two p points-to facts, since their pointers unify trivially.
emp ==> [m < 42]

This goal can be reduced to emp ==> emp by using the normal proof context to deduce the fact inside the brackets.

3.5 Standard Coq Automation

When sep has run out of rules to apply, the remaining subgoal is subjected to standard Coq automation. Propositional structure and calls to recursive functions are simplified where possible. sep ends by running a loop over those simplifications and the simplifications performed by the user's solver tactic, until no further progress can be made. Finally, sep discharges all goals of the form P ==> P, by reflexivity.

Every step of the overall process is implemented in Ltac, so that only a bug in Coq would allow sep to declare an untrue goal as true, no matter which customization the programmer provides. By construction, every step builds an explicit proof term, which can be validated afterward with an independent checker that is relatively simple, compared to the operation of all of the decision procedures that may have contributed to the proof.

4. Evaluation

We have used our environment to implement and verify several data structures, including the Stack and Queue examples that appeared in Section 2. We also follow the evaluation of our prior Ynot system in implementing a generic signature of imperative finite maps. We built three very different implementations: a trivial implementation based on pointers to heap-allocated functional association lists, an implementation based on binary search trees, and an implementation based on hash tables. Any of the implementations can be used interchangeably via ML-style functors, and their shared signature is phrased in terms of dependently-typed maps, where the type of data associated with a key is calculated from an arbitrary Coq function over that key. Our largest example, a packrat PEG parser (Ford 2004), uses these finite maps to cache intermediate results.
We also verified one more exotic data structure: binomial trees, which are tree structures with a non-trivial rule for determining how many pointers are stored at each node. This data structure is often applied in implementing priority queues. Our implementation is interesting in its use of a dependently-typed recursive function to characterize functional models of such trees. Finally, we chose representative examples from two competing data structure verification systems, Smallfoot (Berdine et al. 2005) and Jahob (Zee et al. 2008), and reimplemented those examples in our new Ynot.

Figure 5 presents code size statistics for our case studies. "Program" code is code that is preserved by extraction. "Specs" are the pre- and post-conditions of every function defined in the module. The core of a Ynot module consists of heap representation "rep" code (e.g., the definitions named rep in our examples), along with proofs (e.g., push_listRep) and tactics (e.g., simp_prem) dealing with these representations. The annotations column counts the number of lines of programmer-specified annotations (e.g., loop invariants). The total overhead column sums proofs, tactics, and annotations. We also present type-checking and proving times (in minutes and seconds), as measured on a 2.8 GHz Pentium D with 1 GB of memory. So far, we have not optimized our tactics for running time; they are executed by direct interpretation of programs in a dynamically-typed language.

Our previous version of Ynot placed a significant interactive proof burden on the programmer. The previous Ynot hash table, for instance, required around 320 explicit Coq tactic invocations. Each tactic invocation (indicated by a terminating ";" in Coq) represents a manual intervention by the Ynot programmer. These invocations tended to be low-level steps, like choosing which branch of a disjunction to prove. As such, these proofs are brittle in the face of minor changes.
In some previous Ynot developments, the ratio of manual proof to program text is over 10 to 1. For comparison, a large-scale compiler certification effort (Leroy 2006) has reported a proof-to-code ratio of roughly 6 to 1. In contrast, our new hash table requires only about 70 explicit tactic invocations. These invocations tend to be high-level steps, like performing induction or invoking the sep tactic. We have observed that such tactic-based proofs are significantly easier to maintain.

We also made rough comparisons against two verification systems that do not support reasoning about first-class functions. The Jahob (Zee et al. 2008) system allows the specification and verification of recursive, linked data structures in a fragment of Java. We implemented an association list data structure that is included as an example in the Jahob distribution. Size-wise, the two implementations are quite similar. For instance, they both require around twenty lines of heap representation code, and they both require a dozen lines of code for the lookup function's loop invariant. Our Ynot implementation uses explicit framing conditions in places where Jahob does not, but we speculate that we can probably remove these annotations with additional, custom automation.

Our second comparison is against the Smallfoot (Berdine et al. 2005) system, which does completely automated verification of memory safety via separation logic. We implemented Ynot versions of 10 linked list segment functions included with the Smallfoot distribution. In each case, the Ynot and Smallfoot versions differed by no more than a few lines of annotation burden.

5. Related Work

Considering the two automated systems that we just mentioned, Smallfoot uses a very limited propositional logic, and Jahob uses an undecidable higher-order logic. Many interesting program specifications cannot be written in Smallfoot's logic and cannot be proved to hold by Jahob's automated prover.
Neither of these systems supports higher-order programs, and neither supports custom-programmed proof procedures for cases where standard automation is insufficient. The ESC/Java (Flanagan et al. 2002) and Spec# (Barnett et al. 2004) systems tackle some related problems within the classical verification framework. These systems have strictly less support for modeling data structures than Jahob has, so that it is impractical to use them to perform full verifications of many data structures.

A number of systems have been proposed recently to support dependently-typed programming in a setting oriented more towards traditional software development than Coq is. Agda (Norell 2007) and Epigram (McBride and McKinna 2004) are designed to increase the convenience of programming in type theory over what Coq provides, but, out of the box, these systems support neither imperative programming nor custom proof automation. ATS (Chen and Xi 2005) includes novel means for dealing with imperative state, but it includes no proof automation beyond decision procedures for simple base theories like linear arithmetic. This makes it much harder to write verified data structure implementations than in Ynot. Concoqtion (Pasalic et al. 2007) allows the use of Coq for reasoning about segments of general OCaml programs. While those programs may use imperative features, the Coq reasoning is restricted to pure index terms. Sage (Gronski et al. 2006) supports hybrid type-checking, where typing invariants may be specified with boolean-valued program functions and checked at runtime. This approach generally does not enable full static correctness verification.

Partly as a way to support imperative programming in type theory, Swierstra and Altenkirch (2007) have studied pure functional semantics for effectful programming language features, with embeddings in Haskell and Agda. Charguéraud and Pottier (2008) have demonstrated a translation from a calculus of capabilities to a pure functional language.
In each case, the authors stated plans to do traditional interactive verification on the pure functional models that they generate. Since such verification is generally done in logics without general recursion, these translations cannot be used to verify general recursive programs without introducing an extra syntactic layer, in contrast to Ynot. Each other approach also introduces restrictions on the shape of the heap, such as the absence of stored impure functions in the case of Swierstra and Altenkirch’s work. Other computer proof assistants are based around pure functional programming languages, with opportunities for encoding and verifying imperative programs. Nonetheless, we see the elegance of our approach as depending on the confluence of a number of features not found in other mature proof assistants. ACL2 (Kaufmann and Moore 1997) does not support higher-order logic or higher-order functional programming. Bulwahn et al. (2008) describe a system for encoding and verifying impure monadic programs in Isabelle/HOL. Their implementation does not support storing functions in the heap. They suggest several avenues for loosening this restriction, and the approaches that support heap storage of impure functions involve restricting attention to functions that are constructive or continuous (properties that hold of all Coq functions), necessitating some extra proof burden or syntactic encoding. There is closely related work in the field of shape analysis. The TVLA system (Sagiv et al. 2002) models heap shapes with a first-order logic with a built-in transitive closure operation. With the right choices of predicates that may appear in inferred specifications, TVLA is able to verify automatically many programs that involve both heap shape reasoning and reasoning in particular decidable theories such as arithmetic. 
The Xisa system (Chang and Rival 2008) uses an approach similar to ours, as Xisa is based on user specification of inductive characterizations of shape invariants. Xisa builds this inductive definition mechanism into its framework, while we inherit a more general mechanism from Coq. Xisa is based on hardcoded algorithms for analyzing inductive definitions and determining when and how they should be unfolded. Such heuristics lack theoretical guarantees about how broadly they apply. In the design of our system, we recognize this barrier and allow users to extend the generic solver with custom rules for dealing with custom inductive predicates.

In comparing the new Ynot environment to the above systems and all others that we are aware of, there are a number of common advantages. No other system supports both highly-automated proofs based on separation logic (when they work) and highly human-guided proofs (when they are needed), let alone combinations of the two. None of the systems with significant automation support the combination of imperative and higher-order features, like we handle in the example of our higher-order memoizer and iterators. We also find no automated systems that deal with dependent types in programs.

The first of these advantages seems critical in the verification of imperative programs that would be difficult to prove correct even if refactored to be purely functional. For instance, it seems plausible that our environment could be used eventually to build a verified compiler that uses imperative data structures for efficient dataflow analysis, unification in type inference, and so on. None of the purely-automated tools that we have surveyed could be applied to that purpose without drastic redesign. We are not aware of any previous toolkit for manual proof about imperative programs in proof assistants that would make the task manageable; the manual reasoning about state would overwhelm "the interesting parts" of compiler verification.

6.
Conclusions & Future Work The latest Ynot source distribution, including examples, can be downloaded from the project web site: http://ynot.cs.harvard.edu/ Concurrency is a big area for future work on Ynot. Systems like Smallfoot (Berdine et al. 2005) do automated separation-logic reasoning about memory safety of concurrent programs. We would like to extend that work to full correctness verification, by designing a monadic version of concurrent separation logic that fits well within Coq. The full potential of the Ynot approach also depends on explicit handling of other computational effects, such as exceptions and input-output. Our prior prototype handled the former, and ongoing work considers supporting the latter. As with any project in automated theorem proving, there is always room for improvements to automation and inference. A future version of Ynot could benefit greatly in usability by incorporating abstract interpretation to infer specifications, as several automated separation logic tools already do. Nonetheless, our current system already fills a crucial niche in the space of verification tools. We have presented the first tool that performs well empirically in allowing mixes of manual and highly-automated reasoning about heap-allocated data structures, as well as the first tool to provide aggressive automation in proofs of higher-order, imperative programs. We hope that this will form a significant step towards full functional verification of imperative programs with deep correctness theorems. References
Threads and Input/Output in the Synthesis Kernel

Henry Massalin
Calton Pu
Department of Computer Science
Columbia University, New York, NY 10027
calton@cs.columbia.edu

November 3, 1995

Abstract

The Synthesis operating system kernel combines several techniques to provide high performance, including kernel code synthesis, fine-grain scheduling, and optimistic synchronization. Kernel code synthesis reduces the execution path for frequently used kernel calls. Optimistic synchronization increases concurrency within the kernel. Their combination results in significant performance improvement over traditional operating system implementations. Using hardware and software emulating a SUN 3/160 running SUNOS, Synthesis achieves several times to several dozen times speedup for UNIX kernel calls and context switch times of 21 microseconds or faster.

1 Introduction

Synthesis is an operating system kernel for a parallel and distributed computational environment. We have three major goals in the design and implementation of Synthesis:

1. high performance,
2. self-tuning capability to dynamic load and configuration changes,
3. a simple, uniform and intuitive model of computation with a high-level interface.

In this paper, we focus on the aspects of the Synthesis kernel implementation that support threads and input/output. To achieve very high performance, we combine kernel code synthesis [5], which decreases kernel call overhead through specialization, and reduced synchronization, which decreases kernel thread synchronization overhead. We have introduced the principles of code synthesis [5], which make the Synthesis kernel fast for several reasons. First, frequently executed Synthesis kernel calls are “compiled” and optimized at run-time using ideas similar to currying and constant folding. For example, when we open a file for input, a custom-made (thus short and fast) read routine is returned for later read calls.
Second, frequently traversed data structures are sprinkled with a few machine instructions to make them self-traversing. For example, the CPU dispatching, including context-switches, is done by the ready queue this way (for details see Figure 3). In this paper, we describe the synergy from combining code synthesis with the other kernel implementation techniques. To make the paper self-contained, we summarize kernel code synthesis and other aspects of background information in Section 2. In traditional OS’s, the kernel call and dispatching/scheduling overhead overshadows the kernel synchronization cost. Therefore, we see traditional kernels using powerful mutual exclusion mechanisms such as semaphores. However, in Synthesis we have used kernel code synthesis to trim kernel calls and context switches. The next bottleneck turned out to be kernel internal synchronization cost, given that the Synthesis kernel is highly parallel. Our answer to this problem consists of methods that reduce synchronization in the Synthesis kernel, described in Section 3. To illustrate the new possibilities for performance improvements introduced by these techniques, we will describe two kinds of objects supported by the Synthesis kernel, threads and I/O. Our discussions on threads in Section 4 and I/O in Section 5 are relevant to uniprocessor and multiprocessor systems. The distribution aspects of Synthesis are beyond the scope of this paper. All the performance improvement techniques follow from one software engineering principle, called the principle of frugality, which says that we should use the least powerful solution to a given problem. Since we carefully separate the kernel implementation from the interface specification, the principle of frugality has been applied throughout the system. Both kernel code synthesis and reduced synchronization are good examples. In Section 6 we present measurement data to show the effectiveness of these techniques. 
2 Synthesis Background

2.1 Synthesis Model of Computation

The Synthesis model of computation is conceptually a von Neumann machine with threads of execution, memory protection boundaries, and I/O devices. To support parallel and distributed computing, the threads of execution form a directed graph, in which the nodes are threads and the arcs are data flow channels. This graph model and other support for parallel and distributed computation will be described in more detail in another paper [4]. Synthesis threads are threads of execution, like UNIX processes. Some threads never execute user-level code, but run entirely within the kernel to provide additional concurrency for some kernel operations. Threads execute programs in a quaspace (quasi address space), which also stores data. Finally, I/O devices move data between threads, including files and messages. On one physical node, all the Synthesis quaspaces are subspaces of one single address space, defined by the CPU architecture (e.g., with a 32-bit microprocessor we have a 32-bit address space). The kernel blanks out the part of the address space that each quaspace is not supposed to see. Since they are parts of the same address space, it is easy to share memory between quaspaces by setting their address mappings. The current implementation of the kernel does not support virtual memory.

2.2 Kernel Code Synthesis

The idea of kernel code synthesis has been introduced in a previous paper [5]. In Synthesis, we have a code synthesizer in the kernel to generate specialized (thus short and fast) kernel routines for specific situations. We have three methods to synthesize code. The Factoring Invariants method bypasses redundant computations, much like constant folding. The Collapsing Layers method eliminates unnecessary procedure calls and context switches, both vertically for layered modules and horizontally for pipelined threads.
The Executable Data Structures method shortens data structure traversal time when the data structure is always traversed the same way.

2.3 Basic Kernel Components

To describe the Synthesis kernel implementation in concrete terms, we first summarize its basic components. The Synthesis kernel can be divided into a number of collections of procedures and data. We call these collections of procedures quajects that encapsulate hardware resources, like Hydra objects [7]. For this paper the most important quajects are threads and I/O device servers. Threads are an abstraction of the CPU. The device servers are abstractions of I/O devices. Except for the threads, quajects consist only of procedures and data. Events such as interrupts start the threads that animate the quajects and do work. The quajects do not support inheritance or any other language features. Most quajects are implemented by combining a small number of building blocks. Some of the building blocks are well known, such as monitors, queues, and schedulers. The others are simple but somewhat unusual: switches, pumps and gauges. As we shall see in Section 5, all of Synthesis I/O is implemented with these building blocks. The quaject interfacer (see below) uses optimization techniques such as Collapsing Layers to combine these building blocks into kernel quajects. The unusual building blocks require some explanation. A switch is equivalent to the C switch statement. For example, switches direct interrupts to the appropriate service routines. A pump contains a thread that actively copies its input into its output. Pumps connect passive producers with passive consumers. A gauge counts events (e.g., procedure calls, data arrival, interrupts). Schedulers use gauges to collect data for scheduling decisions. Each building block may have several implementations. Applying the principle of frugality, we use the most economical implementation depending on the usage.
For example, there are several kinds of queues in the Synthesis kernel. Semantically, we have the usual two kinds of queues, the synchronous queue which blocks at queue full or queue empty, and the asynchronous queue which signals at those conditions. For each kind, we have two implementations: dedicated queues and optimistic queues. Dedicated queues use the knowledge that only one producer (or consumer) is using the queue and omit the synchronization code. Optimistic queues accept queue insert and queue delete operations from multiple producers and multiple consumers. The optimistic queue is described in detail in Section 3.2. Quajects such as threads are created by the quaject creator, which contains three stages: allocation, factorization, and optimization. The allocation stage allocates memory for the quaject and all associated synthesized procedures. The factorization stage uses Factoring Invariants to substitute constants into the quaject’s code templates. The optimization stage then improves the final code with specialized peephole optimizations. The quaject interfacer starts the execution of existing quajects by installing them in the invoking thread. The quaject interfacer has four stages: combination, factorization, optimization, and dynamic link. The combination stage finds the appropriate connecting mechanism (queue, monitor, pump, or a simple procedure call). Factorization and optimization (the same as in the quaject creator) clean up the connecting code. Then the dynamic link stage stores the synthesized code’s entry points into the quajects.

3 Reduced Synchronization

3.1 Overview

We have three methods to reduce synchronization cost: Code Isolation, Procedure Chaining, and Optimistic Synchronization. Each method shortens the execution path in a somewhat different way. Informally speaking, Code Isolation and Procedure Chaining can be thought of as synchronization avoidance techniques. If absolutely unavoidable we use Optimistic Synchronization.
Code Isolation uses kernel code synthesis to separate and isolate fragments of data structure manipulation programs. The separation eliminates unnecessary synchronization if each fragment operates on its own piece of data. For example, each thread has a Thread Table Entry (TTE, equivalent to the `proctable` in UNIX). Naïve procedures that traverse the Thread Table to modify a TTE would have to lock the table. However, in Synthesis each thread updates its own TTE exclusively. Therefore, we can synthesize short code to manipulate the TTE without synchronization. Procedure Chaining avoids synchronization by serializing the execution of conflicting threads. Instead of allowing concurrent execution that would have complicated synchronization problems, we chain the new procedure to be executed to the end of the currently running procedure. For example, the currently executing thread handles interrupts in Synthesis. A signal arriving in the middle of interrupt handling is potentially dangerous: the `kill` signal may terminate the interrupt handling prematurely. Therefore, we chain the procedure invoked by the signal to the end of the interrupt handler. Procedure Chaining is implemented efficiently by simply changing the return addresses on the stack. Optimistic Synchronization assumes that interference between threads is rare, so we should shorten the normal non-interfering case. The idea of optimistic validation is to go ahead and make the changes, without locking anything. A check at the end of the update tests whether the assumption of non-interference remains true. If the test fails, we roll back the changes and retry. Using Optimistic Synchronization we have implemented an optimistic queue, which we describe now.

3.2 Optimistic Queues

The queue manipulation example for optimistic synchronization is important because most of the Synthesis kernel data structures are queues.
Also, some of the control structures, such as chained interrupt and signal handlers, are implemented as queues of pointers to the routines. In other words, once we can synchronize queue operations without locking, most of the Synthesis kernel will run without locking. Although all the queues have the usual put-item (`Q_put`) and get-item (`Q_get`) operations, we classify them according to their operations environment. We have four kinds of queues: single-producer and single-consumer (SP-SC), multiple-producer and single-consumer (MP-SC), single-producer and multiple-consumer (SP-MC), and multiple-producer and multiple-consumer (MP-MC). The simplest case, SP-SC (Figure 1), gives the basic idea of all four queues: when the queue buffer is neither full nor empty, the consumer and the producer operate on different parts of the buffer. Therefore, synchronization is necessary only when the buffer becomes empty or full. The synchronization primitives are the usual primitives, say busy wait or blocking wait.

    next(x):
        if (x == Q_size-1) return 0;
        else return x+1;

    Q_get(data):
        t = Q_tail;
        if (t == Q_head) wait;
        data = Q_buf[t];
        Q_tail = next(t);

    Q_put(data):
        h = Q_head;
        if (next(h) == Q_tail) wait;
        Q_buf[h] = data;
        Q_head = next(h);

    Figure 1: SP-SC Queue

To argue the correctness of these queues, we need to show that these queues do not lose any items being put in or generate any items that have already been taken out. To avoid lost updates in the SP-SC queue, we use a variant of Code Isolation. Of the two variables being written, Q_head is updated only by the producer and Q_tail only by the consumer. To avoid taking out an item prematurely, we update Q_head at the last instruction during Q_put. Therefore, the consumer will not detect an item until the producer has finished. The difference between the SP-SC queue and the MP-SC queue reduces to a single compare-and-swap instruction at the end plus the retry loop, to ensure the synchronization of multiple producers.
(Larger critical sections may require more sophisticated synchronization.) A more interesting queue (shown in Figure 2) implements atomic inserts of many items (up to the size of the queue). Now we have two problems to consider: the multiple producer synchronization, solved by the compare-and-swap, and the atomic insert of multiple items, which we explain now. To minimize the synchronization among the producers, each of them increments atomically the Q_head pointer by the number of items to be inserted, "staking a claim" to its space in the queue. The producer then proceeds to fill the space, at the same time as other producers are filling theirs. But now the consumer may not trust Q_head as a reliable indication that there is data in the queue. We fix this with a separate array of flag bits, one for each queue element. As the producers fill each queue element, they also set a flag in the associated array indicating to the consumer that the data item is valid. The consumer clears an item’s flag as it is taken out of the queue. To give an idea of relative costs, the current implementation of MP-SC has a normal execution path length of 11 instructions (on the MC68020 processor) through Q_put. In the case where two threads are trying to write an item to a sufficiently empty queue, they will either both succeed (if they attempt to increment Q_head at different times), or one of them will succeed as the other fails. The thread that succeeds consumes 11 instructions. The failing thread goes once around the retry loop for a total of 20 instructions. 
    AddWrap(x,n):
        x += n;
        if (x >= Q_size) x -= Q_size;
        return x;

    SpaceLeft(h):
        t = Q_tail;
        if (h >= t) return t-h+Q_size;
        else return t-h-1;

    Q_put(data,H):
        do {
            h = Q_head;
            hi = AddWrap(h,H);
        } while (SpaceLeft(h) > H && cas(Q_head,h,hi) == FAIL);
        for (i=0; i<H; i++) {
            Q_buf[ AddWrap(h,i) ] = data[i];
            Q_flag[ AddWrap(h,i) ] = 1;
        }

    NOTE: cas(v,old,new) [compare-and-swap] performs the following
    operation atomically:
        if (v == old) { v = new; return OK; }
        else return FAIL;

    Figure 2: MP-SC Queue [Multiple Insert]

4 Threads

4.1 Synthesis Threads

Synthesis threads are light-weight processes. Each Synthesis thread (called simply “thread” from now on) executes in a context, defined by the TTE. The thread state is completely described by its TTE (see Figure 3), containing: the register save area; the vector table, which points to four kinds of procedures (thread-specific system calls, interrupt handlers, error traps and signal vectors); the address map tables; and the context-switch-in and context-switch-out procedures. Kernel code generated for a thread goes into a protected area to avoid user tampering. The kernel procedure bodies that make up part of the thread are:

- the signal, start, stop, step and destroy thread calls;
- the customized I/O system calls, synthesized by open (see Section 5);
- the synthesized interrupt handlers, such as for queue buffering (see Section 5.4);
- the specialized error trap handlers and the signal-me procedures (see Section 4.3).

Figure 3: Thread Context

When a Synthesis thread makes a kernel call, we say that the thread is executing in the kernel mode; this is in contrast to having a kernel server process run the kernel call on the behalf of the client thread. The trap instruction switches the thread into the supervisor state and makes the kernel quaspace accessible in addition to the user quaspace. Consequently, the kernel call may move data between the user quaspace and the kernel quaspace.
Since the other quaspaces are outside the kernel quaspace, were the thread to attempt access to an illegal address, it would take a bus-fault exception, even in the kernel mode. If a thread is not running, it is waiting. A waiting thread is blocked for some event or resource. Each resource has its own waiting queue. For example, a thread waiting for CPU is sitting in the ready queue; when the thread blocks for characters from a tty driver, it is chained to the tty driver queue. Spreading the waiting threads makes blocking and unblocking faster. Since we have eliminated the general blocked queue, we do not have to traverse it for insertion at blocking or to search it for deletion at unblocking. A waiting thread’s unblocking procedure is chained to the end of the interrupt handling, so each waiting queue has reduced synchronization due to Code Isolation.

4.2 Context Switches

Context switches are expensive in traditional systems like UNIX because they always do the work of a complete switch: save the registers in a system area, set up the C run-time stack, find the current proc-table and copy the registers into the proc-table, and start the next process, among other complications (summarized from source code [1]). A Synthesis context switch is shorter for two reasons. First, we switch only the part of the context being used, not all of it. Second, we use executable data structures to minimize the critical path. In two instances we can optimize the context switch by moving data only when they are used. The first is the handling of floating point registers and the second is the MMU address space switch. Most Synthesis threads do not use the floating point co-processor.
If we were to save all the floating point co-processor information at each context switch, the hundred-plus bytes of information would take about 10 microseconds to save to memory, which is comparable to the 11 microseconds needed to do an entire context switch without the floating point (see Section 6.3 for more data). Since most threads will not use the floating point co-processor, we generate the default context switch code without it. When the thread executes its first floating point instruction, an illegal instruction trap happens. Then the Synthesis kernel resynthesizes the context switch procedures to include the floating point co-processor. This way, only users of the floating point co-processor will pay for the added overhead. There is no “dispatcher” procedure in Synthesis. Figure 3 shows that the ready-to-run threads (waiting for CPU) are chained in an executable circular queue. A jmp instruction in each context-switch-out procedure of the preceding thread points to the context-switch-in procedure of the following thread. Assume thread-0 is currently running. When its time quantum expires, the interrupt is vectored to thread-0’s context-switch-out procedure (sw_out). This procedure saves the CPU registers into thread-0’s register save area (TT0.reg). The jmp instruction then directs control flow to one of two entry points of the next thread’s (thread-1) context-switch-in procedure, sw_in or sw_in.mmu. Control flows to sw_in.mmu when a change of address space is required; otherwise control flows to sw_in. The context-switch-in procedure then loads the CPU’s vector base register with the address of thread-1’s vector table, restores the CPU general registers, and resumes execution of thread-1. The context switch takes about 11 microseconds (see Table 4). This is a striking example of what can be achieved with optimization through synthesized code.

4.3 Thread Operations

As a quaject, the thread supports several operations.
Some of these operations are invoked by the hardware; the error trap handlers and the interrupt handlers fall into this category. Some of the operations are invoked by other threads; these are signal, start, stop, and step. We will briefly introduce these operations here and describe interrupt handling in Section 5.3. In Synthesis (as in many other systems), a signal is an asynchronous software interrupt sent by a thread (or interrupt handler) to another thread. A synthesized signal system call (the signal-me procedure) in the receiving thread calls the signal handler procedure. To run the signal handler in user mode and user quaspace, the signal system call alters the general registers area of the receiving thread’s TTE to make the receiving thread call the signal handler when activated. Thread control and debugger support are implemented with three synthesized system calls: stop, start, and step. The stop system call suspends execution of a thread by removing the thread’s TTE from the ready queue. The start system call puts the TTE back when invoked. The step system call causes a stopped thread to execute a single machine instruction and then stop again. The debugger runs as an asynchronous thread that shares the quaspace being debugged. An error trap is a synchronous hardware interrupt generated by illegal operations such as referencing non-existent memory or dividing by zero. Like other hardware interrupts, error trap handlers run in kernel mode. Unlike other hardware interrupts, error traps are synchronous, since they occur immediately after each illegal operation. Each thread may have its own error trap handlers. To allow arbitrarily complex error handling in user mode, we send an error signal to the interrupted thread itself. The error signal handler then runs in user mode (as described above).
To send this error signal, the error trap handler copies the kernel stack frame onto the user stack, modifies the return address on the kernel stack to the user error signal procedure, and executes a return from exception which takes the thread into the user error signal procedure. Synthesized for each thread at creation time, these error trap handlers consume about 5 machine instructions, supporting efficient emulation of unimplemented kernel calls or machine instructions. The UNIX emulator used for performance measurement is implemented with traps.

4.4 Scheduling

Currently, the Synthesis scheduling policy is round-robin with an adaptively adjusted CPU quantum per thread. Instead of priorities, Synthesis uses fine-grain scheduling, which assigns larger or smaller quanta to threads based on a “need to execute” criterion. A detailed explanation of fine-grain scheduling is beyond the scope of this paper; the idea and its implementation in Synthesis are described in detail in another paper [3]. Here, we only give a brief informal summary. In our directed graph model of computation (Section 2.1), a thread’s “need to execute” is determined by the rate at which I/O data flows into and out of its quaspace. Since CPU time consumed by the thread is an increasing function of the data flow, the faster the I/O rate the faster a thread needs to run. Therefore, our scheduling algorithm assigns a larger CPU quantum to the thread. This kind of scheduling must have a fine granularity, since the CPU requirements for a given I/O rate and the I/O rate itself may change quickly, requiring the scheduling policy to adapt to the changes. Effective CPU time received by a thread is determined by the quantum assigned to that thread divided by the sum of quanta assigned to all threads. Priorities can be simulated and preferential treatment can be given to certain threads in two ways: we may raise a thread’s CPU quantum, and we may reorder the ready queue when threads block and unblock.
As an event unblocks a thread, its TTE is placed at the front of the ready queue, giving it immediate access to the CPU. This way we minimize response time to events. To minimize time spent context switching, CPU quanta are adjusted to be as large as possible while maintaining the fine granularity. A typical quantum is on the order of a few hundred microseconds.

5 Input/Output

In Synthesis, I/O means more than device drivers. I/O includes all data flow among hardware devices and quaspaces. Data move along logical channels we call streams, which connect the source and the destination of data flow. The details of the stream model of I/O will be described in a separate paper [4]. Here we describe how the streams are implemented using the building blocks described in Section 2.3.

5.1 I/O Device Servers

Physical I/O devices are encapsulated in quajects called device servers. Typically, the device server interface supports the usual I/O operations such as `read` and `write`. In general, `write` denotes data flow in the same direction of control flow (from caller to callee), and `read` denotes data flow in the opposite direction of control flow (from callee back to caller). Each device server may have its own threads or not. A polling I/O server would run continuously on its own thread. An interrupt-driven server would block after its initialization. The server without threads wakes up when its physical device generates an interrupt. High-level servers may be composed of more basic servers. At boot time, the kernel creates the servers for the raw physical devices. A simple example pipelines the output of a raw server into a filter. Concretely, the Synthesis equivalent of the UNIX cooked `tty` driver is a filter that processes the output from the raw `tty` server and interprets the erase and kill control characters. This filter reads characters from the raw keyboard server through a dedicated queue.
To send characters to the screen, however, the filter writes to an optimistic queue, since output can come from either a user program or the echoing of input characters. The default file system server is composed of several filter stages. Connected to the disk hardware we have a raw disk device server. The next stage in the pipeline is the disk scheduler, which contains the disk request queue, followed by the default file system cache manager, which contains the queue of data transfer buffers. Directly connected to the cache manager we have the synthesized code to read the currently open files. The other file systems that share the same physical disk unit would connect to the disk scheduler through a monitor and switch. The disk scheduler then will redirect the data flow to the appropriate stream. With synthesized code, this pipeline has a very low overhead, as shown by the measurements in Section 6.

5.2 Producer/Consumer

The implementation of the stream model of I/O in Synthesis can be summarized using the well-known producer/consumer paradigm. Each stream has a control flow that directs its data flow. There are three cases of producer/consumer relationships, which we shall consider in turn. Perhaps the simplest case is an active producer and a passive consumer (or vice-versa). This case, called active-passive, has simple implementations. When there is only one producer and one consumer (single-single), a procedure call does the job. If there are multiple producers or consumers (multiple-single), we attach a monitor to the end with multiple participants to serialize their access. But the normal producer/consumer problem has both an active producer and an active consumer. This case, called active-active, requires a queue to mediate the two. For the single case, an SP-SC queue suffices. For the multiple case, we may attach a monitor to the multiple end, resulting in MP-SC or MP-MC queues.
Each queue may be synchronous (blocking) or asynchronous (using signals), depending on the situation. The last case is a passive producer and a passive consumer. One example is the `xclock` program, which has a clock producer ready to provide a reading at any time and a display consumer that accepts new pixels to be painted on the screen. In these cases, we use a `pump` queue that reads (the clock time) from the producer and writes the information (new pixels) to the consumer. This works for multiple passive producers and consumers as well. In summary, we have an efficient implementation for each case of the producer/consumer problem. Since the stream model of I/O can be easily described as a composition of producers and consumers through the three building blocks (switches, monitors, and queues), we have shown the generality of the Synthesis implementation. In practice, composing a new device server with these building blocks is straightforward.

5.3 Interrupt Handling

At the heart of an I/O device server is the interrupt handler. Interrupt processing combines some elements of procedure calls and others of context switches. Like a procedure call, an interrupt pushes the return address onto the currently executing stack. When the interrupt handling finishes, execution resumes from the interrupted instruction in the current thread. Like a context switch, an interrupt is unexpected and unrelated to the current thread. Furthermore, the interrupt handler temporarily changes the program counter and some general registers of the current thread, without receiving any arguments from or returning any results to the current thread. Synthesis interrupt handling differs from that of some traditional OSes (such as UNIX) in that each thread in Synthesis synthesizes its own interrupt handling routines, as well as its own system calls. These customized interrupt handlers and system calls may run much faster than general-purpose equivalents.
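The passive-passive arrangement described above amounts to an active loop standing between two passive quajects. A minimal Python sketch, with illustrative names (in Synthesis the loop would run on the pump queue's own thread; a bounded iteration count stands in for that thread here):

```python
def pump(read_source, write_sink, n_iterations):
    """Active 'pump' between two passive quajects: repeatedly pull a
    datum from the passive producer and push it to the passive consumer,
    e.g. read the clock and paint the corresponding pixels."""
    for _ in range(n_iterations):
        datum = read_source()   # passive producer answers when asked
        write_sink(datum)       # passive consumer accepts when offered
```

For instance, pumping three readings from a counter into a list: `out = []; c = iter(range(3)); pump(lambda: next(c), out.append, 3)` leaves `out == [0, 1, 2]`.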
Two examples of synthesized interrupt handlers are the timer interrupt that context-switches out the current thread (Section 4.2) and the analog-to-digital (A/D) buffered queue (Section 5.4). One way to increase the concurrency in the kernel is to push the bulk of interrupt processing (e.g., a character arrives at `/dev/tty`, to be inserted into the queue) into a separate thread created by the interrupt handler. However, in most cases the separate thread is uneconomical, since normal interrupts require very little processing. For the simple cases, the interrupt handler can run under the currently executing thread to avoid a context switch. We only have to take care to save and restore the few registers that the interrupt handler will use. During the (short) interrupt processing, higher-level interrupts may happen; as long as the interrupt handling is simple, the scenario repeats until eventually the highest-level interrupt processing completes and returns to the next level. Ultimately the entire stack of interrupts is handled. Even though the thread running the simple interrupt handler can take care of recursive interrupts, signals may cause synchronization problems. We have two choices to handle a signal in the middle of an interrupt: either we create a new thread to finish the interrupt, or we delay the processing of the signal. Delaying the signal costs less, since it bypasses the creation of a new thread, and it does not degrade system performance significantly, since the current interrupt handling should be quick. We use Procedure Chaining to delay the signal, linking the signal processing routine to the end of the interrupt handler. Each Synthesis thread has its own vector table, which points to routines servicing hardware interrupts, error traps, and system calls. Although in principle each thread may have a completely different set of interrupt handlers, currently the majority of them are shared by all threads.
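The Procedure Chaining idea can be sketched as follows. This is a Python illustration of the control-flow pattern only, with hypothetical names; the real mechanism links machine-code routines, not Python callables.

```python
class InterruptContext:
    """Sketch of Procedure Chaining: a signal arriving while an interrupt
    handler runs is not serviced immediately; its routine is chained to
    run when the handler finishes (cheaper than creating a thread)."""
    def __init__(self):
        self.in_interrupt = False
        self.chained = []          # deferred signal routines

    def deliver_signal(self, routine):
        if self.in_interrupt:
            self.chained.append(routine)   # delay until the handler returns
        else:
            routine()                      # normal case: run right away

    def run_interrupt(self, handler):
        self.in_interrupt = True
        handler()                          # the (short) interrupt body
        self.in_interrupt = False
        while self.chained:                # drain the chained procedures
            self.chained.pop(0)()
```

A signal delivered mid-handler thus runs strictly after the interrupt body, preserving the handler's assumption that it is not preempted by signal processing.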
System calls, however, are frequently customized for each thread. In particular, I/O operations such as read and write are synthesized by the open operation. As new quajects are opened (such as files, devices, threads, and others), the thread's system call vectors are changed to point to the synthesized procedures. At its creation, a thread's vector table is filled with a default set of system calls and error vectors that help debugging.

5.4 Optimizations

At boot time, the kernel uses Collapsing Layers to optimize the device servers. For example, instead of communicating with the raw tty through a pipe (as in the conceptual model), the cooked tty makes a procedure call to the raw tty to get the next character. This transforms an active-passive producer/consumer pair into a procedure call. Down the pipeline, the cooked tty actively reads and the tty device itself actively writes, forming an active-active pair connected by an SP-SC optimistic queue. Another optimization is the buffered queue. Usually, queue operations are cheap (a dozen instructions) compared to the processing time of each element in the queue. However, in the kernel we have cases of data movement that do very little work for each queue operation, so the queue operations become the main overhead. Buffered queues use kernel code synthesis to generate several specialized queue insert operations (a couple of instructions each); each moves a chunk of data into a different area of the same queue element. This way, the overhead of a queue insert is amortized by the blocking factor. For example, the A/D device server handles 44,100 (single-word) interrupts per second by packing eight 32-bit words per queue element (hardware described in Section 6.1).
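The amortization in a buffered queue can be sketched as below. This Python illustration shows only the bookkeeping pattern; the Synthesis version synthesizes one specialized couple-of-instructions insert per slot rather than branching at run time, and all names are hypothetical.

```python
class BufferedQueue:
    """Sketch of a buffered queue: the expensive per-element queue insert
    is paid once per BLOCKING_FACTOR cheap word-inserts, as in the A/D
    server that packs eight 32-bit samples per queue element."""
    BLOCKING_FACTOR = 8

    def __init__(self):
        self.elements = []   # completed queue elements
        self.current = []    # the element currently being filled

    def insert_word(self, word):
        # Cheap path: store one word into the current element
        # (a couple of synthesized instructions in the real kernel).
        self.current.append(word)
        if len(self.current) == self.BLOCKING_FACTOR:
            # Expensive path: a real queue insert, once per 8 words.
            self.elements.append(self.current)
            self.current = []
```

With a blocking factor of 8, the dozen-instruction insert cost is spread over eight interrupts, which is what lets the server keep up with 44,100 interrupts per second.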
Table 1: Measured UNIX system calls (in seconds). All columns are raw SUN data: user, system, and total CPU time reported by the time command, plus wall-clock (stopwatch) time.

<table>
<thead>
<tr>
<th>No</th>
<th>Descr.</th>
<th>user</th>
<th>sys</th>
<th>tot</th>
<th>watch</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>Compute</td>
<td>19.8</td>
<td>0.5</td>
<td>20</td>
<td>20.9</td>
</tr>
<tr>
<td>2</td>
<td>R/W pipes 1</td>
<td>0.4</td>
<td>9.6</td>
<td>10</td>
<td>10.2</td>
</tr>
<tr>
<td>3</td>
<td>R/W pipes 1024</td>
<td>0.5</td>
<td>14.6</td>
<td>15</td>
<td>15.3</td>
</tr>
<tr>
<td>4</td>
<td>R/W pipes 4096</td>
<td>0.7</td>
<td>37.2</td>
<td>38</td>
<td>38.2</td>
</tr>
<tr>
<td>5</td>
<td>R/W file</td>
<td>0.5</td>
<td>20.1</td>
<td>21</td>
<td>23.4</td>
</tr>
<tr>
<td>6</td>
<td>open null/close</td>
<td>0.5</td>
<td>17.3</td>
<td>17</td>
<td>17.4</td>
</tr>
<tr>
<td>7</td>
<td>open tty/close</td>
<td>0.5</td>
<td>42.1</td>
<td>43</td>
<td>43.1</td>
</tr>
</tbody>
</table>

6 Measurements

6.1 Environment

The current implementation of Synthesis runs on an experimental machine (called the Quamachine), which is similar to a SUN-3: a Motorola 68020 CPU, 2.5 MB of no-wait-state main memory, a 390 MB hard disk, and a 3½ inch floppy drive. In addition, it has some unusual I/O devices: two-channel 16-bit analog output, two-channel 16-bit analog input, a compact disc player interface, and a 2K×2K×8-bit framebuffer with a graphics co-processor. The Quamachine is designed and instrumented to aid systems research. Measurement facilities include an instruction counter, a memory reference counter, hardware program tracing, and a microsecond-resolution interval timer. The CPU can operate at any clock speed from 1 MHz up to 50 MHz. Normally we run the Quamachine at 50 MHz. By setting the CPU speed to 16 MHz and introducing one wait state into the memory access, the Quamachine can closely emulate the performance of a SUN-3/160.
We also have written a UNIX emulator running on top of the Synthesis kernel, which is capable of servicing SUNOS kernel calls. In the simplest case, the emulator translates the UNIX kernel call into an equivalent Synthesis kernel call. Otherwise, multiple Synthesis primitives are combined to emulate a UNIX call. With both hardware and software emulation, we run the same object code on equivalent hardware to achieve a fair comparison between Synthesis and SUNOS. All benchmark programs were compiled on the SUN 3/160, using cc -O under SUNOS release 3.5. The executable a.out was timed on the SUN, then brought over to the Quamachine and executed under the UNIX emulator. To validate our emulation, the first benchmark program is a compute-bound test of the similarity between the two machines. This test program implements a function producing a chaotic sequence [2]. It touches a large array at non-contiguous points, which ensures that we are not just measuring the "in-the-cache" performance.

6.2 Comparing Synthesis with SUNOS

The purpose of making the Synthesis hardware and software emulate the SUN 3/160 is to compare Synthesis with SUNOS kernel calls. Since the executables are the same, the comparison is direct. In Table 1 we summarize and compare the results of the measurements. The columns of raw SUN data were obtained with the time command and also with a stopwatch. The SUN was unloaded during these measurements, as time reported more than 99% CPU available for them. The Synthesis emulator data were obtained by using the microsecond-resolution real-time clock on the Quamachine, rounded to hundredths of a second. These times were also verified with a stopwatch (sometimes running each test 10 times to obtain a more easily measured time interval). The source code for the programs numbered 1 to 7 is included in Appendix A. Program 1 is the computation-intensive calibration function used to validate the hardware emulation.
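The two translation cases described above (one-to-one forwarding versus composing several primitives) amount to a dispatch table over system-call numbers. A minimal Python sketch of the idea; every primitive name here is hypothetical, standing in for synthesized Synthesis kernel calls:

```python
# Sketch of UNIX emulation by dispatch: a trapped UNIX system call is either
# forwarded to one equivalent Synthesis primitive or emulated by combining
# several primitives. All names below are illustrative stand-ins.

def synth_open(path):
    return ("opened", path)        # stand-in for a Synthesis primitive

def synth_read(fd, n):
    return ("read", fd, n)         # stand-in for a Synthesis primitive

def emulate_creat(path):
    # No single equivalent exists: compose open + truncate (illustrative).
    handle = synth_open(path)
    return ("truncated", handle)

UNIX_SYSCALLS = {
    "open": synth_open,      # simplest case: one-to-one translation
    "read": synth_read,      # simplest case: one-to-one translation
    "creat": emulate_creat,  # composed from several primitives
}

def unix_trap(name, *args):
    """Entry point reached when the emulated program traps into the kernel."""
    return UNIX_SYSCALLS[name](*args)
```

Table 2's "emulation trap overhead" row (2 microseconds) is exactly the cost of this extra dispatch layer in the real system.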
The calibration program shows the Synthesis emulator to be roughly 5% slower than a SUN 3/160. Recently we learned that the SUN 3/160 actually runs at 16.7 MHz, which is about 5% faster than 16 MHz. Programs 2, 3, and 4 write and then read back data from a pipe in chunks of 1, 1K, and 4K bytes. They show a remarkable speed advantage (56 times) for single-byte read/write operations. This is due to a combination of synthesized kernel calls, which are very short, and fine-grain scheduling, which reduces the average queue operation costs. When the chunk grows to page size, the difference is still very significant (4 to 6 times). The generated code loads long words from one quaspace into registers and stores them back in the other quaspace. With unrolled loops this achieves a data transfer rate of about 8 MB per second. Program 5 reads and writes a file (cached in main memory) in chunks of 1K bytes. This is the same program used in an earlier measurement of Synthesis [5] and shows some improvement in the current implementation of Synthesis. We include programs 6 and 7, which open/close /dev/null and /dev/tty, to show that Synthesis kernel code generation is very efficient. The open and close operations synthesize code for later read and write, yet they are 20 to 40 times faster than the UNIX open without code generation. Although this Synthesis file system is entirely memory-resident, the 10000 loops must have kept all the data pages in the SUNOS memory buffers, minimizing this difference. Table 2 contains more details of file system operations, which are discussed in the next section.

6.3 Synthesis Kernel Calls

To obtain direct timings of Synthesis kernel calls (in microseconds), we use the Synthesis kernel monitor execution trace, which records in memory the instructions executed by the current thread.
Using this trace, we can calculate the exact kernel call times by counting the memory references and each instruction's execution time. Tables 2 to 5 show the timings calculated for SUN emulation mode. (When running at full speed at 50 MHz, the actual performance is about three times faster.)

<table>
<thead>
<tr>
<th>operation</th>
<th>native time (usec)</th>
<th>UNIX emulation (usec)</th>
</tr>
</thead>
<tbody>
<tr>
<td>emulation trap overhead</td>
<td>-</td>
<td>2</td>
</tr>
<tr>
<td>open (/dev/null)</td>
<td>43</td>
<td>49</td>
</tr>
<tr>
<td>open (/dev/tty)</td>
<td>62</td>
<td>68</td>
</tr>
<tr>
<td>open (file)</td>
<td>73</td>
<td>85</td>
</tr>
<tr>
<td>close</td>
<td>18</td>
<td>22</td>
</tr>
<tr>
<td>read 1 char from file</td>
<td>9</td>
<td>10</td>
</tr>
<tr>
<td>read N chars from file</td>
<td>9*N/8</td>
<td>10*N/8</td>
</tr>
<tr>
<td>read N from /dev/null</td>
<td>6</td>
<td>8</td>
</tr>
</tbody>
</table>

(*) Data already in kernel queues or buffer cache.

Table 2: File and Device I/O (in microseconds)

<table>
<thead>
<tr>
<th>operation</th>
<th>time (usec)</th>
</tr>
</thead>
<tbody>
<tr>
<td>create</td>
<td>142</td>
</tr>
<tr>
<td>destroy</td>
<td>11</td>
</tr>
<tr>
<td>stop</td>
<td>8</td>
</tr>
<tr>
<td>start</td>
<td>8</td>
</tr>
<tr>
<td>step</td>
<td>37</td>
</tr>
<tr>
<td>signal</td>
<td>8 (thread to thread)</td>
</tr>
</tbody>
</table>

Table 3: Thread Operations (in microseconds)

In Table 2 we have more file and device I/O operations. These operations are the native Synthesis file and device calls. A comparison of the native mode and the emulator mode shows the cost of UNIX emulation in Synthesis. Worth noting in Table 2 is the cost of open. The simplest case, open (/dev/null), takes 49 microseconds, of which about 60% is used to find the file (hashed string names, stored backwards) and 40% for code synthesis (read and write null). The additional 19 microseconds in opening /dev/tty come from generating real code to read and write.
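Table 2's `read N chars from file` entry of `9*N/8` microseconds is an amortized-cost formula: each queue element carries a block of characters (a blocking factor of 8, matching the buffered queues of Section 5.4) and costs about as much as a single-character read. A tiny model, simplified by treating N as a multiple of the blocking factor:

```python
def read_cost_usec(n_chars, per_element_usec=9, blocking_factor=8):
    """Amortized cost model behind Table 2's 9*N/8: each queue element
    carries `blocking_factor` characters and costs `per_element_usec`.
    (Simplified: assumes N is a multiple of the blocking factor.)"""
    return per_element_usec * n_chars / blocking_factor
```

So reading 8 characters costs about as much as reading 1 (9 usec), and the per-character cost drops to roughly 1.1 usec as N grows.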
Finally, opening a file implies synthesizing more sophisticated code and buffer allocations (17 additional microseconds). In Table 3, we see that Synthesis threads are light-weight: less than 150 microseconds creation time. Of these, about 100 microseconds are needed to fill the approximately 1 KB TTE, and the rest are used by code synthesis. The short time to start, stop, and step a thread makes it possible to trace and debug threads in a highly interactive way.

<table>
<thead>
<tr>
<th>operation</th>
<th>time (usec)</th>
</tr>
</thead>
<tbody>
<tr>
<td>Full context switch</td>
<td>11 (**)</td>
</tr>
<tr>
<td>Full context switch</td>
<td>21 (with FP registers)</td>
</tr>
<tr>
<td>Partial context switch</td>
<td>3</td>
</tr>
<tr>
<td>Block thread</td>
<td>4</td>
</tr>
<tr>
<td>Unblock thread</td>
<td>4</td>
</tr>
</tbody>
</table>

(**) If the thread does not use the Floating Point co-processor.

Table 4: Dispatcher/Scheduler (in microseconds)

<table>
<thead>
<tr>
<th>operation</th>
<th>time (usec)</th>
</tr>
</thead>
<tbody>
<tr>
<td>Service raw TTY interrupt</td>
<td>16</td>
</tr>
<tr>
<td>Service raw A/D interrupt</td>
<td>3</td>
</tr>
<tr>
<td>Set alarm</td>
<td>9</td>
</tr>
<tr>
<td>Alarm interrupt</td>
<td>7</td>
</tr>
<tr>
<td>Chain to a procedure</td>
<td>4 (if no re-try)</td>
</tr>
<tr>
<td>Chain to a procedure</td>
<td>7 (with 1 re-try)</td>
</tr>
<tr>
<td>Chain (signal) a thread</td>
<td>9 (delayed interrupt)</td>
</tr>
</tbody>
</table>

Table 5: Interrupt Handling (in microseconds)

In Table 4 we see the context switch times consumed by the dispatcher. Again we note that these timings are achieved with generated code (executable data structures, in this case). The separation between using and not using the floating-point co-processor shortens the main critical path (explained in Section 4.2). Table 5 shows some timings for interrupt handling, alarm setting and handling, and signaling. For example, raw tty interrupt handling simply picks up the character.
Attentive readers will have noticed that our measured figures are faster than typical run-time library routines would allow. For example, naive implementations of memory allocation, block copy, and string comparison would have slowed down our system considerably. In Synthesis, the memory allocation routine is an executable data structure implementing a fast-fit heap [6] with randomized traversal added. The block copy as used in read has been outlined in Section 6.2. The string comparison was mentioned as part of the open earlier in this section.

6.4 Kernel Size

The Synthesis kernel is written in 68020 assembly language, which is used as a fast prototyping language. This may sound peculiar, since people usually use high-level programming languages for fast prototyping. However, given the lack of support for efficient dynamic code synthesis in particular and efficient static code generation in general, we were unable to find a suitable compiler. We are actively pursuing the design and implementation of a high-level programming language for the development of the next-generation Synthesis. A rough breakdown of the kernel shows about 3000 lines of code for the raw device drivers (TTY, disk, A/D and D/A, graphics), 1000 lines for the quaject creator and interfacer, 1000 lines for the templates used in code synthesis (e.g., queues, threads, files), 1000 lines for utilities and the shared library (e.g., printf), and about 5000 lines for the kernel monitor with high-level debugging and programming tools. The whole kernel assembles to 64 KB, of which 32 KB are the kernel monitor. There is some concern about kernel size when using code generation, since many little functions can add up to a lot of memory. However, there are space advantages to code generation as well. While it is true that a Synthesis kernel running several hundred threads, each having many open files, can use more memory than a UNIX system running a similar load, such heavily loaded systems are not normally seen.
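The kind of constant-time common-case allocation the paper alludes to can be sketched with size-segregated free lists. This is a loose Python illustration of fast allocation in that spirit, deliberately simplified; it is not a reproduction of the fast-fit algorithm cited as [6], and all names are hypothetical.

```python
class QuickListAllocator:
    """Loose sketch of a fast-allocation heap: size-segregated free lists
    make the common case a constant-time pop from a list of recycled
    blocks. A simplification for illustration, not the fast-fit scheme."""
    def __init__(self, sizes=(16, 32, 64, 128)):
        self.free_lists = {s: [] for s in sizes}  # size class -> free addresses
        self.next_addr = 0                        # bump pointer for fresh blocks

    def _size_class(self, n):
        return min(s for s in self.free_lists if s >= n)

    def alloc(self, n):
        s = self._size_class(n)
        if self.free_lists[s]:
            return self.free_lists[s].pop()       # fast path: reuse a freed block
        addr, self.next_addr = self.next_addr, self.next_addr + s
        return addr                               # slow path: carve a new block

    def free(self, addr, n):
        self.free_lists[self._size_class(n)].append(addr)
```

The fast path costs a handful of operations regardless of heap size, which is the property that keeps allocation off the measured critical paths above.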
On a lightly loaded system, the static kernel size dominates any space allocated dynamically. This is where Synthesis excels. With 3 processes running, the Synthesis kernel occupies only 32 KB. As more threads are created and more files opened, the space requirements go up. However, the small static space required for the kernel means that you can run Synthesis on small, PC-like computers and embedded industrial controllers, two application areas that are unlikely to have much more than a few tens of threads running simultaneously. On the other hand, if you have a machine that can run 300 jobs concurrently, then you probably have the extra memory space to run them well.

7 Conclusion

We expect the techniques described here to be useful to operating system implementors. Specifically, we have used kernel code synthesis, optimistic synchronization, and fine-grain scheduling to increase OS kernel concurrency and efficiency in the implementation of the Synthesis kernel support for threads and input/output. To achieve very high performance in Synthesis, we repeatedly apply the principle of frugality, which says that we should use the simplest abstraction and the cheapest implementation for each case. Given the level of abstraction of the Synthesis kernel interface (all references to kernel data structures or algorithms eliminated), we can then use sophisticated algorithms to implement this interface. Although we use many different tricks to speed up the Synthesis kernel, their common theme is the simplification of the normal case, as dictated by the principle of frugality. Kernel code synthesis shortens the normal execution path by binding the system state early; subsequent kernel calls simply jump into the generated routines, avoiding repeated traversal of the system state. At code generation time, we also apply known compiler optimization techniques, such as constant folding and common sub-expression elimination.
This is applied throughout Synthesis, including threads and input/output. Reduced synchronization shortens the critical path through careful set-up and exception handling. For example, we have implemented queue operations using only optimistic synchronization. Since almost all of the Synthesis kernel data structures are queues, the kernel basically runs without any interlocking. We expect this to be especially important when we move Synthesis to a multiprocessor, as it is designed for. Combining kernel code synthesis and optimistic synchronization, we have achieved very high performance compared to mature, commercial systems. For example, using a UNIX emulator running on a hardware emulator of the SUN 3/160 to run the same binary executable, Synthesis performance (for I/O and threads) is several times to several dozen times better than SUNOS. Since optimistic synchronization is best suited for a multiprocessor and fine-grain scheduling for a distributed system, we expect more performance gains when we run Synthesis in those environments.

8 Acknowledgments

This work is partially funded by the New York State Center for Advanced Technology on Computer and Information Systems under the grant NYSTF CU-0112580, by the AT&T Foundation under a Special Purpose Grant, and by the National Science Foundation under the grant CDA-88-20754. We gladly acknowledge the hardware parts contributed by AT&T, Hitachi, IBM, and Motorola. Finally, very special thanks go to John Ousterhout, our "shepherd" SOSP program committee member, who helped shape both the style and the content of this paper.

References

[1] SUNOS release 3.5 source code.
[2] Gödel, Escher, Bach: An Eternal Golden Braid.
[3] Fine-grain scheduling. In Proceedings of the Workshop on Experience in Building Distributed Systems, Asilomar, California, October 1989.
[4] Model of computation in Synthesis. Technical Report CUCS-383-88, Department of Computer Science, Columbia University. In preparation.
[5] The Synthesis kernel.
[6] Fast fits.
[7] Hydra: The kernel of a multiprocessing operating system.

A Measurement Programs
Supporting Developers' Coordination in the IDE

Anja Guzzi, Alberto Bacchelli
Delft University of Technology, Delft, The Netherlands
{a.guzzi, a.bacchelli}@tudelft.nl

Yann Riche
Microsoft, Redmond, WA, USA
yannr@microsoft.com

Arie van Deursen
Delft University of Technology, Delft, The Netherlands
{arie.vanDeursen}@tudelft.nl

Published 2015. DOI: 10.1145/2675133.2675177 (accepted author manuscript).

ABSTRACT
Teamwork in software engineering is time-consuming and problematic. In this paper, we explore how to better support developers' collaboration in teamwork, focusing on the software implementation phase, which happens in the integrated development environment (IDE). Through a qualitative investigation, we learn that developers' teamwork needs mostly concern coordination rather than concurrent work on the same (sub)task, and that, while developers successfully deal with scenarios considered problematic in the literature, they have problems dealing with breaking changes made by peers on the same project. We derive implications and recommendations. Based on one of the latter, we analyze current IDE support for receiving code changes, finding that historical information is neither visible nor easily accessible. Consequently, we devise and qualitatively evaluate BELLEVUE, the design of an IDE extension that makes received changes always visible and code history accessible in the editor.

Author Keywords
Developers' coordination; IDE extension; qualitative study.

ACM Classification Keywords
D.2.6 Software Engineering: Programming Environments

INTRODUCTION
Software engineering is often a team effort.
It is not unusual for hundreds of professionals to collaborate to design, build, evaluate, and maintain software systems [71]. However, teamwork remains one of the most difficult and pervasive problems of software engineering, and developers face a plethora of teamwork problems at different levels [18]. Key to this teamwork are the tools and processes that revolve around communication, source code management, and development. Most of developers' time is spent within the integrated development environment (IDE) [48]; thus, researchers are trying to leverage IDEs by augmenting their collaborative capabilities (e.g., [17, 27, 32, 33]). Nevertheless, the IDE remains a tool that primarily helps individual programmers be more effective in the classical edit-compile-run cycle [73]. In this paper, we explore how to better support collaboration in teamwork within the IDE. Our research is set up in two phases: an exploratory investigation, followed by the design and evaluation of a medium-fidelity click-through prototype. In our investigation, we explored how developers in large development teams experience collaboration, and we identified the problems they face when working in teams. To this end, we (1) conducted a brainstorming session with 8 industry experts working on the development of IDE solutions at Microsoft; (2) identified three core opportunities for the design of improved collaborative solutions, which we refined through semi-structured interviews with developers; and (3) interviewed 11 professional developers with various degrees of experience and seniority, from 9 different companies, to contextualize those opportunities. We report how, while our participants collaborate with others on code, they spend a limited amount of time actively engaged in collaboration. As a consequence, one of the core issues that emerges revolves around managing dependencies between activities, rather than working together at the same time on the same (sub)task.
Our interviews with developers also confirm that issues arise due to the lack of information (i.e., imperfect information) when working on shared code, and they uncover existing strategies to prevent or resolve such issues. Dependencies are often mediated through code, in the form of code changes. Yet, our investigation illustrates how dealing with code changes remains a common source of issues, despite existing tools and strategies, when working on the same project. This emerges as one of the main causes of frustration in interviewees' experience of teamwork. From our findings we derive implications and recommendations for improving coordination in modern IDEs. In the second phase of this study, we focus on an investigation of how to improve the handling of a team's code changes from within the IDE. Using common usability heuristics [56], we describe opportunities to better support teamwork in the IDE by supporting information needs about change history. Consequently, we leverage this analysis to: (1) derive requirements for an IDE extension, (2) describe the design of BELLEVUE, a prototype fulfilling these requirements, and (3) iteratively evaluate the BELLEVUE design with 10 senior developers from different companies and backgrounds. BELLEVUE makes incoming code changes always visible during development and eases the use of that history in the context of the developer's tasks and flows. It shows historical information within the active code editor to let users modify the code without a context switch. To achieve this, BELLEVUE takes already available historical change data and offers an interactive view that shows detailed historical information for files and code chunks with respect to a previous version.

*Anja Guzzi was an intern with the User Experience Team, Microsoft Developer Division, Microsoft, Redmond, USA, in the summer of 2012, when this work was carried out.
EXPLORATORY INVESTIGATION: METHODOLOGY
In this section, we define the scope of our research and our research methodologies (illustrated in Figure 1), divided into the following main steps: brainstorming, semi-structured interviews, and data analysis with card sorting.

Scoping
We scoped our initial focus by tapping into the collective knowledge of eight industry experts engaged in the design and implementation of development tools (including one of the co-authors, and the first author as a research intern). To do so, we organized a brainstorming session on the challenges and opportunities revolving around team development. The brainstorming led to the identification of the following areas for further investigation: awareness (i.e., knowing other people's work to anticipate issues and to proactively seek out information when needed), work dependencies (i.e., anticipating and understanding who I depend on and who depends on me), and breaking changes (i.e., anticipating what and who will be impacted by the breaking change I am about to release). As we will explore in more depth later, these areas describe situations where a lack of information can lead to resource- and time-consuming situations. Such situations are common not only in development, but in teamwork in general, where imperfect information is the norm [38]. Consistently, the literature reports that developers often have difficulties facing questions such as: "What have my coworkers been doing?" "How have resources I depend on changed?" "What information was relevant to my task?" "What code caused this program state?" [7, 26, 45, 68] In our work, we iterate on this by providing a more in-depth view of some of those specific problems, and by illustrating and validating ways of addressing them within the flow of activities developers engage in.
Semi-structured Interviews

To gather more details about the context in which those issues emerge, and current strategies for addressing them, we conducted semi-structured interviews [50] with professional developers. In total, we interviewed 11 developers varying in experience, team size, and company. Table 1 summarizes the interviewees' characteristics. We conducted each 90-minute interview over the phone, and transcribed the interviews for later analysis. After each interview, we analyzed the transcript and split it into smaller coherent units (i.e., blocks expressing a single concept), which we summarized using an interview quotation or an abstractive sentence. In addition, the interviewer kept notes (i.e., memos) of relevant and recurring observations in a document that was iteratively refined and updated. Out of the interviews, 56 memos emerged. <table> <thead> <tr> <th>ID</th> <th>Overall experience</th> <th>In current team</th> <th>Role</th> <th>Team Size</th> </tr> </thead> <tbody> <tr> <td>D1</td> <td>7.5 years</td> <td>4.5 months</td> <td>dev</td> <td>4</td> </tr> <tr> <td>D2</td> <td>10 years</td> <td>6 years</td> <td>dev lead</td> <td>7</td> </tr> <tr> <td>D3</td> <td>2 months</td> <td>2 months</td> <td>junior dev</td> <td>4</td> </tr> <tr> <td>D4</td> <td>1.5 years</td> <td>1.5 years</td> <td>sq1 dev</td> <td>5</td> </tr> <tr> <td>D5</td> <td>20 years</td> <td>10 years</td> <td>senior dev</td> <td>4</td> </tr> <tr> <td>D6</td> <td>25+ years</td> <td>7 years</td> <td>senior dev</td> <td>15</td> </tr> <tr> <td>D7</td> <td>1 year</td> <td>1 year</td> <td>dev</td> <td>5</td> </tr> <tr> <td>D8</td> <td>5 years</td> <td>5 years</td> <td>dev lead</td> <td>5</td> </tr> <tr> <td>D9</td> <td>15 years</td> <td>6 years</td> <td>senior dev</td> <td>11</td> </tr> </tbody> </table> Table 1. Interviewed developers

To analyze our data, we created 562 cards from the transcribed coherent units and the memos.
Each card included: the source (transcript/memo), the interviewee's name (if from the transcript), the unit/memo content, and an ID for later reference. We used card sorting [69] to extract salient themes, leveraging a combination of open and closed card sorts. After macro categories were discovered, we re-analyzed their cards to obtain a finer-grained categorization. Finally, we organized the categories using affinity diagramming [52].

EXPLORATORY INVESTIGATION: DATA

In this section, we present the results of our exploratory investigation. When quoting raw data, we refer to individual cards using a [DX.Y] notation, where X represents the developer and Y (optional) the card (e.g., [D2.03] refers to card 03 from the interview with D2).

Teamwork from the developers' perspectives

According to the interviewees, collaboration in teamwork is defined by a wide spectrum of activities:

Communication - Collaboration is communication: As D8 said: "It is all about communication. If you have good communication, you have good collaboration." Communication can be both one-to-one and one-to-many, can be formal or informal, and goes through different channels. Channels are "conventional," such as email and instant messaging (IM), but also more domain-specific ones, such as interactions via project management tools (e.g., source code management systems and requirement tracking systems) and source code; as D4 explains: "[to communicate] typically we use [IM], but we also have an internal wiki that we use". [D4.02, D7.09, D8.(01,02)]

Helping each other by sharing information - As D9 said: "Collaboration is just sharing information and ideas." In particular, according to interviewees, collaboration means being proactive and sharing useful information to make it easier for others to complete their work (e.g., "make [the acquisition of information] as easy as possible on the other coworkers, so that they don't have to struggle" [D7]).
This aspect of collaboration involves voluntarily sending notifications (e.g., "FYI—for your information" messages) and starting discussions (e.g., "let's coordinate on this change I need to make") [D7]. Resource sharing involves not only knowledge of the actual source code of the project, but also information from external resources, for example about new technologies or coding style; as D9 stated: "We also like send each other things, like, style tips and like interesting articles about how other companies do things". [D2.02, D7.(02,03,06), D9.(01,04,05)]

[Figure 1. The research method applied in the first phase: brainstorming (21 areas of investigation, leading to the selected areas), semi-structured interviews (9 participants, producing interview transcripts and memos), and data analysis with card sorting (56 memos and 506 units from the transcripts).]

Knowing what others know - Collaboration, from the interviewees' perspective, also means staying aware of the experts on the different parts of the system (i.e., "the domain experts") and understanding artifacts and code changes done by colleagues. D11 explains: "[collaboration] is keeping track of what everybody is working on and being able to know how the different pieces are in place". According to interviewees, knowing what the others know has the aim both of preventing problems and of reacting faster when they occur. [D2.01, D4.03, D7.04–07, D11.47]

Working on the same goal, doing different things - Overall, developers see collaboration as working toward the same goal (e.g., product releases), by acting in parallel on different parts of the same project (e.g., working separately on different code artifacts): "Collaboration is we are all working towards the common goal, but maybe working on different parts of it, but these components do interact" [D7]; "[Collaborating meant] we divided up the work […], we went off these directions, and as things started to merge together, we go on [merging] on a case by case base." [D3].
[D1.2, D3.01, D4.01, D7.01, D9.(02,06)]

**Dealing with imperfect information in teamwork**

To investigate how developers deal with imperfect information, we outlined three concrete teamwork scenarios in which the existence of imperfect information can generate problems. The scenarios were derived from teamwork situations commonly described as problematic in the literature.

**Inefficient Task Assignment**

**Scenario.** One developer is assigned to a task, while another is already working on a similar or related task. This introduces inefficiencies in the team (and potential collisions).

**Related literature.** Software engineering researchers have recognized task partition and allocation as important endeavors in the context of teamwork [44, 46]. If dependencies and common traits among tasks are not correctly handled, developers can become less efficient and generate unexpected conflicts [37]. The literature suggests different techniques, with varying results, for efficient task assignment (e.g., [16, 24]). In particular, the assignment of bug fixes (or of new features to implement), from a repository of issues or requests to the most appropriate developers, is one of the main instances of task assignment investigated by researchers [1]. Researchers reported that bug triaging is a time-consuming task, which requires non-trivial information about the system, and that it often leads to erroneous choices [2]. A number of techniques have been devised to tackle the triaging problem, e.g., [53, 41].

**Interviews' outcome.** Although they considered this scenario realistic, our participants did not see it as a core issue. In fact, teams' task assignment processes are in general considered effective in preventing the problematic scenario from taking place.
In some teams, supervising figures (e.g., managers) do the task assignment (e.g., "[a new task] goes to a manager, who decides whom to assign" [D8], and "the boss will tell you about a task [to do]" [D9]); in other teams, tasks are divided during team meetings, with various degrees of developer interaction (e.g., "we are using daily SCRUM meetings" [D1], and "we break up the code, and if the [task] is in your code, it's yours" [D5]).

**Simultaneous Conflicting Changes**

**Scenario.** Developers find themselves in a situation where there is a merge conflict (i.e., when different people are touching the same code at the same time).

**Related literature.** A recent significant effort in the software engineering research community is devoted to detecting concurrent modifications to software artifacts (e.g., [35, 36, 59, 65]). In fact, developers making inconsistent changes to the same part of the code can cause merge conflicts when the changes are committed to the code repository, which leads to wasted developer effort and project delays [13, 42, 66]. Grinter conducted one of the first field studies investigating developers' coordination strategies [29]. She observed that it is sometimes difficult for developers (even expert ones) to merge conflicting code without communicating with the other person who worked on a module. Later, de Souza et al., in a study of the coordination practices of a NASA software team, observed that developers in some cases think individually, trying to avoid merging, while in others they think collectively, by holding check-ins and explaining their changes to their team members [21].

**Interviews' outcome.** Our participants reported only rarely encountering a situation where more than one person was working on the same file at the same time ("we don't run into those situations a lot" [D2]).
Most of our participants' teams were organized to make developers work on different parts of the system, with a key person, typically a lead developer or an architect, in charge of coordinating development to prevent those issues. Some participants also used technical solutions to avoid concurrent edits (e.g., "We put a lock on [the file], so it does not get edited by others" [D1]). When a merge conflict happens, our participants reported resolving it quickly and easily (e.g., "The best and quickest solution you have is to undo, we roll back and [fix it]" [D1]; "typically, it is solved really quick" [D2]), and often using merging tools (e.g., "we don't have to do much manually" [D8]). Although automatic merging was used, our participants also explained that they manually checked each conflict, revealing that it is not entirely trusted.

**Breaking changes**

**Scenario.** A developer or team changed some parts of the code, rendering the work of another developer or team unusable or obsolete.

**Related literature.** Researchers consider breaking changes problematic, not only for the developers who receive a change and have to understand it and adapt their code, but also for the developers who are making a change that might break the functionality of many clients [3]. The literature includes investigations (e.g., [22]) into the effects of dependencies that break other people's work, and proposes methods to address the resulting problems, at the scale of both single systems (e.g., [10, 72]) and ecosystems (e.g., [32, 61]).

**Interviews' outcome.** The reaction of our participants differed according to the origin of the breaking changes. When the origin was considered external to the team or company, or when participants felt they had the opportunity neither to intervene nor to display their disappointment to 'the breaker', they accepted the breaking changes without strong negative emotions, seeing them rather as inevitable.
This happened even when resolving the issue might take a long time (e.g., "more than year" [D2]) or when it resulted in large operational or maintenance costs (e.g., "this [break] was costing the company many thousands of dollars per minute." [D7]). However, when the origin was internal to the company/team, participants reported strong negative emotions (e.g., frustration). This seemed in part due to the mismatch between the communication expected from being in the same company/team and the "waste" of time spent finding the cause of an issue that might in turn be resolved relatively quickly (e.g., "I spend a couple of hours to find out the error [...] fixed in 5 minutes." [D3] and "I spent a day fixing the problem I spent three days finding." [D8]). Generally, breaking changes leading to syntactical errors were not considered an issue, because they could easily be spotted with static analysis tools (e.g., pre-compilers) and fixed. Still, those particular breaks were considered a direct consequence of a lack of coordination effort from the person introducing the breaking code [D1]. Some of our participants insisted that breaking changes whose origin is internal to the team or company should be handled more smoothly and proactively. For example, some would prefer stricter rules to avoid internal breaking changes: "people breaking other's people code [...]. I'd like to see management being more rigorous about it" [D8].

**Receiving a code change**

Managing internal breaking changes is the most problematic scenario that emerged from our analysis. In this section, we analyze how developers deal with changes made by peer developers working on the same project. Our participants reported that they investigated changes in the code-base when they were notified of them (e.g., via automatic emails from versioning systems). However, they mostly did so to discover whether the changes had an impact on their own code.
Rather than building a holistic view of the whole code base, they use this approach to discover whether the changes will impact their own work. In doing so, they first need to assess how relevant a change is to their current or upcoming work, to decide whether or not to investigate further. On some rare occasions, developers use this opportunity to explore others' work not for its impact on theirs, but from a style/aesthetics perspective, looking at coding styles, norms, approaches, and solutions, especially when the changes come from a respected colleague (for learning/inspiration) or a novice (for peer reviewing). When our participants reported discovering an error caused by a change made by someone else, their most prominent complaint regarded the lack of coordination that they felt should have accompanied the change (e.g., they would have expected a "heads up" email). However, in the case of clear syntactic errors (e.g., compilation errors, or runtime errors generating a stack trace), participants did not feel the kind of frustration they expressed in the case of semantic errors (e.g., caused by a library that changes some sorting order, therefore impacting the dependent code) or unclear alterations in behavior. In fact, semantic errors required participants to perform a deeper and more time-consuming analysis to understand their cause. [D3.(47,49), D8.(28,29,36)] Once they found the cause, they proceeded to assess the impact on their code by, for example, counting how many files or lines of code need to be changed (as D8 explained: "I measure the impact of a change [looking at] how many files/lines it affects. A few? Hundreds?"). Usually, the developers receiving the breaking change were those automatically responsible for adapting and fixing their own code.
However, when a change had a deep impact on the code-base and required more information about the change (e.g., its rationale) and about the code-base itself, developers usually wanted to contact the author. Participants also reported that, when the change introduced a defect, those receiving it were responsible for deciding whether to file an issue report against the change to its author. Some participants mentioned that the lack of testing contributes to faulty changes being committed to the repository (e.g., "we are really bad at testing [...], you pull and you get a file you try to run and it fails" [D9]; "If I'd tested it better, I wouldn't have put [this code] in the build" [D5]). Nevertheless, they also warned that running all the tests for each change would be too expensive ("all tests, to run them all, it would take 3 weeks. Unfeasible to take 3 weeks for each check in" [D6]). They also warned that testing performed on one setup might be unreliable on a different one ("we test and it's all good, but then they test on their end and it might break [...]. It's something to do with customizing." [D2]), and that many semantic changes could not be detected by tests ("even if there are tests that check [most] things, you'd still end up with edge cases. [...] You still need to see that you break, and then react, and then fix it" [D6]).
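D8's impact heuristic (counting how many files and lines a change affects) can be sketched as a small parser over `git diff --numstat`-style output; the sample listing and file names below are made up for illustration:

```python
def change_impact(numstat: str) -> tuple[int, int]:
    """Summarize a `git diff --numstat`-style listing into
    (files touched, total lines added + removed)."""
    files = 0
    lines = 0
    for row in numstat.strip().splitlines():
        added, removed, _path = row.split("\t")
        files += 1
        # Binary files are reported as "-"; count them as 0 changed lines.
        lines += 0 if added == "-" else int(added)
        lines += 0 if removed == "-" else int(removed)
    return files, lines

sample = "12\t3\tsrc/Parser.cs\n0\t47\tsrc/Lexer.cs\n-\t-\tdocs/diagram.png"
print(change_impact(sample))  # (3, 62)
```

A tool could feed the real diff of an incoming change into such a summary to answer D8's "A few? Hundreds?" question at a glance.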
**EXPLORATORY INVESTIGATION: INTERPRETATION**

**Teamwork Collaboration Is Coordination**

The terminology used in many disciplines [51] defines coordination as "managing dependencies between activities," and collaboration as "peers working together." In this light, what participants consider collaboration in teamwork is mostly coordination, needed to organize individual work toward achieving a shared goal. By analyzing the data from the interviews, coordination emerged—after individual work—as the dominant interaction level when working in a team, rather than collaboration. In particular, our participants described that:

1. They spend most of their time doing individual work;
2. Most of their interaction is to coordinate (e.g., through daily stand-ups);
3. In their work, collaboration happens infrequently and on an as-needed basis (e.g., with (bi-)weekly sprint meetings);
4. Most of the time, their intention for collaboration is coordination leading to individual work.

By abstracting the explanations of interviewees, we model developers' interaction at three levels (from the lowest to the highest degree of interaction): individual work, coordination, and collaboration. Individual work corresponds to no interaction (e.g., a developer working on her own), while collaboration means developers working together at the same time on the same (sub)task (e.g., pair programming). Coordination is an intermediate level of interaction, where developers interact but are not working together on the same (sub)task (e.g., during task assignment). An activity is a working state that can be placed as a point on the corresponding level of interaction. In a set of activities done to accomplish a certain subtask (i.e., a 'working situation'), particular events often serve as a transition between levels of interaction, for example, steps from individual work to coordination (e.g., “[when a file is locked] we just [ask]: ‘hey what are you working on?
And then when you think I can do it?’ to the author.” [D2.10]), and from coordination to collaboration (e.g., "sometimes we […] get together and talk about [similar things], then realize how we can do these things together and do them together" [D11.45]). Figure 2 depicts our model of developers' interactions.

[Figure 2. Model of developers' interactions in working situations]

Figure 2 shows two working situations. In the first (WS1), a developer doing individual work asks another developer to make a change in their code (e.g., "I asked one of the guys: '[…] I need a method that would return [a special object], can you write [it] for me?' He was able to write [it] and knew exactly where to go" [D3.(09,15)]). This requires going from individual work (A1) to coordination (A2) when asking the other to make the change, and back to individual work (A3) when they reach an agreement, without reaching a state of collaboration. In the second situation (WS2), two developers decide to work together on a subtask. This requires moving from individual work (A4) to coordination (A5) when they decide, then to collaboration (A6) for the actual work. The steps between the different levels of interaction in the model are not necessarily discrete: intermediate interaction levels can be reached. For example, while the activity of task assignment can generally be placed on the coordination level, when the task assignment is discussed together in a meeting, it can be put on a level between coordination and collaboration.

**Implications**

Our participants report that most of their time is spent doing individual work, while, unexpectedly, they report spending very little time collaborating on the same subtask. A direct consequence is that interactions revolving around coordination are a more actionable area, with better research opportunities and greater potential impact, than areas considering purely the collaboration aspects.
For example, better support for communication would have more relevance than tools for concurrent real-time coding. In addition, techniques for supporting information sharing among developers should take into account that developers spend most of their time doing individual work. Considering that most of this individual work is spent in the development environment (the IDE) [48], tools that support coordination within the IDE have the potential for greater impact.

**The role of information.** In our study, we uncovered how available information was pivotal in transitioning between levels of interaction (Figure 3). This happened when our participants acted on information they had already acquired earlier, reacted to incoming information, or set out to gather new information, for example, through communication or by understanding code changes done by colleagues. Researchers have been studying the importance of information sharing for teamwork over the years from different angles. For example, Cataldo et al. introduced the notion of socio-technical congruence, and provided evidence that developers who share information when making changes in dependent software components complete tasks in less time [15]. Other researchers similarly showed that missing information correlates with coordination problems and project failures [14, 63, 47]. Ko et al. analyzed the information needs of developers in collocated teams and found that they have difficulties answering questions related to colleagues' work and software component dependencies [45]. In our interviews, developers reported knowing how to deal with the investigated scenarios involving imperfect information, except when they received an internal breaking change. We suggest that this is connected to how easy it is to access the information they need to address the problem.
Analyzing the ways developers and teams successfully deal with a condition of imperfect information, we see that the solutions to the problematic scenarios require information to be shared in two ways: (1) via direct communication (e.g., during a meeting), and (2) by making it visible (e.g., in a tool). <table> <thead> <tr> <th>Scenario</th> <th>Needed information communicated</th> <th>Needed information visible</th> </tr> </thead> <tbody> <tr> <td>Task assignment</td> <td>✓</td> <td>✓</td> </tr> <tr> <td>Simultaneous changes</td> <td>❌</td> <td>✓</td> </tr> <tr> <td>Breaking changes</td> <td>❌</td> <td>❌</td> </tr> </tbody> </table> Table 2. Information in investigated scenarios

Table 2 shows that for the non-problematic scenarios, the needed information is communicated or visible. In the case of task assignment, the inefficiencies are avoided by centralizing the task assignment to the team leader, who has all the information "visible" in mind, or by conducting group meetings in which the information is communicated. Other researchers report evidence of this behavior: Begel et al. described how industrial program managers and developers hold regular team meetings to effectively prioritize bugs and to coordinate component completion schedules [8], and Guzzi et al. reported that when open source software developers meet in person, they manage to advance the coordination of their projects better [31]. In the case of simultaneous changes that were not avoided by team policies (i.e., through modularization and technical locks), the information necessary to solve the merge conflict is immediately visible through the merge tool. In their analysis of parallel development practices, de Souza and Redmiles similarly reported that issues were averted through the mediation of configuration management tools [22]. In the case of breaking changes, we suggest that the needed information is neither communicated in time nor easily accessible/visible.
As a result, developers can spend a long time finding the information they need to coordinate. This is in agreement with other studies that report how breaking changes are due to missing information and lead to significant loss of time (e.g., [61]).

**Implications**

Our analysis showed that the effort spent in gaining the information developers are missing can be a source of negative emotions. This underlines the importance of information sharing practices, both active (e.g., communicated) and passive (e.g., visible via a tool). Researchers proposed a number of tools (e.g., Palantir [65] and FASTDash [9]) to detect merge conflicts and tested them in laboratory settings with seeded conflicts. These tools helped developers spend less time resolving conflicts and encouraged communication. An interesting avenue for future research is to verify the overall impact of these tools on teams whose structure maps the software architecture, as our participants reported not encountering this issue. In addition, in most of our investigated scenarios, we observed that—unexpectedly—developers already had means to deal with missing information, or did not consider these scenarios as issues. In contrast, the study by de Souza and Redmiles highlights the significant differences between two unrelated companies in how they deal with the management of dependencies and the underlying information sharing [22]. This suggests that what is considered a critical issue for one company or project may not be important for another. As a consequence, when investigating potential problems generated by a lack of information, it is important to first study whether and how the target developers already employ methods to supply this missing information.

**Changes and dependencies**

As de Souza and Redmiles explained: "it is not possible to study changes in software systems without studying dependencies" [22].
In this light, our analysis of coordination and receiving changes is related to the work by Begel et al. [8] and by de Souza and Redmiles [22]. Begel et al. conducted a survey of Microsoft developers, testers, and program managers to see how they coordinate on dependencies (e.g., tasks) within the same team and among different teams. The study reported that most Microsoft developers (63%) minimize code dependencies to mitigate problems. This is similar to our interviewees, who use software modularity to avoid inefficient task assignment or merge conflicts. Similarly to our findings, Begel et al. also reported that lack of communication often led to coordination problems, and that email is the common means by which developers keep track of dependencies. In contrast, our study outlines the difference between internal and external dependencies and changes. Begel et al. found that internal dependencies are managed by "send[ing] an email and pay[ing] a personal visit to the person blocking their work" [8], and the surveyed developers did not report any negative emotion. Our findings underlined that, in the case of internal breaking changes, the process preceding the communication with the person blocking the work (i.e., finding the source of the problem) is a cause of dissatisfaction and frustration when the expected communication did not take place. Moreover, the two studies present different definitions of external dependencies and breaking changes: (1) according to Begel et al., dependencies are 'external' if they involve different teams within the same company, with which it is possible to communicate personally; (2) according to our findings, dependencies are 'external' if they involve different companies, with which it is extremely difficult to communicate. In the former case, Begel et al. reported that developers have to maintain personal communication with external teams to remain updated about changes, and that unexpected changes from external teams generate anxiety.
In the latter case, our interviewed developers did not report anxiety (even though unexpected changes happen and lead to loss of time), but rather acceptance of the situation as part of the natural business model of the industry. In their work, de Souza and Redmiles investigated the strategies software developers use to handle the effects of software dependencies and changes in two industrial software development teams [22]. Both teams deal with internal dependencies according to our definition. One team (MVP) allows parallel development and the modularity of the system is low; the other team (MCW) focuses on modularity by using a reference architecture. Our interviewed developers have complaints similar to those in the MCW team, which also has strikingly similar practices: in both studies these teams avoid inefficient task assignment with modularity, their developers have problems identifying their impact network (they do not know who is consuming their code or whether changes can modify the components they depend on), and they are only interested in changes in the particular parts of the architecture that impact them. Moreover, developers in both MCW and our study expect major changes to be accompanied by notifications about their implications, yet are worried about the information overload resulting from too frequent notifications. Conversely, the MVP practices seem to align with our participants' description of an ideal scenario, where emails are sent to update about changes, everybody reads notification emails, management urges developers to notify about breaking changes, and such emails even suggest courses of action to minimize the impact. As a result, despite the parallel development, coordination in MVP seems smoother than in our developers' experiences.
One important characteristic of MVP, mentioned by de Souza and Redmiles, is that most developers have worked on the project for at least two years, and their experience could also be the cause of the difference with MCW, which is a newer project. Our results, though, do not seem to corroborate this hypothesis, since interviewed developers reported similar issues regardless of project maturity and personal experience. Our additional analysis of code changes looks at coordination from a low-level perspective; we found that most of the information developers need to coordinate is typically available, but not necessarily accessible.

**Implications**

Our study confirms that lack of coordination leads to the late discovery of unexpected errors or behaviors in the code, followed by a time-consuming analysis to find the code changes that are the source of the issue. This calls for better support for coordination when developers make and receive changes, and for when they need to investigate a change to determine its impact. As the existing body of research suggests, impact analysis and support for change understanding in software development remain problematic. Research prototypes have not yet reached widespread usage in the IDE, and our findings underline the substantial practical relevance of further research in these areas. The differences in coordination practices between our interviewees' teams and the MVP team described by de Souza and Redmiles [22] are an interesting avenue for future research. In fact, a compelling hypothesis is that the modularity adopted by our interviewees' teams and MCW could create asymmetries in engineers' perceptions of dependencies [30], thus being at the basis of the differences and generating the reported coordination issues. By investigating how developers currently handle received code changes in the IDE, we realized that they do many tasks manually, and spend considerable effort collecting and remembering change information.
The data that would help developers in their tasks is available (e.g., data recorded by versioning systems), but not easily accessible. This implies that better support for integrating change information in the IDE is needed, and that it would benefit both development and coordination.

DESIGNING AND EVALUATING BELLEVUE

Building upon our findings from the interviews, we aimed to design a tool to help developers anticipate, investigate, and react to breaking changes in a collaborative development environment. Figure 4 outlines the process.

Design requirements

We first analyzed the current approaches for receiving changes in the IDE in the light of widespread usability heuristics [56] (Point 1 in Figure 4). We found several unmet heuristics that, together with the data collected in the exploratory investigation, we used as a basis to derive requirements for our IDE extension to improve receiving changes and support teamwork (Point 2).

Recognition over recall

"Memory for recognizing things is better than memory for recalling things" [49]. Once a developer decides to merge a received change with the local version, the information about the integrated change disappears. For this reason, when developers encounter a bug, they must recall which changes occurred and whether any of them could have generated the problem. One participant explained that the frustration when he encounters a bug comes from "figuring out where the problem is: Trying to figure out what really has changed" [D5]. When looking for the cause of a bug, developers' memory can be aided by tools to navigate change history, but existing tools require switching away from the current development context and typically present the information outside of it.

Visibility of system status

"The system should always keep users informed about what is going on" [56].
Once changes are integrated, development tools provide no distinction between code already present before the merge and the newly integrated code. Therefore, there is no clear visibility of the system status with respect to its history. "It's kind of impossible to know every single line of code that everybody else on your team changed" [D3]. While historical information is available, it typically resides in dedicated tools or views, out of the current development context; thus the status is neither self-evident nor easily accessible: "there isn't really an easy method […] that let you see [that] these ten files are different from what you had in your current time" [D5].

Clearly marked exits

"A system should never capture users in situations that have no visible escape" [55]. In software engineering, code repositories typically provide a change history that gives developers an escape: if they find something not working after they merged some changes into their local working copy, they can roll back to the status prior to the merge. The problems with this approach are that (1) the exits are not evident, and (2) the exit strategy is binary. The first issue means developers sometimes do not realize that their problems could be addressed by undoing the merge, instead of trying to find an error in their own code. The second issue means that developers can only undo all the merged changes at once, although the error can be caused by a mistake in a small fraction of the changed code. Once the code is rolled back, developers have to reconsider all the undone changes, figure out which ones could have caused the error, without having the full IDE context at their disposal but only the change information, and then integrate all the unrelated changes back again. D1 explained: "It's a loss of time, we have to roll back, figure out [what the problem was], and roll again.
It's a loss of time, definitely."

Help and documentation

"[Documentation should] be easy to search, focused on the user's task, list concrete steps to be carried out, and not be too large" [56]. In development processes, documentation also consists of the explanations software developers write as they commit their changes to the shared repository. It also includes other sources of information, such as descriptions of work items or bugs, stored in bug management or work item management tools. These pieces of information are accessible to the developer, and the commit messages are available to inspect upon receiving code changes, but once the changes are integrated they disappear, unless the user performs a number of steps navigating the history of the code in specialized windows or applications. Additionally, comments in the code commits and in the work management tools are often disjointed. For example, D5 complained: "When you get the latest [changed files] you get tons of files"; he found it very difficult to search for the necessary help or information due to information overload. Finally, when developers integrate more than one commit into their local copy, they often see only the last commit message, even though a line of code could have been changed several times between their local copy and the current status.

Help users recognize, diagnose, and recover from errors

Current code change management in IDEs makes it difficult to recognize and diagnose errors generated by integrated code changes, because the changes are not visible and the history has to be analyzed outside of the current development context. One interviewed developer explained that, despite the availability of external history tools, "one of the problems is trying to figure out what really has changed [and] what's the impact on your code" [D5]. In fact, as D3 explained, external tools are not helpful because "version control gives you a list of files that changed and not the specific lines".
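The complaint that version control lists changed files but not the specific lines can be addressed by parsing unified diff output. The following is a minimal sketch of that idea, not BELLEVUE's actual implementation, and it simplifies the diff format (no renames, no binary files):

```python
import re

# Hunk headers look like "@@ -12,3 +14,5 @@"; group 1 is the first
# line number of the hunk in the *new* version of the file.
HUNK_RE = re.compile(r"^@@ -\d+(?:,\d+)? \+(\d+)(?:,\d+)? @@")

def changed_lines(unified_diff):
    """Map each file in a unified diff to the line numbers that were
    added or modified in the new version, so a tool can point at
    specific lines instead of just listing changed files."""
    result = {}
    current_file, new_line = None, 0
    for line in unified_diff.splitlines():
        if line.startswith("+++ "):
            current_file = line[4:].removeprefix("b/")
            result.setdefault(current_file, [])
        elif (m := HUNK_RE.match(line)):
            new_line = int(m.group(1))
        elif current_file is None:
            continue  # still in the preamble before the first file
        elif line.startswith("+"):
            result[current_file].append(new_line)
            new_line += 1
        elif not line.startswith("-"):
            new_line += 1  # context lines advance the new-file counter
    return result
```

A gutter decoration like the one described in the next sections only needs this file-to-lines mapping plus the current editor viewport.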
Seeing exactly which part changed, and how, takes many steps. Moreover, the only possibility to recover from errors is a complete undo of the merged changes, while modifying a small part of the code might be enough to fix the error.

System design requirements

To address the current concerns with imperfect or missing information in development tasks, we suggest the following requirements for development tools:
1. Received code changes should always be visible.
2. Information should be provided in context, both semantic (code) and procedural (history, project), without undue actions by the user, at both the project and file level.
3. The history of code chunks should be easily accessible, possibly using progressive disclosure to prevent information fatigue.
4. Error identification and diagnostics should be supported through a fluid integration of code history.
5. Code changes should be reversible at the sub-file level.
6. Context switches to acquire the knowledge necessary to solve a task should be avoided.

Prototype and evaluation

Consequently, we devised an IDE extension, named BELLEVUE, to fulfill the requirements outlined above and to serve as a tool to explore our preliminary design ideas (Point 3 in Figure 4). The prototype allowed us to communicate our ideas to various experienced designers and practitioners at Microsoft, and to get their feedback, reveal early problems, and improve the initial concept (Point 4 in Figure 4). We devised a detailed storyboard including a high-fidelity prototype of BELLEVUE (Point 5). This was implemented as a PowerPoint presentation with a sequence of believable steps of interaction with the prototype. Each step was devised to let the participants of the evaluation phase observe what was happening, explain what they would do, and describe the effects they would expect as a consequence of their actions.
We used this prototype to evaluate BELLEVUE with professional software developers, using the RITE (Rapid Iterative Testing & Evaluation) method [S4], to identify problems in the prototype, quickly fix them, and then empirically verify the efficacy of the fixes (Point 6). Participants in the RITE study were selected from a population with the following characteristics: more than three years as a professional developer, more than one year in the current company, and more than three months in the current team. Moreover, interviewees had to spend at least 20 hours per week writing and editing code, their team had to use a source control management system, and they had to have at least browsed the change history, encountered a change merge, or used the file diff comparison view in the month before the RITE. Evaluation invitees were thanked for their participation with a gratuity in the form of Microsoft software. Each session occurred in a full usability lab on the Microsoft campus, and was video recorded for later analysis. To mitigate the moderator acceptance bias [28], we explained that the researcher guiding the session did not create the product. Moreover, to mitigate any social desirability bias [28], and to encourage discussion, the storyboard plot described the actions taken by a proxy developer named James. Following the storyboard plot presented by the slides and the researcher, participants were asked to follow a think-aloud protocol and to indicate what they saw, would do, and would expect as a result of their actions on each screen page. After 9 iterations we reached a stable and validated design. At the end of the process (Point 7 in Figure 4), we had: (1) the finalized BELLEVUE prototype, (2) a set of changes to implement that were not eventually integrated, and (3) a set of candidate aspects to be investigated as future work.
We designed BELLEVUE as a prototype code editing and navigation solution: lightweight and ready to use, without requiring developers to change their working style. It takes the historical change information that is already available, but currently neither visible nor easily accessible, and displays it in a non-obtrusive way. BELLEVUE offers an interactive view that shows detailed historical information for files and specific chunks with respect to a previous version. We detail the features of BELLEVUE, as they were at the end of the RITE phase, and the feedback from participants (referred to as R1–9). The final version of the slide-deck used in the RITE is available as a file accompanying this paper.

**Recognizable changed files and blocks**

BELLEVUE decorates changed files with an arrow (Figure 5, Point 1) and denotes changed lines with a blue-colored sign, both at a fine granularity (Point 2), to see them in the context of the current text window, and at a coarser one (Point 3), to see them in the context of the entire file. One can decide (Point 4) to see only the files that were just merged into the current local version. This design supports recognition over recall: once new changes are merged into the local version, their traces remain visible. It also enhances the visibility of the system status with respect to changes.

RITE participants' feedback—All the participants appreciated this feature. In particular, they liked that it helps filtering out irrelevant information when looking for the reason of an error that could have been introduced by a received change: "Knowing what I can ignore is huge; the larger the project, the more beneficial it comes" [R1].
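The coarser markers (Point 3) require projecting changed line numbers onto an overview strip that represents the whole file. A simple sketch of that mapping follows; the function name and parameters are illustrative, not BELLEVUE's actual API:

```python
def overview_marks(changed, total_lines, bar_height):
    """Project 1-based changed line numbers onto an overview bar of
    bar_height pixel rows, so changes anywhere in the file are visible
    even when the editor shows only a small window of it."""
    if total_lines == 0:
        return []
    marks = set()
    for line in changed:
        # Proportional position of the line within the file,
        # clamped to a pixel row of the overview bar.
        marks.add((line - 1) * bar_height // total_lines)
    return sorted(marks)
```

Nearby changes collapse onto the same pixel row, which is exactly the coarse-grained behavior an overview strip wants.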
Concerning the way in which changes are made recognizable, some users did not find it intuitive or appropriate: "I'd prefer a bar or something much more visible [than a blue-colored sign] to see that it's different" [R2]. Nevertheless, after they continued in the scenario and experienced the following features of BELLEVUE, they withdrew their concerns. Some participants suggested letting users personalize the color that denotes changes; other participants suggested using different colors to clearly distinguish added, removed, or modified lines, as currently happens in tools that display code differences.

1 Also available at: http://www.st.eil.tudelft.nl/~guzzi/
2 This color was chosen because it is currently considered a neutral color in the IDE, as opposed to green or red, which are often associated with versioning systems or debuggers.

**Visible changes' effect**

To show the effect of a change in the code, the user can hover over any colored block to see the latest changes. For example, in Figure 6, the user decided to look at the changed block that was not visible in Figure 5. Then, by hovering over the colored sign on the left (Point 5), (s)he can see the effect of that change: the argument of the RemoveAt method call has changed (Point 6), and the Refresh method call has replaced a call previously present on the same object (Point 7).

**RITE participants' feedback**—This feature was introduced in the third iteration of the tool, after considering the feedback received from the first participants. As an example, one participant had some expectations when hovering over the lines indicating a change: "toggle to highlight what's different from the last version, to quickly diagnose, I don't need a true side by side" [R3]. Once introduced, this feature was well received by all the remaining participants (e.g., "ok, good!
I can see here how [this part] changed!" [R6]), because it also helps with the progressive disclosure of information about the changes: users can quickly verify whether the changes seem relevant and, only if necessary, investigate further.

**Accessible historical details**

In BELLEVUE the user can see the code history of any block that was changed with respect to the previous local version. This is achieved with one click on the colored sign on the left of the block of interest. For example, in Figure 7, the user decided to further inspect the history of lines 142–143 because they led to an unexpected behavior. Once the block is clicked, a pane appears from the bottom (Point 8): it contains the historical details of the changes that happened to that block since the user's last update. Each item represents a change and shows the changed code with respect to the previous commit (Point 9), the commit message documenting it (Point 10), and the contact information of the change author (Point 11). The changed code only regards the chosen block, but it is possible to extend it by clicking on the '...' before and after. It is also possible to inspect earlier history (Point 12).

**RITE participants' feedback**—As for the other steps, before showing what would happen next, the interviewer asked the participants how they would interact with the design and what their expectations would be. In particular, for this feature, the interviewer asked what participants expected would happen by clicking on the colored sign on the left (Point 5). In this way, we learned that the participants wanted something similar to a diff with the previous version (e.g., "I'd do a compare versus the previous version, and just look at those particular changes" [R3]). The BELLEVUE solution was thus very much appreciated, and it often exceeded their expectations: "All the details!
This is exactly what I was looking for: It tells me who […] and it tells me what did each one, and how long ago!" [R1]; "oh I see, so this is exactly what I was looking for. It's even better!" [R8]. Seeing the version that could have introduced the error (i.e., #9044) acted as a clearly marked exit: some participants considered recovering from the error by reverting that particular change, because that would not imply reverting an entire, more complex change set. Through the iterations, we added a clickable revision number (to open a panel showing all the changes in a revision) and a hovering function to show the full commit comment. Participants' suggestions that we did not include in the iterative evaluation, due to time reasons, mostly regarded the possibility of selecting specific parts for which to see the history, instead of the contiguous block (e.g., "I want to see the whole function [history, by] right clicking on a function" [R2]).

**Editable code**

BELLEVUE allows editing code while reviewing the history (Figure 8), because it integrates history within the active editing context. It also highlights the new code differently (Point 13 in Figure 9) and automatically adds a new item to the list (Point 14) to put it in its historical context. This differs from current approaches for visualizing history, which involve opening a distinct context or application, and do not make it possible to edit code (e.g., to fix a bug) and see history in the same context at the same time.

**RITE participants' feedback**—This feature was very well received by all the participants. In particular, many were positively surprised and realized the possibilities of having code changes better integrated in the IDE: "I have a diff view, but I am not trapped in that […] I got my editor and my diff view, so the split view is very very helpful […]. Let me do what I want to do, while looking at the information I needed to make my change" [R1]; "Now that I see, I know what is happening […].
That is intuitive to me: Just clicking, edit, and go" [R7]. They also appreciated the immediate feedback of the change in the local history (Figure 9): "Oh, I like it shows it's local" [R4].

Figure 9. New local change added to history

**Contacting a change's author**

The author's photo and contact pane is inspired by CARES [32], a tool to help developers discover and choose relevant colleagues to speak with when seeking information or coordinating action. In BELLEVUE, the communication icons are for contacting the author of a given commit. Figure 10 shows the email template automatically generated by clicking on the email icon for commit #9044: it includes the diff with the previous commit, for easier reference.

Figure 10. Contacting the author of a change from the IDE

**RITE participants' feedback**—The social interaction within the view was extremely well received by all the participants. They especially appreciated the possibility of quickly using email and IM: "I really like that. I'd click on chat" [R6]. When discussing the email they would write to the author of the buggy change, they all specified the things they would ideally like to see in the email, and when they saw it, they liked how it included everything they wanted: "That is perfect. [It is] exactly what I would have sent" [R1]. However, some would have liked to obtain the diff view as BELLEVUE shows it in the IDE, while "now it's like standard diff" [R6]. Participants' suggestions not integrated for time reasons were: adding an "email all" feature, changing the email title to give information about the method and class in which the new change takes place, supporting copy and paste from history to email, and adding communication clients (e.g., IRC or Skype).
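Assembling an email template like the one in Figure 10 amounts to filling a message skeleton with the commit metadata and the diff text. The sketch below is only an illustration of that idea; the function name, fields, and wording are our assumptions, not BELLEVUE's implementation:

```python
def compose_change_email(author, revision, commit_message, diff_text):
    """Build (subject, body) for a plain-text email asking the author
    of a revision about an unexpected behavior, quoting the diff with
    the previous commit for easier reference.  All names and wording
    here are illustrative."""
    subject = f"Question about revision #{revision}"
    body = (
        f"Hi {author},\n\n"
        f"I noticed an unexpected behavior that may be related to "
        f"revision #{revision} (\"{commit_message}\").\n"
        f"Could you take a look?\n\n"
        f"Diff with the previous revision:\n"
        f"{diff_text}\n"
    )
    return subject, body
```

Pre-filling the diff spares the recipient a round trip to the repository, which is the point participants made about the generated template.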
Evaluation debriefing

After each RITE session, participants filled in two short questionnaires about their experience with the tool: a System Usability Scale (SUS) questionnaire [11] and a proprietary 7-point Likert scale questionnaire used as a standard at Microsoft. The SUS answers were overall positive: the mean SUS score is 85.1 (answers had $\sigma = 0.66$, on average), which is considered a high value across different domains [5, 6, 67]; as an example, the statement "I think that I would like to use this product frequently" scored 4.7/5.0 ($\sigma = 0.50$). The proprietary survey was equally positive: the mean score was 5.4/7.0 (the higher the better: items only included positive wordings [40]), with $\sigma = 1$ on average. For example, participants gave 5.4/7.0 ($\sigma = 1.13$) to the statement "This product has powerful functionality and excels at what it was designed for", and "This product is something I am likely to share information about" scored 5.9/7.0 ($\sigma = 0.78$).

COLLABORATIVE SOFTWARE DEVELOPMENT TOOLS

Coordination in software development has been studied in the fields of Software Engineering and Computer Supported Cooperative Work since the 1980s, and researchers have produced a wide range of analyses and tools [62]. BELLEVUE uses historical change information to support developers' coordination. Sarma et al. present a comprehensive review of coordination tools and define a framework that classifies these technologies according to multiple coordination paradigms [66]. In this framework, tools such as versioning systems and issue tracking systems support the development process and are at the basis of more sophisticated tools that provide meaningful and automatically aggregated information: these are research prototypes and industrial applications conceived to better support developers' coordination in the IDE.
Such tools include full-fledged platforms, specific workspace awareness solutions, information discovery approaches, and code information visualization tools. Full-fledged platforms, such as Jazz [39] and Mylyn [25], are at the far end of the spectrum in terms of complexity [66], and aim at transforming the IDE experience. Jazz, or Rational Team Concert, is an IDE platform, built on top of Eclipse and Visual Studio, that integrates several aspects of the software development process, including integrated planning, tracking of developer effort, project dashboards, reports, and process support. Relations between artifacts can be defined and leveraged to gather project information. Jazz also offers support for communication within the IDE (e.g., instant messaging) that is more advanced than BELLEVUE's. Mylyn and its successor, Tasktop Dev [70], are based on Eclipse and Visual Studio and use task context to improve the productivity of developers and teams [43]: for example, they reduce information overload by providing developers with just the artifacts and information necessary for their current code modification task, and offer a comprehensive task repository to support teamwork by sharing information on tasks and their context. Both platforms support the creation of novel information (e.g., tasks and work items, and relations among artifacts) to support developers' productivity, and encourage a task or work item based approach to evolution. BELLEVUE, instead, aims at using already available data and visualizing it in a non-obtrusive way. Another example of improved communication in the IDE is REmail [4], which integrates developers' email communication in the IDE to support program comprehension; REmail could be used in conjunction with BELLEVUE to extend the communication features of the latter.
Workspace awareness solutions, such as Palantir [64], Lighthouse [19], CollabVS [23], Syde [34], and Crystal [12], are concerned with changes before they are committed to the source code repository, in order to support conflict detection or provide real-time development information. For example, Syde tracks fine-grained real-time changes and alerts developers in the code editor and in a dedicated view when potential conflicts are emerging. Given the goal of these tools, and differently from BELLEVUE, they do not show change history related information. Interestingly, the BELLEVUE design could be included in environments such as Mylyn and Jazz, and could be used concurrently with workspace awareness tools, in order to offer coordination support from a complementary perspective. Information discovery approaches, such as Ariadne [20] and Tesseract [63], seek and assemble information to perform tasks such as expert finding and socio-technical network analysis. Recommender systems, such as Seahawk [57, 58], exploit change information and externally generated data to support software development and comprehension. Similarly to BELLEVUE, some of these approaches also use historical code information to inform their users; given their goal, they offer different, complementary views on the data and on the integration with the development environment. Code information visualization tools include the "blame" functionality offered, for example, by git or svn. This feature makes it possible to see who made the last change on each line of code of a file, and when. Another tool is the concept presented by Rastkar and Murphy, in which the developer can see a summary of the commit messages connected to a line of code in the IDE [60]. In contrast, BELLEVUE offers an interactive view that shows detailed historical information for specific chunks with respect to a previous version.
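The "blame" idea mentioned above can be sketched as follows: walk the versions of a file from oldest to newest and record, for every line, the last commit that changed its content. Real `git blame` also tracks line insertions and deletions across versions; the illustration below deliberately assumes a fixed line count and is not how git implements it:

```python
def naive_blame(versions):
    """versions: list of (commit_id, lines) pairs from oldest to newest,
    where every version has the same number of lines.  Returns, for each
    line of the newest version, the commit that last changed it."""
    n = len(versions[0][1])
    blame = [None] * n
    prev = [None] * n  # content of each line in the previous version
    for commit_id, lines in versions:
        for i, text in enumerate(lines):
            if text != prev[i]:
                blame[i] = commit_id  # this commit (re)wrote line i
        prev = lines
    return blame
```

A per-line display like BELLEVUE's gutter is then a join of this vector with commit metadata (author, message, timestamp).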
BELLEVUE always displays which files and lines changed, so it does not require the developer to actively ask for the commit message of a line, since the developer may not be aware of the relevance of the file and the line in the first place. In our exploratory investigation, narrowing down a breaking change to the file and line causing the issue emerged as one of the most problematic and time-consuming efforts for developers.

FINAL REMARKS

In our study we explored how to support developers' collaboration in teamwork. We focused on teamwork in the software implementation phase, which takes place in the IDE, and we conducted a qualitative investigation to uncover actionable areas for improvement. We identified internal breaking changes as one of the most important areas for improvement, because current IDE support for receiving changes is not optimal. Consequently, we designed BELLEVUE to help developers coordinate better, by making historical information visible and more accessible in the IDE. Overall, this paper makes the following main contributions:
1. A qualitative analysis indicating that teamwork needs mostly regard coordination, that developers are able to face scenarios considered problematic in the literature, and that dealing with breaking changes is hard, but only generates frustration if the breaker is internal to the project.
2. Recommendations on how to improve collaboration in teamwork in the software implementation phase, such as focusing on interactions revolving around coordination rather than on collaboration on the same (sub)task.
3. Requirements for a tool to support teamwork, based on currently unmet usability heuristics and the results of our qualitative analysis; for example, to favor recognition of code changes over recall, and to increase the visibility of the codebase status with respect to received changes.
4. The design and evaluation of BELLEVUE, an IDE extension to support teamwork by improving the integration of code changes in the IDE.
BELLEVUE makes received changes visible inside the editor, and makes the history of code chunks easily accessible using progressive disclosure.

ACKNOWLEDGMENTS

We want to express our gratitude to the anonymous reviewers, whose valuable comments significantly helped to improve the paper. We warmly thank Andrew Begel for his first-class feedback on the revision of this paper, and Monty Hammontree for his support during Anja's internship.

REFERENCES
Work Practices and Challenges in Pull-Based Development: The Contributor's Perspective

Georgios Gousios
Radboud University Nijmegen
Nijmegen, the Netherlands
g.gousios@cs.ru.nl

Margaret-Anne Storey
University of Victoria
BC, Canada
mstorey@uvic.ca

Alberto Bacchelli
Delft University of Technology
Delft, the Netherlands
a.bacchelli@tudelft.nl

ABSTRACT

The pull-based development model is an emerging way of contributing to distributed software projects that is gaining enormous popularity within the open source software (OSS) world. Previous work has examined this model by focusing on projects and their owners—we complement it by examining the work practices of project contributors and the challenges they face. We conducted a survey with 645 top contributors to active OSS projects using the pull-based model on GitHub, the prevalent social coding site. We also analyzed traces extracted from the corresponding GitHub repositories. Our research shows that: contributors have a strong interest in maintaining awareness of project status to get inspiration and avoid duplicating work, but they do not actively propagate information; communication within pull requests is reportedly limited to low-level concerns and contributors often use communication channels external to pull requests; challenges are mostly social in nature, with most respondents reporting poor responsiveness from integrators; and the increased transparency of this setting is a confirmed motivation to contribute. Based on these findings, we present recommendations for practitioners to streamline the contribution process and discuss potential future research directions.

Categories and Subject Descriptors

D.2.7 [Software Engineering]: Distribution, Maintenance, and Enhancement—Version control; D.2.9 [Software Engineering]: Management—Programming teams

Keywords

pull-based development, open source contribution, pull request, distributed software development, GitHub

1. INTRODUCTION

Distributed software development projects employ collaboration models and patterns to streamline the process of integrating incoming contributions [36]. The pull-based development model is a recent form of distributed software development [25] that is gaining tremendous traction in the open source software (OSS) world.

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org. ICSE '16, May 14 - 22, 2016, Austin, TX, USA. © 2016 Copyright held by the owner/author(s). Publication rights licensed to ACM. ISBN 978-1-4503-3900-1/16/05...$15.00. DOI: http://dx.doi.org/10.1145/2884781.2884826

As Figure 1 shows, the model's popularity is constantly growing: in January 2016, 135,000 repositories on the GitHub social coding site received more than 600,000 pull requests. In total, 1,000,000 collaborative GitHub projects (i.e., 45% of all collaborative projects) used at least one pull request during their lifetime. As opposed to more classic ways of contributing (e.g., change sets sent to development mailing lists [4], to issue tracking systems [3], or through direct access to the version control system [17]), in the pull-based model, contributors fork (i.e., locally duplicate) the main repository of the project they want to contribute to, make their changes independently, and then create a pull request (PR) to ask that their changes be merged into the main repository.
Then the members of the project’s core team (the integrators) are responsible for evaluating the quality of the contributions, proposing corrections, engaging in discussion with the contributors, and eventually merging or rejecting the changes. Social coding sites (e.g., GitHub [20], Bitbucket [6], and Gitorious [21]) offer the pull-based development model in conjunction with social media functions, which allow users to subscribe to and/or visualize information about activities of projects and users and offer threaded asynchronous communication within PRs. To grasp the complexity of the pull-based development model offered by social coding sites, it is necessary to examine it from multiple perspectives. Previous research considered the lifetime characteristics of PRs [35], macroscopic factors that lead to contribution acceptance [23, 35], the barriers faced by first time contributors [44], how contributions are evaluated through discussions [45], and the working habits and challenges faced by integrators [27]. Here we present the contributor’s perspective by investigating contributors’ work habits and the challenges they face. The overall goal with this work is to understand how contributing to OSS projects works using the pull-based development model in the context of social coding sites. Understanding the contributor’s perspective is needed to reveal weaknesses with the pull-based model and to guide the design of tools and processes to support their work, which is an essential part of the workflow. More- over, this understanding is required to help project owners address the weak aspects of their development process and to take action against the barriers that contributors face. To achieve our goal, we performed an exploratory investigation of contributors to projects hosted on GitHub. We conducted an online survey that was answered by 645 top contributors to active projects, and we analyzed the results with traces from GHTorrent [24]. 
Since GitHub hosts diverse projects developed by many different programmers, it gives us the opportunity to learn from a variety of cases. We found that contributors have a strong interest in staying aware of project status to get inspiration and to avoid duplicating work, but they do not actively try to propagate information about the pull requests they are preparing. The toughest and most frequent challenges encountered by contributors are social in nature, mostly related to poor responsiveness and a lack of empathy from integrators, and difficulties in communicating change rationale. In particular, the communication within pull requests, although effective for discussing low-level issues, appears to be limited for other types of contributors’ communication needs. Finally, considering the transparency offered by social coding sites using the pull-based model, our respondents reported (in line with previous findings [11]) that building code portfolios is a motivation to contribute. The same transparency seems to encourage developers to review their changes before submission, but also triggers a fear of rejection that might harm their reputation. 2. BACKGROUND AND RELATED WORK OSS projects form online collaborative communities [15] where developers wishing to participate will submit contributions, usually as source code patches. The onion model is a widely accepted way of organizing OSS communities by contributions [52]: a core team (with an optional leader) receives contributions and determines their fate based on technical merit or trust. Despite a project’s best intentions, newcomers to OSS communities occasionally face challenges. Steinmacher et al. [44] analyzed related work and identified 58 barriers, most of which relate to social aspects such as community engagement and the need for orientation. After contributions have been submitted, they must also be evaluated. In an early study, Mockus et al. 
[36] described the commit-first contribution evaluation pattern: code must be in the repository before it is reviewed. Rigby and Storey examined the peer review process in OSS mailing lists and found that developers filter emails to reduce evaluation load, prioritize using progressive detail within emails containing patches, and delegate by appending names to the patch email recipients [21]. Baysal et al. [3] examined contribution evaluation over the bug tracking database and found that contributions from casual contributors received preferential treatment, which they attribute to the size of the contributions (i.e., new contributors submit smaller contributions). A number of social factors also affect how developers interact with the project community in order to have their contributions evaluated. Ducheneaut found that developers hoping to get their contributions accepted must first be known to the core team [13]; core team members use a developer’s previous actions as one of the signals for judging contributions. Similarly, von Krogh et al. [49] found that projects permit developers to contribute through established implicit “joining scripts”, which may grant access to the main repository based on developers’ past actions. Due to increasing popularity, GitHub and the pull-based development approach have attracted the attention of researchers interested in online collaboration and software development practices. Gousios et al. [25] quantitatively investigated the characteristics of contributions in GitHub, finding that contributions are relatively small (20 lines) and processed very quickly (submissions are accepted in less than a day). Moreover, both Gousios et al. [25] and Tsay et al. [46] investigated the factors that influence the acceptance of contributions in GitHub: both found similar processes but different dominating factors (i.e., ‘hotness’ of project area and social distance, respectively).
Contribution evaluation is as important in pull-based development as it is in traditional OSS practices. Pham et al. [38] reported initial qualitative evidence on how integrators assess contributions by focusing on the evaluation of testing practices. In a survey of integrators of busy projects in GitHub, Gousios et al. [27] found that integrators struggle to maintain the quality of their projects. They experience difficulties prioritizing the contributions to be merged and face challenges identifying factors that will reveal contribution quality. By focusing on how discussions affect contribution evaluation in GitHub, Tsay et al. [46] found that stakeholders external to the project may influence the evaluation discussions while power plays are in effect. Social signals also play an important role: Morrow et al. [34] found that core members form an impression of the quality of incoming contributions by using social signals such as the developer’s coding activity and the developer’s social actions (e.g., following other developers). Social coding capabilities, in conjunction with pull-based development, improve how projects engage with community members and attract more contributions. In a study of integrators, Dabbish et al. [11] found that transparency drives collaboration as social inferences (around commitment, work quality, etc.) allow developers to more effectively deal with incoming contributions. Similarly, integrators of three large OSS projects hosted on GitHub, interviewed by McDonald and Goggins, reported that the switch to GitHub allowed them to become more democratic and transparent and to attract more participation, resulting in a doubling of the number of contributors [33]. Overall, all studies involving the pull-based development model have focused on projects and integrators. In this paper, we complement this view by analyzing contributors. 3.
RESEARCH METHOD The overall goal with this research is to understand how contributing to OSS projects works using the pull-based development model in the context of social coding sites. Our examination of the literature revealed that our scientific knowledge of OSS and the case of pull-based development in social coding mostly considers project or integrator perspectives, or investigates what happens after a contribution has been submitted to a project. We know that the pull-based development model facilitates a more casual relationship with projects by making it easier to send a pull request, while social coding features ease participation in any subsequent discussion. This setting has given rise to phenomena such as drive-by commits [38][44], where developers submit small fixes without expecting any (or at least, limited) compensation or recognition. What is currently lesser known, however, is how contributors prepare (for) a contribution. This motivated our first research question: RQ1: How do contributors prepare (for) a contribution in social coding sites using the pull-based development model? Despite the increased transparency that social coding sites afford, many contributions are still rejected as duplicate or conflicting. We were interested in finding out whether contributors leverage transparency in the same ways that integrators do [11]. Do they communicate prior to submitting a PR or does communication occur only post-submission? Moreover, since contribution quality is a major concern for integrators [27], we wished to know if there is a match between what integrators and contributors examine to assess the quality of contributions. We refined our first research question as follows: RQ1.1: What do contributors do before and after coding a PR? RQ1.2: How do contributors assess the quality of their PR? RQ1.3: How do contributors communicate about an intended change?
Subsequently, we were interested in understanding the challenges that contributors experience when working with the pull-based model in GitHub. We also considered the barriers that make it difficult for new contributors to participate. This exploration is needed to guide future work in this area and led to our last research question: RQ2: What are the challenges of contributing in social coding sites using the pull-based development model? 3.1 Study Design Our study followed a mixed-method approach [10]. Since our aim was to learn from a large number of projects, we used an online survey as it is a data collection approach that scales well [15]. We enriched the gathered data with traces extracted from GHTorrent [24]. We collected survey data in two rounds. In the first round, we ran a pilot survey with a limited number of selected contributors as it allowed us to clarify our questions and to identify emerging themes we could explore further (i.e., we added a question about motivations for contributing and one question on barriers for newcomers). In the second round, we sent the survey—augmented with questions addressing the themes that emerged in the first round—to several contributors of pull requests on GitHub-hosted OSS projects. Survey Design. Pilot and final survey were split into two sections: (1) demographic information and (2) open-ended questions intermingled with multiple choice or Likert scale questions. Usually, the contributor had to answer an open-ended question and then a related one with fixed answers. To further elicit the contributor’s opinions, in all questions that had predefined answers but no related open-ended question, we included an optional ‘other’ response. Throughout the survey, we intentionally used even Likert scales to force participants to make a choice. 
Excluding demographic questions, the final survey consisted of 4 open-ended questions, 4 Likert scale questions with an optional open-ended response, and 11 multiple choice questions (5 with an optional field)[1]. The vast majority (95%) of respondents completed the survey in less than 10 minutes. Sampling projects and candidate respondents. Previous work has revealed that most GitHub repositories are inactive and have a single user [25][11]. To ensure that our sample consisted of repositories that make effective and large-scale use of PRs, we selected all repositories in the GHTorrent dataset [24] that have received at least one PR per week during the year 2013 (3,400 repositories). For each repository, we extracted the top 3 pull request contributors by the number of PRs they contributed. We sent them an email if their address was registered with GitHub and if they were not integrators in the same repository; we collected 4,617 emails. Attracting participants. For the pilot phase, we randomly selected and emailed 445 of the 4,617 contributors, and received 32 answers (7% response rate). For the main data collection phase, we emailed the remaining 4,172 contributors and received 760 answers (18% response rate, i.e., typical of online surveys in software engineering, where the response rate is usually within the 14–20% range [39]). The survey was published online and its Web address was sent by personal email to all participants. To encourage participation, we created a customized project report for each of the emailed contributors. The report included plots on the project’s performance in handling PRs (e.g., mean close time) on a monthly basis. The reports for all projects have been published online [23] and they were widely circulated among developers. We did not restrict access to the survey to invited users only; several survey respondents forwarded the survey to colleagues or advertised it on social media (Twitter) without our consent. 
After comparing the response set with the original set of projects, we found that 25% of the responses came through third-party advertising. The survey ran from April 14 to May 1, 2014. Respondents. The majority of our respondents self-identified as project contributors (76%), with 65% working for industry. Most (68%) reported more than 7 years of software development experience and considerable experience (> 3 years) in geographically distributed software development (59%). 3.2 Analysis We applied manual coding [9] to the 4 open-ended questions as follows: initially, the first and last authors individually coded (in a shared online spreadsheet) a different set of 50 (out of 760) answers for each question. At least 1 and up to 3 codes were applied to each answer. Then, the coders met physically, grouped the extracted codes together and processed them to remove duplicates and, in some cases, to generalize or specialize them. The agreed-upon codes were then applied to all the answers (each coder codified 50% of the answers). When new codes emerged, they were integrated in the code set. Another round of code integration followed in a physical meeting, which led to the final result. On average, 20% more codes were discovered in the final integration round. We asked respondents to optionally include their GitHub user name and report a single repository to which they contribute many PRs; 81% (610) and 95% (722) of the respondents did so, respectively. Many responses (126) did not match a GitHub repository for reasons that ranged from spelling mistakes to using names with wild cards (e.g., *jenkinsci/*). We then treated those as contributions to multiple repositories. We corrected the repository names as follows: we first used GitHub’s search functionality to locate repositories whose name was similar to the provided one and then chose the one that had received PRs from the user.
When a repository name contained wild cards, we searched the GHTorrent database for all repositories the contributor had submitted PRs to and selected the one where the contributor had submitted the most PRs. We excluded from our further analysis any answers for which we could not obtain a valid repository name (5 answers) and those that did not include a repository name (38 answers). 3.3 Adding Project Metrics After we resolved the repository names, we augmented the survey dataset with information from the GHTorrent MySQL database (version 2015-06-18) [24]. For each project, we calculated the mean number of PRs (mean.prs) and the mean number of integrators (mean.integrators) on a per month basis for the period July 2013 to July 2014. Per metric, we split projects into three equally sized groups (small, medium and large). We also calculated whether respondents belong to the top 10% of contributors (top.10.perc.contrib) for the repository they reported and whether they usually commit to small, medium or large projects (typical.size.of.project). To ensure that our answer set included developers that contribute primarily or exclusively through PRs rather than through other means (e.g., bug reports), we used one of the fixed answer questions (Q9: How do you contribute code to the project?) as a further demarcation point. Consequently, we filtered out 77 respondents who did not indicate contributions exclusively through PRs or through branch-to-branch PRs. The final answer set contained 645 answers. 4. RESULTS This section presents the results of our exploratory investigation. When quoting survey respondents, we refer to them using a [rX] notation, where X is the respondent’s ID. Codes resulting from coding open-ended answers are underlined. When referring to quantitative results, we annotate the metrics presented in Section 3 with a sans-serif font. Where applicable, we integrate and compare our findings with related research findings.
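The per-project metric computation described in Section 3.3 (mean monthly PRs, then a split into three equally sized groups) can be sketched in plain Python. This is a minimal illustration, not the authors' code: the `rows` list is a hypothetical stand-in for GHTorrent records, and the names `mean_prs` and `groups` are ours.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical (project, month, pr_count) records standing in for GHTorrent rows.
rows = [
    ("proj-a", "2013-07", 40), ("proj-a", "2013-08", 44),
    ("proj-b", "2013-07", 8),  ("proj-b", "2013-08", 12),
    ("proj-c", "2013-07", 20), ("proj-c", "2013-08", 18),
]

# Collect the monthly PR counts of each project.
monthly = defaultdict(list)
for proj, _month, prs in rows:
    monthly[proj].append(prs)

# mean.prs: mean number of PRs per month, per project.
mean_prs = {p: mean(v) for p, v in monthly.items()}

# Split projects into three equally sized groups by ranking on mean.prs.
ranked = sorted(mean_prs, key=mean_prs.get)
third = len(ranked) // 3 or 1
groups = {p: ("small", "medium", "large")[min(i // third, 2)]
          for i, p in enumerate(ranked)}
```

The same ranking logic, applied to mean.integrators, would yield the second grouping metric used in the paper.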
RQ1.1: What are contributors’ work practices? We wanted to understand which practices contributors use after they decide to create a PR, before and after the actual coding, but before they submit it. A variety of survey questions led to this understanding and the answers are presented in Figure 2. Work practices followed before coding. To ask respondents about their work habits before coding, we provided them with a set of 7 questions (based on our analysis of the literature and our vast GitHub experience) with a 4-level Likert scale. Only 24 (3%) respondents added information using the ‘other’ field, mostly providing clarifications. Results show that, in general, contributors reported to conduct all the mentioned activities (as one developer put it: “These are all reasonable things to do” [r490]). Nevertheless, the activities receive different emphasis. In particular, contributors reported practices mostly related to increasing their awareness (i.e., “an understanding of the activities of others, which provides a context for your own activity” [14]). They checked whether similar work had already been performed by consulting (in this order of frequency) the issue tracking system, previous PRs, project communication channels, and external branches/forks. In the ‘other’ field, respondents added that the sources are checked both to get inspiration from similar work and to ensure that the work is not going to be duplicated effort (e.g., “I always create a Bug in Bugzilla to track the work if there is no existing bug” [r51]). The top experienced contributors (metric: top.10.perc.contrib) said that they do not need to update awareness because they maintain mental models of the status of the project: “I almost always know what’s going on [in the issue tracker], on the mailing lists, in the PR queue, so I have an idea how relevant my PRs are long before I start working on them.” [r439] Work practices followed after coding. 
We asked developers to rate 5 common practices with a 4-level Likert scale. The ‘other’ field was filled in by only a few participants (1%), mostly to clarify their previous choices. When the coding is finished and contributors are ready to submit a pull request, most developers declared they do not recheck whether similar work has been accomplished in the meanwhile, a contrast to what they report to do before starting to code. The activities that developers described as the most frequent before submitting a PR are formatting the code according to the project’s guidelines and running tests against the completed code. Some respondents complained that there was a lack of support for these activities in the project (e.g., “I would run tests and format according to guidelines but there are no tests or guidelines on this project” [r5], “no tests available in this repo, but normally I would run tests” [r1]). RQ1.2: How do contributors assess their code? We sought to understand how contributors evaluate the quality of their code before submitting it as a PR. What is evaluated. One of the top priorities for contributors when examining PR quality is compliance, which had many manifestations in our response set. The most common was compliance with project PR or coding guidelines (e.g., “By following the contribution guidelines for a PR of that repository.” [r164]). Contributors also try to comply with de facto guidelines manifested in the original repository, mainly code formatting (e.g., [r105,841]) and design (e.g., [r252]). Another compliance form is adherence to standard practices. Contributors try to increase their chances of acceptance by following language code styles and design principles (e.g., “Following clean code principle and checking code style” [r143]). On a related note, contributors examine two technical quality aspects: code quality and commit quality.
Code quality is usually assessed subjectively by examining factors such as readability, clarity, and whether the change is minimal (“the code [...] contains only the minimal amount of code change necessary to implement the feature” [r15]). Several contributors reported that they strive to make high-quality commits. For some developers, this means that commits are “atomic and [can be] merge[d] at time of sending” [r74], while others assume a more aesthetic view: “Are the commit messages clear and do the commits in the PR tell a story?” [r74]. In pull-based development, PRs are usually submitted to projects without prior planning. To increase the chances of acceptance, contributors eagerly examine the PR’s suitability by analyzing whether the PR fully addresses the issue it is trying to solve. The term “issue” is used in the broader sense of an existing problem the developers are addressing (e.g., [r27,538]), even though in some cases it is associated with existing bug tracker issues [r306]. Contributors strive for their PRs to be self-contained (e.g., “It should be focused on the feature to implement (or bug to fix). Nothing unrelated to the topic should be in there.” [r52]) and they also try to ensure that the documentation of both their code and the PR eases the comprehension of the PR and meets assumed project standards (thereby enhancing compliance). RQ1.3: How are changes communicated? In the question about contributors’ work habits before coding, we asked about how often they communicate the changes to the project core team. Most of the respondents (59%) reported they did not communicate with the core team or that they communicated with the team very occasionally (see third item from the bottom in the left-hand side of Figure 2). We analyzed the communication behavior in relation to the size of the projects (metric: mean.prs) reported by the respondents.
Although we found a significant relationship (p < 0.001, assessed using a χ² test with df = 6) between the size of the reported project and the reported frequency in communication, which goes in the direction of communicating slightly more when projects are larger, the strength of the relationship is very weak (Cramér’s V = 0.14). In a subsequent multiple choice question, we inquired about the communication means contributors use when they decide to communicate on a change. The summarized results are presented in Figure 4. Many respondents explained that they open an issue in the tracker or a new PR, or both. They use emails or more synchronous communication channels (e.g., IRC or instant messaging) less frequently. This is in line with the findings by Guzzi et al. [23], who observed a shift in OSS developers’ communication habits from traditional channels, such as mailing lists, toward more structured channels, such as issue trackers. Those that added information in the ‘other’ field (6%) mainly specified the communication channels they use. Many mentioned forums (e.g., “online forums for the project” [r499]), others IRC-like solutions (e.g., Gitter [r76,77,399]) or project management tools (e.g., “Project Management tool such as VersionOne” [r61]), and a few reported to use email-based communication (e.g., “Mail-listing is the way to go in the community and core team” [r286]). RQ2: What are the challenges of contributing? To find the pain points experienced when contributing through the pull-based model, we explicitly introduced a mandatory open-ended question in the survey and asked respondents to state the biggest challenge they faced when contributing PRs. We learned that challenges revolve around three main themes: challenges about writing the code for the contribution, challenges on the tools and model to be used for submitting the contribution, and challenges pertaining to social aspects.
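The association test and effect size reported above for RQ1.3 (a χ² test with df = 6, strength measured by Cramér’s V) can be reproduced with a short sketch. The contingency table below is hypothetical, not the paper’s data; it merely illustrates the arithmetic behind the reported statistics.

```python
import math

# Hypothetical 3x4 contingency table: project size (small/medium/large)
# by reported communication frequency (never/occasionally/often/always).
table = [
    [120, 60, 20, 10],
    [100, 70, 30, 15],
    [80, 75, 45, 20],
]

n = sum(sum(row) for row in table)
row_tot = [sum(row) for row in table]
col_tot = [sum(col) for col in zip(*table)]

# Pearson chi-squared statistic: sum of (observed - expected)^2 / expected.
chi2 = sum(
    (table[i][j] - row_tot[i] * col_tot[j] / n) ** 2
    / (row_tot[i] * col_tot[j] / n)
    for i in range(len(table))
    for j in range(len(table[0]))
)

dof = (len(table) - 1) * (len(table[0]) - 1)  # (3-1) * (4-1) = 6, as in the paper
cramers_v = math.sqrt(chi2 / (n * (min(len(table), len(table[0])) - 1)))
```

Cramér’s V normalizes χ² by sample size and table shape onto [0, 1], which is why a highly significant p-value can coexist with the weak association (V = 0.14) the authors observe.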
These themes are linked to the finer-grained challenges expressed in the answers. For example, a challenge that emerged is project compliance, which in some cases relates to the code theme (e.g., “using the project code style” [r60]), while in others it relates to the social theme (e.g., “Not knowing all the rules/process” [r62]). The results are summarized in Figure 5. From left to right, we first classify the answers by the contributor’s rank (i.e., whether the respondent is a top 10% PR contributor or not), then we show the three main themes and how the answers flow into the specific challenges. The thickness of a line represents the number of responses. We expected the top contributors (metric: top.10.perc.contrib) to be less affected by tools and model and code related challenges given their greater experience, but to our surprise, both types of contributors had a very similar distribution of challenges among the three main themes. The theme reported by the majority of the respondents to be the most challenging when contributing in a pull-based model is the social one, followed by code and tools and model, respectively. We discuss the results in this order. Social aspects. The social theme is connected to most of the reported challenges in different ways, with the most prominent being responsiveness. More than 15% of the survey participants mentioned that getting timely feedback, if any, for their pull requests is hard and usually related to people issues (e.g., “The owner of the repo doesn’t ever respond to the PR and leaves it hanging open forever” [r15], “[there are] projects with lots of open PRs and few actually being accepted” [r98]). This situation seems to generate frustration for the contributors and they start to lose interest in the project: “Malaise and abandonment.
Few things are more frustrating than opening a PR and having it go nowhere” [r698], “When contributing to less active projects, it can be really frustrating to have a PR sit untouched for months, since by the time the author gets back to it, I may have given up on it and no longer care” [r665]. Respondents specified that they would rather receive a clear reject than have no response to their PRs (“it’s annoying to go to the effort of making one and have it ignored... Rejected is better.” [r85]). Getting feedback on the quality of their work is deemed important for improved prediction of acceptance of future PRs. Not knowing whether a PR will be accepted poses difficulties and stress on contributors (e.g., “When my own code depends on a PR, and I don’t know if the PR will get accepted that causes uncertainty and stress.” [r618]). Poor, delayed, but also general communication was reported as another issue. Some contributors specified that they find it challenging to explain the rationale of their changes, which can affect whether their PRs are thoroughly investigated (“Sometimes it’s hard to explain the need for some changes. Some teams will immediately reject them without analyzing them properly.” [r491]). A few contributors (e.g., [r190,563]) reported a fear of rejection as they found it personally embarrassing when their work was judged to be inadequate by people they have no relationship with. This fear can be exacerbated by the various challenges in interacting with core team members that many respondents reported (e.g., “Fear of looking stupid. Fear of rude response.” [r190], “Discouraging project owners” [r228]). (Footnote: We obtained similar results aggregating into two groups both size (i.e., ‘small’ and ‘medium’ vs. ‘large’) and communication frequencies (i.e., ‘never’ and ‘occasionally’ vs. ‘often’ and ‘always’).)
In particular, respondents described social challenges related to politics or how the project is governed (e.g., “Project owners who really don’t want contributions.” [r122], “Politics, or project owners not wanting a fix or change, or not actively maintaining it.” [r526]), egotism and general arrogance (e.g., “People tend to merge only PRs for issues THEY see as bug.” [r360], “Unconstructive/hostile maintainer attitude” [r536]), and handling divergent opinions (e.g., “getting all [...] to agree with a feature you propose in a PR.” [r251]). Furthermore, contributors reported that it is challenging to find enough time to work on the project as they wish (e.g., “Time to work on complicated issues despite working full time” [r461]) and to propose contributions that fit in the project’s big picture and make it grow instead of addressing their needs only (e.g., “Making sure it’s in the interest of the project and not just mine.” [r183]). Code aspects. The code theme also permeates a number of challenges. The most frequently reported one is understanding the code base of the project, including layout and architecture (e.g., “Read others code and get understanding of the project design” [r564]). This problem seems to be magnified by project size and a lack of documentation (e.g., “[there is] no guideline or documentation” [r223], “Missing knowledge about inner workings of a project [...] sometimes caused by missing documentation” [r561]). Contributors also find it difficult to assess their changes’ impact on the rest of the code base (impact analysis).
Sometimes this is related to their limited understanding of the project (e.g., “Ensuring that my PR doesn’t have unintended side effects due to not being intimately familiar with the entire code base.” [r202]), and also to the social theme, since awareness is not maintained by all contributors (e.g., “Because of the great complexity of our code, contributions by others that are not directly related to my work can nonetheless affect it, and our contributions are not necessarily synced or communicated.” [r229]). To tackle this, and avoid regression, contributors explained they would rely on testing, but a proper test suite is not always available and running, and developing tests is also a challenge (e.g., “if there isn’t a good testing infrastructure in place, then I’m not sure how to contribute tests related to a PRs” [r656]). Writing PRs with proper code quality is mentioned as the only issue by 13 developers. Another 7% reported that being compliant with the project style and guidelines is challenging. Compliance concerns code style both at a low typographical level and at a higher design level; this challenge also highlights the difficulties in knowing the expected format for PRs, commit messages, etc. [r277]. Some respondents explained that this challenge is due to tribal knowledge, i.e., information only known within the project team and not explicit to the outside world (e.g., [r1500, r659]). **Tools and model.** Respondents reported challenges regarding the tools and model less frequently. Among those, the use of git and handling conflicts between branches are the most prominent ones, especially for seemingly less experienced developers (e.g., “Usage of git is not intuitive. Especially for me as [one] who does not contribute regularly, it is every time a challenge to [use it]” [r158], “when projects try to enforce workflow through branches, that is often confusing.” [r210]).
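The impact-analysis challenge above, knowing which parts of a code base a change may affect, can be approximated mechanically. As a minimal sketch (the module names and dependency graph are hypothetical, not drawn from any surveyed project), a breadth-first walk over the reversed dependency graph flags every module that transitively depends on a changed one:

```python
from collections import deque

def impacted_modules(dependency_graph, changed):
    """Return every module that (transitively) depends on a changed one.

    dependency_graph maps a module to the modules it imports; we invert
    it and walk backwards from the changed modules with a simple BFS.
    """
    # Build the reverse graph: who depends on whom.
    reverse = {}
    for module, imports in dependency_graph.items():
        for imported in imports:
            reverse.setdefault(imported, set()).add(module)

    impacted, queue = set(changed), deque(changed)
    while queue:
        module = queue.popleft()
        for dependent in reverse.get(module, ()):
            if dependent not in impacted:
                impacted.add(dependent)
                queue.append(dependent)
    return impacted

# Hypothetical project layout: 'api' imports 'core', 'cli' imports 'api'.
graph = {"core": [], "api": ["core"], "cli": ["api"], "docs": []}
print(sorted(impacted_modules(graph, {"core"})))  # ['api', 'cli', 'core']
```

Real tooling would derive the graph from import statements or build metadata rather than a hand-written dictionary, but even this level of analysis can tell a contributor which modules deserve a closer look (and tests) before submitting.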
Some respondents mentioned problems with the local infrastructure setup needed for development and testing. The few explicit answers about the pull-based development model mostly relate to its learning curve (e.g., [r572]). More respondents mentioned GitHub as a challenge, especially when it comes to having discussions within PRs, thus connecting it to the social theme (e.g., “The comments on a PR can get unwieldy quickly. Without threading it can be hard to follow a conversation” [r102], “effectively communicating with other users over github” [r329]). **Reducing barriers for new contributors.** We further investigated what respondents think projects should do to reduce barriers for new contributors. This was mapped in the survey to an open-ended question that we manually coded. The top 5 barriers that emerged account for more than 50% of the answers. The first barrier (specified by more than 20% of the contributors) deals with having good guidelines on getting started with the development and testing environment, on code style/formatting conventions, on the contribution process, and on communicating with project owners. The second, third, and fourth barriers follow with a very similar frequency (ca. 15%). For the second barrier, respondents explained that project members should have more empathy towards new contributors, providing encouragement, mentoring, fairness, and having an overall positive attitude (e.g., “Engage in positive, responsive discussion [...] Giving a positive first experience goes a long way.” [r202], “Maintain a “positive” culture; be friendly, polite etc” [r73]). The third barrier reiterates the concept of responsiveness: respondents stated that improving it would remove a serious barrier for new contributors, especially as they need more feedback on their work (e.g., “respond to issues/PRs/list posts in a timely fashion. Even just acknowledging the issue and suggesting an attack plan is immensely helpful.” [r504]).
The fourth barrier is about the need for a clear project roadmap and a comprehensive task list with open issues, including recommendations for newcomers (e.g., “They should mark the open issues with the level of difficulty, like these issues are easy and beginners can resolve them” [r27]). Finally, 12% of the respondents reported better code documentation as important to attract new contributors.

5. DISCUSSION

We now discuss our main findings, contrasting them with findings about the integrators’ perspective. We suggest recommendations for practitioners and consider the implications for researchers.

5.1 Main findings

**Awareness.** Contributor practices for increasing awareness in the pull-based development setting are not substantially different from practices in settings where no social features are available. Similar to participants in other studies of OSS contributors [46, 38, 44], PR contributors first attempt to build an understanding of the project’s current status by examining existing contributions, the project’s issue database, and any contribution guidelines. What was surprising was that they did not explicitly discuss the social features [11] of GitHub as a source of information as much as we expected. Actually, only one respondent specifically mentioned the use of a GitHub social feature to build awareness about a project (“I see changes in RSS channel” [r600]). This raises questions about whether social features are useful to contributors. They might, however, rely on subtle social signals that environments like GitHub provide, without realizing it. Moreover, while most contributors report that they use the issue tracker for finding similar issues or PRs, at the same time, many PRs are rejected because they are duplicate or superseded [25]. Intuitively, since contributors report that they are aware of the project status before they start working on a contribution, one would expect that very few PRs would be rejected for such reasons.
This indicates a critical step between contemplating a contribution and actually creating it, and it underlines the importance of improving how awareness about projects is created and maintained. We also noticed an interesting paradox. Contributors deem it important to spend time checking for existing work related to the PR, but once they start coding a PR, they rarely (if ever) communicate the intended changes to the core team. The paradox is that they report it is important to be aware of what is going on in the project, but they do not express the intention to personally invest their own time to increase the overall awareness in the project. Furthermore, the fact that contributors often prefer to use communication means other than PRs (see Figure 4) hinders awareness. Not only are multiple discussions spread across different PRs, but also communication becomes scattered over multiple channels, making it difficult, if not impossible, for new contributors to understand the rationale behind a change. **Transparency.** The pull-based development model, in conjunction with the social media functions offered by GitHub, makes contributions and their authors more prominent than in other contribution models. As Dabbish et al. put it: “[it makes] the work visible” [12]. Indeed, various developers discuss whether GitHub profiles should be the new de facto CV for developers. In our survey, we included an additional question asking why participants contribute to the chosen project. We introduced a closed question with 7 non-mutually exclusive answer options (based on our analysis of the related literature) and an open text field to specify other reasons.
While most answers validated well-known motivations for contributing to OSS (e.g., the main motivation (60% of the respondents) was “usage” of the project they contribute to), approximately 35% of the respondents explained that they contributed for reasons related to personal career development, while 23% of the respondents mentioned enrichment of their public profile/CV as a motivating factor (e.g., “Making contributions to [project] makes it easier for me to get new clients” [r1121]). Lerner and Tirole formalized that contributions to OSS projects are also driven by a career concern incentive. This incentive increases in strength as the audience’s visibility into the performance increases [23]. Some of our respondents also indicated that contributions to GitHub benefit their career growth (e.g., “My contribution to [projects] allowed me to obtain a job within my favorite subjects” [r437]). Additionally, several contributors fear that rejection of their PRs may harm their reputation. If this fear of rejection is caused by transparency, as also suggested in the previous analysis by Dabbish et al. [12], we have additional quantitative evidence about its benefits but also about potential risks. **Responsiveness.** More than 100 participants complained about the poor responsiveness from integrators. Specifically, they reported that they were worried they would not get a response or that they would get it too late to be relevant. They also suggested that improving the speed of responding to a PR would be an effective way to reduce barriers for newcomers. This complements the findings reported by Zhou and Mockus on an analysis of the ecosystems of Mozilla and GNOME [54]. Zhou and Mockus found that “low attention […] as evidenced by a too-rapid response to issue reports” reduces the chances of a newcomer becoming a long-term contributor.
We note that our data is drawn from self-reported behavior and suggestions given by contributors, while the study of Zhou and Mockus was mostly performed on traces left on software repositories, and therefore, the different development settings may lead to different ways of tuning out unwanted contributions. **Asynchrony.** One of the distinguishing characteristics of the pull-based model is asynchrony among the production of a contribution, its evaluation, and its integration. Asynchrony is a pervasive concern for both contributors and integrators, and its effects are usually detrimental. Asynchrony hinders the observability of the overall status of a project and burdens integrators and contributors with extra communication obligations. Recently, several high profile companies (e.g., Facebook [22] and Google [32]) have moved away from the pull-based model, while others use strictly bounded code review processes and branching strategies (e.g., Microsoft [5, 2]) to increase development speed for their internal repositories (however, they still use pull-based development for OSS projects). From a distributed systems theoretic standpoint, mitigating the results of asynchrony is impossible [12]. Therefore, integrators and contributors should agree on minimal communication protocols that increase each other’s awareness and on rendezvous points for mandatory information exchange. In certain cases (e.g., collocated development), projects should be prepared to abandon the pull-based model in favor of more direct feedback loops.

5.2 It takes two to tango

The pull-based development mechanism, and its GitHub implementation in particular, aims to facilitate the information exchange between two interacting parties sharing a common goal, namely integrating a change into an existing code base. Due to the closeness and asynchrony of the interaction, it is expected that good or bad practices of one interacting part will reflect on the other.
Comparing this research with our previous work [27], we found a number of technical and social pain points experienced by both integrators and contributors. We report on these below. **Quality.** Contribution quality is a major concern for contributors. It is one of the most frequently reported challenge items and also something they deeply care about before PR submission. Not surprisingly, quality is also a top priority for integrators. A cross examination of the factors that contributors and integrators examine in PRs reveals that there is also a high overlap in terms of compliance/conformance and code quality as top factors. Moreover, automated testing is used by both integrators and contributors as a commonly accepted way to ensure contribution quality. We hypothesize that this shared understanding of quality, and the ways of achieving it, is the result of widely accepted technical norms. Positively, this helps the majority of contributions to be accepted (85%), while rejections are usually not due to technical reasons [25]. **Lack of process.** The pull-based model on GitHub lacks a specific patch acceptance process (as is the case with Gerrit [40] or CodeFlow [2]). Some integrators find the lack of a well-defined acceptance process (e.g., voting and sign-off) disturbing enough to move to other reviewing platforms. In addition, experienced contributors are used to searching for PR process documents, and need them to understand a project’s process policies. **Workload and responsiveness.** Integrators on large, active projects reported that they have problems handling and prioritizing the large number of PRs those projects attract; perhaps as a result, contributors complained about the lack of responsiveness. However, integrators also protested about the lack of responsiveness from contributors when they request additional changes during a code review, and complained about “hit-and-run” PRs.
These concerns may be the result of the pull-based development model that simplifies experimentation with contributions to projects without reducing the reviewing burden on both parties. **Communication tooling.** A significant portion of both integrators and contributors find that the communication facilities afforded by the GitHub PR mechanism are lacking in terms of immediacy and structure. This hinders the effective discussion of high-level concerns (e.g., system design) and has a negative impact on the centralization of information about a contribution. Indeed, many contributors and integrators reported that they use external tools, mainly supporting synchronous communication (e.g., IRC or instant messaging), to exchange information. Two key features that are missing, as reported by our respondents, are threaded communications and voting mechanisms. **Communication failures.** Communicating about the rationale for PRs was reported as difficult not only by contributors, but also by integrators wishing to understand the reasons for a PR. Integrators complained that discussions on PRs diverge from technical content, while contributors expressed their concerns about having to cope with project politics in order to get their contributions accepted. Contribution rejection is a concern for both parties: integrators reported that it is not easy to explain the rationale behind a rejection, while contributors stated that it is hard to accept the rejection. We conjecture that the above shared difficulties are the result of a communication process that, while open and accessible, is lacking in terms of immediacy and traceability.

5.3 Recommendations for practitioners

We present a set of recommendations that can help streamline the experience for contributors when working with integrators in the pull-based model. For one-off contributions, these guidelines revolve around two basic principles: minimizing friction and maximizing awareness.
For more long-term involvement in a project, it is crucial to build and maintain a contributor profile. **Minimizing friction.** Contributions that are small and isolated are easier for integrators to process. In previous work [51, 3, 25, 27], the size of the change was one of the most important factors related to acceptance. This is because the impact of the change is more easily evaluated, especially if the change does not cross logical functionality or design boundaries. Contributors should also make their changes adhere to guidelines and learn how to use the underlying tools (git), as this saves review time. Projects should provide a policy or comprehensive set of contribution guidelines. These guidelines should at least provide details about the expected code style, commit format, PR process, and available communication options. Well-thought-out guidelines will help developers format contributions using the expected style and can act as a reference in code review discussions. Moreover, projects should invest in good tests. Not only would contributors gain confidence about their contributions by testing them locally, but integrators will also evaluate them more quickly [25]. Automation is also important, and it should at least cover the development environment setup. Ideally, the contributor should be able to set up a fully working development environment by running a simple command; existing tools allow this (e.g., Vagrant and Ansible). Projects should also invest time to set up automatic quality evaluation of incoming contributions. This can include code style compliance checks and perhaps more sophisticated static analysis tools. In the case of GitHub, external services are available to enable continuous integration (e.g., Travis) and code quality (e.g., Code Climate) monitoring on a per contribution basis. **Maximizing awareness.**
Awareness can be increased by contacting the development team using real-time communication channels (e.g., IRC or its evolved counterpart Gitter, which is better integrated with GitHub) or by following the minimal PR idiom [7] (depending on project preferences). Integrators should be both proactive, by establishing (and perhaps even documenting) a professional communication etiquette, and reactive, by following discussions and intervening in cases where discussion diverges from the etiquette. Similarly, contributors should be available after a submission to promptly discuss the results of the code review and thus mitigate some of the negative effects of asynchrony. **Long-term involvement.** For contributors seeking long-term involvement in project communities, essential steps are profile building through a stream of excellent contributions and participation in other community activities (e.g., discussion of issues); integrators both evaluate [24, 27] and prioritize [27] work using a mixture of social signals and developer track records, whose visibility is ensured by the transparency of the model.

5.4 Implications for researchers

Our work uncovers several future research directions. **Work prioritization.** Low responsiveness is one of the most recurring challenges experienced by contributors. Integrators also reported problems in prioritizing PRs [27]. Automating the prioritization of PRs could help integrators allocate their time more effectively but also show contributors the status of their PRs with respect to the overall queue. Automated prioritization could take advantage of explicit integrators’ preferences (e.g., place bug fixes first), thus making contributors aware of such choices in a potentially automatically generated guideline. Initial work in this direction has been carried out by van der Veen et al. [27]. **Estimated time for merging.** A widespread usability heuristic states that a “system should always keep users informed about what is going on” [27].
This is often achieved, for example, through progress bars in application UIs. Contributors’ frustration about not knowing the status and fate of their PRs indicates that having a capability for estimating the time for merging a contribution would be valuable. If the estimation engine could provide an indication of the most significant factors considered for the prediction, contributors could take advantage of this prediction to understand what could be improved to speed up acceptance (e.g., splitting a PR into self-contained tasks), which would also help contributors speed up their decision on whether to continue contributing to a particular project. Previous research has estimated merging time for patches [30], closing time for issue reports [19, 33], etc. In the context of PRs, Gousios et al. [25] developed a machine learning approach to predict a pull request’s merge window. In addition, Vasilescu et al. [45] developed models to determine PR processing time. This is a ripe opportunity for researchers to support a wide population of developers. **Untangling code changes.** Integrators reported that code understanding and reviewing are simplified if code changes pertain to a single, self-contained task [2, 27]; however, contributors reported that creating them is a challenge. Recently, researchers have proposed automated approaches to split changes into self-contained tasks [29, 13]. It is an interesting opportunity to apply these methods and integrate them into the pull-based model workflow. **Impact analysis on PRs.** Both contributors and integrators are interested in knowing the impact of proposed PRs beyond the changed code. The pull-based development model is a fine opportunity to provide results of impact analysis research to a broad community and test its effects in the field. Tool results could be integrated in the PR interface as an optional service. **Improved awareness and communication.**
Our respondents reported the need to build awareness before working on a new PR, but expressed little intent to communicate changes to the core project team before starting work. Understanding this phenomenon is an interesting avenue for further research on collaboration behavior in knowledge-intensive settings. Moreover, a number of drawbacks emerged in communication occurring within PRs, despite the advantage of being close to the changed code. In particular, communication support should be improved for discussing high-level concerns and for scaling to longer discussions. Multidisciplinary studies involving user interface designers, communication experts, and software engineers can be designed and carried out to determine how to improve communication within PRs.

5.5 Design implications

The pull request model offers a simple, yet solid basis for distributed collaboration. The lowered barrier to entry, the transparency of social platforms, and the integration of both analysis tools and reviewing mechanisms help projects expand their collaborator base seamlessly. Considering their continuous growth in popularity, it is reasonable to expect that pull requests will become the minimum unit of software change in most collaborative projects. Our current and previous findings hint at the design of features that will facilitate this transition. We imagine a contribution platform, which we call PR.next, that optimizes the contribution experience and helps integrators handle the reviewing load by means of intelligent algorithms, which we describe in the following. Initially, PR.next assists contributors in evaluating their contribution proposals against the state of the project. A contributor expresses the proposed change in natural language and the system searches i) the code base and ii) open or recently closed contributions for similar changes.
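A rough approximation of this duplicate search, assuming only PR titles are compared (the queue of open contributions below is invented for illustration), can be built with nothing more than the standard library's fuzzy string matching:

```python
from difflib import SequenceMatcher

def similar_contributions(proposal, open_prs, threshold=0.5):
    """Rank open PR titles by rough textual similarity to a proposal.

    Returns titles whose similarity ratio meets the threshold,
    most similar first.
    """
    scored = [
        (SequenceMatcher(None, proposal.lower(), title.lower()).ratio(), title)
        for title in open_prs
    ]
    return [title for score, title in sorted(scored, reverse=True)
            if score >= threshold]

# Hypothetical queue of open contributions.
open_prs = [
    "Fix crash when config file is missing",
    "Add dark mode to settings page",
    "Handle missing config file gracefully",
]
print(similar_contributions("fix missing config file crash", open_prs))
```

A production system would index PR bodies and diffs as well, but even title-level matching of this kind could surface an obvious duplicate before a contributor invests time in the change.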
Then, PR.next helps contributors format their contributions by running continuous integration and style checks in a private staging space before the change becomes public. In the meantime, PR.next compares the contribution to other in-flight contributions and warns about duplicates. PR.next also helps integrators prioritize their work. When a new contribution arrives, PR.next combines information from multiple analysis tools (e.g., continuous integration) to rank pull requests according to their readiness to be reviewed. The system gives integrators visual hints (e.g., predicted time to merge) and mines information, such as discussion status or voting results, from other in-flight contributions to help them evaluate priority. At the project level, PR.next supports community voting mechanisms to help projects evaluate contribution desirability, where votes are ranked for importance based on voter characteristics in the social (voter status in the community) or dependency (voter’s project status within the software ecosystem) graph.

6. LIMITATIONS

We designed our survey with the stated aim of gaining insights on a novel mechanism of collaboration in distributed software development. For closed selection questions in the survey, the response categories originated from our review of the literature and also from our prior experience working with and researching PRs and the pull-based model. The questions were phrased to avoid leading the respondent to a specific answer and were validated through (1) consultation with colleagues expert in qualitative research, (2) a formal pilot run, and (3) several mini-runs of the survey. Despite our best efforts, there could be several reasons why our study is limited. **Internal validity – Credibility.** We used coding to classify the contributors’ responses to open-ended questions. The coding process is known to lead to increased processing and categorization capacity at the loss of accuracy of the original response.
To alleviate this issue while coding, we allowed more than one code to be assigned to the same answer. Question-order effects (e.g., one question could have provided context for the next one) may lead respondents to a specific answer. One approach to mitigate this bias could have been to randomize the order of questions. In our case, we decided to order the questions based on the natural sequence of actions to help respondents recall and understand the context of the questions asked. Social desirability bias (i.e., a respondent’s possible tendency to appear in a positive light, such as by showing they are fair or rational) may have influenced the answers. To mitigate this issue, we informed participants that the responses would be anonymous and evaluated in a statistical form. **Generalizability – Transferability.** Our selection of projects and contributors to GitHub projects using the pull-based model may not be indicative of the average project. Previous work [25] found that the median number of PRs across repositories is 2; in our sample, and considering the initial selection of projects, the smallest project had more than 400. We expect that if the study is repeated using random sampling for projects, the results may be different. This may affect the results on obstacles such as low responsiveness and inefficient communication, as average projects do not use PRs in a high capacity. To reduce other limitations to the generalizability of our work, we did not impose other restrictions on the sample projects, such as programming language or use of technologies. Moreover, GitHub is only one, albeit the biggest, of the social coding sites featuring the pull-based development model, and it has specific social media features. While this model remains the same across all these sites and the social features are similar, the implementation of several GitHub features might influence developers’ opinions of the model.
In both our question set and our interpretation of the results, we avoided direct references to GitHub’s implementation of the mechanism. However, bias in the contributors’ answers could not be completely eradicated, as can be witnessed by the fact that many open-ended answers included direct references to GitHub or tools in its ecosystem (e.g., Travis CI).

7. CONCLUSIONS

We presented our investigation of the pull-based development model as implemented in GitHub from the contributors’ perspective. Our goal was to gain knowledge on the work practices of pull request contributors and the challenges they face. We make the following key contributions: (1) A publicly available, iteratively-tested survey with questions for eliciting contributors’ practices in the pull-based model, and the anonymized answers of 760 respondents. (2) The set of open-ended questions we coded manually, and the R analysis scripts for the overall data analysis. (3) A thorough analysis of the answers to our research questions on contributors’ work habits, PR preparation, and open challenges in contributing with the pull-based model. (4) A discussion comparing our findings with previous literature, recommendations for practitioners using the pull-based model, and data-derived directions for future research and design. Among our findings, we identified reducing response time, maintaining awareness, improving communication both in content and in form, and quality assessment as key components for supporting contributions in the pull-based model. We hope that our insights will lead to merging external contributions more effectively in practice and to the design of improved tools to support developers in both creating and handling code contributions more efficiently.

8. REFERENCES

[35] N. McDonald and S. Goggins. Performance and participation in open source software on GitHub. In Extended Abstracts on
olmocr_science_pdfs
2024-11-27
2024-11-27
61cad343d9053fc62847dfd95997fde71c485b8b
Formally validated specification of a micro-payment protocol

Pierre Dargenton, Daniel Hirschkoff, Pierre Lescanne, É. Pommateau

August 2001. Research Report N° 2001-31.
HAL Id: hal-02101908, https://hal-lara.archives-ouvertes.fr/hal-02101908 (submitted on 17 Apr 2019)

Abstract: In this paper, we develop a formal specification for a micro-payment protocol, first on paper, then within the Coq proof assistant. Our approach in defining a notion of execution traces for protocol runs is inspired by previous works, notably by L. Paulson (in the Isabelle/HOL system). However, we show that the protocol we study entails some modifications to Paulson’s framework, related to the modeling of the agents’ internal state. We moreover take advantage of Coq’s expressive meta-language to mechanically derive proofs about the formalisation itself, by introducing a notion of well-formedness for protocol rules.

Keywords: electronic commerce, micro-payment protocols, specification, formal proof

Résumé: This article presents the formal specification of a micro-payment protocol, first through a definition on paper, then through a formalisation in the Coq proof assistant.
We draw on a method employed principally by L. Paulson in order to introduce a notion of trace for the executions of the protocol we study. However, the treatment of the protocol in question makes some modifications to Paulson’s approach necessary, related to the modelling of the internal state of the agents. We exploit the formal framework provided by Coq to validate the specification by proving properties of it, properties which are expressed through a notion of well-formedness of the rules defining the steps of the protocol. Keywords: electronic commerce protocol, micro-payment, specification, formal proof 1 Introduction The formal analysis of cryptographic protocols has grown into an important research thread. Many methods have been proposed for mechanically checking different kinds of properties of different kinds of protocols. Recently, the development of e-commerce has further encouraged the study of formal methods for security, either through the adaptation of already existing approaches or through the design of new techniques. The work at hand is the result of a collaboration between an academic laboratory and a startup, called NT Sys, interested in developing e-commerce technology. We focus more precisely in this paper on a micro-payment protocol, referred to as the “light signatures protocol”. Micro-payment protocols are used in situations where a client is liable to buy a large number of goods, each of them having a small cost. Typically, one can think of an Internet service, the content seller, providing information in such a way that each mouse click in a certain set of web pages has a (small) price. Traditional, so-called “heavy” cryptography (e.g. using RSA or elliptic curves) cannot be used for each such micro-contract, as it would dramatically slow down the transactions.
Alternative solutions must then be found, based on a trade-off between the efficiency of the protocol and security matters, which are of course always crucial in the context of electronic commerce. The light signatures protocol\(^3\) has been introduced to address this question. The basic ideas of the protocol have been presented in [Opp01, PBM01], together with a rather informal description of its behaviour. The general micro-payment system is structured in such a way that a confidence party, whose role is to arbitrate transactions between several content sellers and Internet end-users, aggregates the micro-transactions and monitors the contracts. In this infrastructure, the light signatures protocol is a client/server security protocol intended to be run between a content seller and the confidence party. Its goal is to provide authentication for the online control and registration of the transactions. The formal definition of the light signatures protocol has been developed progressively, towards an increasing level of precision in the description of the protocol steps. What we present here is the result of this process: we describe the formal specification of the protocol, and establish results about this specification, but we do not address the security properties satisfied by the protocol itself. This is left for future work. It has actually turned out that, already in setting up a formal analysis of the original version of the protocol, many security issues could be raised, and small mistakes or imprecisions could be fixed. In this paper, we describe the resulting protocol in several steps, from a rather informal account of the interactions between parties to a more mathematical presentation of the protocol, and finally to a mechanical definition. For this last step, we have used the Coq proof assistant [Bar01], which is an interactive theorem prover based on the Calculus of (Co)Inductive Constructions.
In moving towards mechanical verification, as is the case in this work, we isolate some difficult points in the protocol that we want to understand and validate, and leave aside some others (either by keeping them outside the representation framework or by “underspecifying” them). This is frequently the case when performing an analysis like ours. What is important, though, is to keep track of the simplifying assumptions that are made, so as to be aware of “what is actually being proved”, and, in a further step, to characterise the attacks against which the protocol has been proved to be robust. It may also be the case that the specification process itself introduces some unexpected behaviour of the protocol: for example, a given simplification in the protocol can have the effect of allowing more interactions between agents than were actually possible in the original, more intricate, version. One is therefore interested in reasoning about the specification under construction, to establish properties that are more related to the specification itself than to the system being specified. Another motivation for this kind of reasoning comes from our experience in designing the description of the protocol: as the presentation moved towards a very formal account, some of the people involved in this work, who were less skilled in formal methods, wanted to be sure that we were “still talking about the same thing”. If we have a way to prove some properties of the specification saying that the formal entities that we manipulate actually do what they are supposed to do, we can increase the confidence in the adequacy of the specification under construction.

\(^3\) The use of light signatures in micro-payment systems is patented by NT Sys; light signatures are based on an original idea by Jacky Montiel.
We have developed this kind of reasoning within our Coq specification of the light signatures protocol, in order to prove mechanically that the rules of the protocol as they are stated in Coq satisfy a kind of well-formedness (a notion that we define below). Contributions The contributions of this work are twofold. First, we present the formal specification of a new micro-payment protocol, described as a set of traces. Second, we propose a novel approach for mechanical reasoning about the specification itself, that can be combined (in the Coq system) with Paulson’s inductive approach for the study of the traces and of possible interferences with evil agents. Related works Paulson’s work [Pau97, Pau98, Pau99] has been a major influence in introducing a notion of traces for our protocol. However, we use Coq instead of Isabelle/HOL [Pau94b, Pau93], which will have some consequences for our formalisation, as will be seen in Sec. 4. It has to be noted that the work of Bolignano [Bol96] has led to the definition of a theorem proving framework for the verification of cryptographic protocols in Coq. Our approach however is closer to Paulson’s in the way we formalise traces. Outside the theorem proving community, different techniques have been proposed for the formal study of cryptographic protocols. We can mention in particular approaches based on model checking [Low97, Mea96], on process algebra descriptions of protocols [AL00, AF01, AG99, Bor01], as well as term-rewriting techniques [JRV00]. Outline of the paper The paper is structured towards an increasingly formal understanding of the micro-payment protocol we study. We first introduce its principles in Section 2. In Section 3, we turn to a more mathematical presentation, by defining a notion of traces generated by protocol runs. We then turn to the Coq mechanisation, by describing the specification of the protocol in Section 4 and the proofs about our specification in Section 5.
In Section 6, we conclude and comment on some distinctive features of the approach we have followed, especially in the methodology we use for the formalisation of the micro-payment protocol. Parts of this work have been presented in [Opp01b]. The Coq development corresponding to the formalisation discussed in Sections 4 and 5 is available at http://www.ens-lyon.fr/~hirschko/oppidum 2 The Micro-Payment Protocol: Informal Description In this section, we present the light signatures protocol, as introduced in [PBM01], and discuss the simplifying assumptions that we make for the sake of our formal study. 2.1 Preliminaries The light signatures protocol specifies the transactions between a client and a confidence party that plays the role of a server. Its general principle is as follows: first the client generates a seed α, and sends α to the server. Heavy cryptography (typically, RSA) is used for this transaction. Once both agents know the seed, they can prepare for the protocol by computing a sequence of 2N nonces, where the integer N is a bound on the number of transactions between client and server in the current session. The nonces are computed by applying a one-way hash function H to α, generating the sequence \[ \alpha, H(\alpha), H^2(\alpha), H^3(\alpha), \ldots \] Thus the ith nonce \( N_i \) is equal to \( H^{i-1}(\alpha) \). **Remark 1** \( H \) being a hash function, it is easy to compute \( N_{i+1} \) from \( N_i \), and we suppose it is much more difficult the other way around. This is the basic idea which is used in the transactions that take place once initialisation is done and both agents know the \( N_i \)’s.
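The nonce chain described above can be sketched concretely. The following Python fragment is ours, not part of the paper's development; it uses SHA-256 as a stand-in for the unspecified one-way hash function H, and all names are illustrative:

```python
import hashlib

def hash_chain(seed: bytes, length: int) -> list[bytes]:
    """Return [N_1, ..., N_length] with N_i = H^(i-1)(seed)."""
    nonces = [seed]
    for _ in range(length - 1):
        # one hash application: N_{i+1} = H(N_i)
        nonces.append(hashlib.sha256(nonces[-1]).digest())
    return nonces

# For N transactions, both agents precompute 2N nonces from the shared seed.
chain = hash_chain(b"alpha", 6)  # 2N = 6 nonces for N = 3 transactions
```

The one-way property of Remark 1 is what makes the scheme cheap: stepping forward in the chain is a single hash call, while stepping backward is assumed infeasible.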
Once the generation of the nonce sequence is completed on both sides, the agents start exchanging messages of the following form: \[ \langle Ag, k, \text{Sign}(C, k), C \rangle, \] where \( Ag \) is the identifier of the agent (client or server), \( C \) is the content being transmitted (a query of the client or an acknowledgment of the server), \( k \) is an integer computed from the agent’s current session index (to be described below), and \( \text{Sign}(\cdot, \cdot) \) is a signature function that is used to authenticate the message. We set \[ \text{Sign}(C, k) \overset{def}{=} H'(C, N_{2N-1-k}), \] where \( H' \) is a hash function (possibly taken equal to \( H \)). Along the execution of the protocol, each agent maintains an internal index that is incremented after each query/answer; this index allows both parties to synchronise and to detect strange behaviours in transactions. By definition, the client sends messages of the form \(\langle Clt, 2 * Ind_{Clt}, \text{Sign}(Q_{Ind_{Clt}}, 2 * Ind_{Clt}), Q_{Ind_{Clt}}\rangle\), where \(Ind_{Clt}\) is the current value of the client’s index. Symmetrically, the server answers using nonces corresponding to odd integers computed from its current internal index \(Ind_{Srv}\), sending acknowledgments of the form \(A_{Ind_{Srv}}\). It can be noted that by definition of the signature function, the nonces will be used in reverse order, according to Remark 1 above. 2.2 Description of the Protocol Let us now describe the protocol in a more formal way. We adopt the usual notation of the form \(A \to B : M\) to say that at a given step of the protocol, agent \(A\) sends message \(M\) to agent \(B\). - **Initialisation and first message** the client computes a fresh seed \(\alpha\) and sends it using public key cryptography to the server: \[ (Clt_0) \quad Clt \to Srv : \langle Clt, 0, \text{Sign}(\{\{\alpha, Q_0\}_{S_{Clt}}\}_{P_{Srv}}, 0), Q_0\rangle.
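The signature and the 4-tuple messages can be sketched as follows; this is again our own illustration, with SHA-256 standing in for H' and a 0-indexed Python list where `nonces[i]` holds \(N_{i+1}\), so that \(N_{2N-1-k}\) is `nonces[2N - 2 - k]`:

```python
import hashlib

def sign(content: bytes, k: int, nonces: list[bytes]) -> bytes:
    """Light signature, Sign(C, k) = H'(C, N_{2N-1-k}).

    Since k grows over the session, nonces are consumed in reverse order.
    """
    big_n = len(nonces) // 2  # N: bound on transactions, 2N nonces in total
    return hashlib.sha256(content + nonces[2 * big_n - 2 - k]).digest()

def client_message(ind: int, query: bytes, nonces: list[bytes]) -> tuple:
    """Client request <Clt, 2*Ind, Sign(Q_Ind, 2*Ind), Q_Ind> (rule Clt_1)."""
    k = 2 * ind
    return ("Clt", k, sign(query, k, nonces), query)

def server_message(ind: int, ack: bytes, nonces: list[bytes]) -> tuple:
    """Server answer <Srv, 2*Ind+1, Sign(A_Ind, 2*Ind+1), A_Ind> (rule Srv_1)."""
    k = 2 * ind + 1
    return ("Srv", k, sign(ack, k, nonces), ack)
```

A receiver holding the same nonce list authenticates a message by recomputing the signature field from the carried content and comparing.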
\] \(S_{Clt}\) and \(P_{Srv}\) are respectively the client’s private key and the server’s public key; the private and public keys of every agent are supposed to be the inverse of each other in the encryption/decryption process. Once both agents know the seed \(\alpha\), they compute the sequence of nonces \(N_i\) and store it locally (in a protected area). Note that the very first message of the protocol has a somewhat special form, because, together with the first request \(Q_0\), the client also has to send the seed \(\alpha\). Therefore, public key cryptography is used to encrypt the pair \(\{\alpha, Q_0\}\), using the client’s private key \(S_{Clt}\) and the server’s public key \(P_{Srv}\). A different initialisation step is actually described in [PBM01], where the first message is \(\langle Clt, 0, \{\alpha\}_{S_{Clt}}, \text{Sign}(Q_0, 0), Q_0\rangle\). The modification we have brought here is mainly due to technical reasons, in order for all messages to be 4-tuples, which will simplify the formal study. However, there does not seem to be any reason to fear a security loss in encrypting the seed together with \(Q_0\) instead of treating both contents separately as is done in [PBM01]. - **Client’s requests** the requests of the client—represented as the \(Q_i\)’s—are sent in messages corresponding to even indices of the nonces: \[ (Clt_1) \quad Clt \to Srv : \langle Clt, 2 * Ind_{Clt}, \text{Sign}(Q_{Ind_{Clt}}, 2 * Ind_{Clt}), Q_{Ind_{Clt}}\rangle. \] - **Server’s responses** symmetrically, answers of the server—the \(A_i\)’s—are sent with odd nonce indices: \[ (Srv_1) \quad Srv \to Clt : \langle Srv, 2 * Ind_{Srv} + 1, \text{Sign}(A_{Ind_{Srv}}, 2 * Ind_{Srv} + 1), A_{Ind_{Srv}}\rangle. \] The last query/answer exchange in a normal run of the protocol happens when rules \(Clt_1\) and \(Srv_1\) are used with \(Ind = N - 1\), and has the effect of concluding the session.
- **Desynchronisation on the client’s side** rule \(Clt_1\) above specifies the emission of the client’s requests during a normal execution of the protocol. In particular, before sending the \((i + 1)\)th request, the client makes sure that the last message received from the server has been generated with the right index (i.e. \(i\)). Let us now examine the case where the index \(Ind_{Srv}\) in the last message received by the client does not correspond to its “view” of the protocol, as defined by index \(Ind_{Clt}\). There are two cases. Either the last message from the server is such that \(Ind_{Srv} < Ind_{Clt}\): this probably corresponds to an old message from the server, that has been accidentally resent to the client. In that case, the client simply ignores it (or may decide to send a warning message to the server, in an enriched version of this protocol). If \(Ind_{Srv} > Ind_{Clt}\) (and if the message from the server has the right shape), the situation is much more suspect: in some way, either an evil agent has managed to construct messages to be sent “in the future”, or for some reason the server erroneously believes that more transactions have taken place than is actually the case. The best thing to do here is to abort the protocol session; this is done by sending a message built with the last client integer (namely \(2N - 2\)) and pointing out the fact that an error has occurred: \[(Clt_2)\quad Clt \rightarrow Srv : \langle Clt, 2N - 2, \text{Sign}(\text{Error}, 2N - 2), \text{Error}\rangle.\] \(\text{Error}\) is a special message used to indicate a misbehaviour. - **Desynchronisation on the server’s side** A similar situation may occur on the server’s side, with a received client index \(Ind_{Clt}\) that does not correspond to the server’s internal index \(Ind_{Srv}\). If \(Ind_{Clt} < Ind_{Srv}\), this is probably an old message, and the server ignores it.
If \(Ind_{Clt} > Ind_{Srv}\), then it may be the case that some messages from the client were lost; the server resynchronises with the client by setting its current session index to \(Ind_{Clt}\) (and may emit a warning message). \[(Srv_2)\quad Srv \rightarrow Clt : \langle Srv, 2 * Ind_{Clt} + 1, \text{Sign}(A_{Ind_{Clt}}, 2 * Ind_{Clt} + 1), A_{Ind_{Clt}}\rangle.\] Note that at every step, a transaction has to be initiated by the client and acknowledged by the server: this is the reason why the reactions of the agents differ whenever they discover that the other agent seems to be “ahead of time” in the protocol. - **Time outs** If the client fails to receive an acknowledgment from the server for a given amount of time (to be fixed in practice), he generates a time out, and sends his last message to the server again. After a given number of time outs, the client decides to abort the protocol session, by sending the same message as in rule \((Clt_2)\) above. Therefore, in addition to his current session index, the client also has to keep track of the number of time outs that have occurred, in order to be able to decide to abort the transactions. On the server side, the time out mechanism is somewhat simpler: after receiving no query from the client for a certain amount of time, the last nonce is sent along with an error message to inform the client that the protocol is aborted. This corresponds to the following step: \[Srv \rightarrow Clt : \langle Srv, 2N - 1, \text{Sign}(\text{Err}, 2N - 1), \text{Err}\rangle.\] 2.3 Discussion – Simplifications Made for the Formal Study We have given a first description of the light signatures protocol, which is rather close to the presentation of [PBM01]. With respect to that work, we have already introduced a few small refinements in some protocol steps, but many aspects remain imprecise.
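The two desynchronisation policies can be summarised in a small sketch of our own (function and status names are illustrative, not from the paper):

```python
def client_check(ind_clt: int, ind_srv_received: int) -> str:
    """Client side (rules Clt_1/Clt_2): compare the received server index
    with the client's own view of the session."""
    if ind_srv_received < ind_clt:
        return "ignore"   # probably an old, accidentally resent message
    if ind_srv_received > ind_clt:
        return "abort"    # suspect: messages "from the future" -> rule Clt_2
    return "proceed"      # indices agree: send the next request

def server_check(ind_srv: int, ind_clt_received: int) -> tuple[int, str]:
    """Server side (rule Srv_2): the server never aborts on a large client
    index; it resynchronises instead, since transactions are client-initiated."""
    if ind_clt_received < ind_srv:
        return ind_srv, "ignore"            # old message
    if ind_clt_received > ind_srv:
        return ind_clt_received, "resync"   # some client messages were lost
    return ind_srv, "answer"
```

The asymmetry between the two sides mirrors the remark above: every transaction is initiated by the client and acknowledged by the server, so only the client treats an “ahead of time” peer as a fatal error.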
For example, it is not clear when the agents’ index modifications have to occur, or how the session starts on the server’s side. One of the contributions of the formal description we shall present in the next section is to give a more detailed treatment of these issues. We first discuss here some of the aspects of the light signatures protocol that are not handled in our formal study. As said before, it is important to be aware of the simplifications we make for the sake of our analysis, in order to understand the limitations of the specification. The first issue that we do not address here is time. The overall duration of a session of the protocol, as well as the value of time outs on the client’s and the server’s sides, are to be fixed by practical experiments on an actual implementation of the protocol. Note however that the shape of the transition rules we shall present in the next section determines (temporal) causality relations between message emissions. Neither do we describe the mechanisms used to take several simultaneous sessions into account, apart from the handling of freshness of the seed \( \alpha \). Such issues are rather common in cryptographic protocols, and we believe that traditional techniques can be used for this purpose. Finally, as said above, we do not handle warning messages in the current description of the protocol. These could be included in a future, more detailed, specification of the light signatures. 3 A Notion of Trace In this section, following Paulson’s approach, we inductively define a set of traces generated by possible protocol runs. Informally, the idea is to represent the traffic on the network due to the protocol, and describe the messages that can be sent by the various agents, these messages being possibly intercepted by an evil agent. Agents which obey the protocol react to the presence of messages of a certain form on the network by emitting new messages, and so on until the protocol completes.
Of course, at any point in the execution, some agent watching the traffic on the network can choose to pick a message and try to decode it, or to resend either previously emitted messages or newly generated ones. This is captured by the Spy’s behaviour in Paulson’s framework [Pau98]. In the present study, we do not yet represent the spy and its possible attacks, but rather focus on the construction of protocol traces. The formalisation we obtain can be enriched following the lines in [Pau98] to handle attacks. According to this approach, a typical rule for the inductive construction of protocol traces would state something like “if a message of shape \( M \) is present on the network, then add a message \( M' \) to the current traffic”, this behaviour corresponding to some step in the protocol where an agent \( B \) replies to \( A \)’s message \( M \) by sending message \( M' \). We shall see below how this kind of construction can be defined in a formal way. 3.1 Internal State of Agents An important specificity of the protocol we study is that both the client’s and the server’s behaviours depend on an internal representation of the current status of the protocol session (session index and number of time outs for the client, session index for the server). This seems to be in contrast with the methodology described above, where the traces are generated by adopting a global point of view on message traffic. Indeed, we have modified the framework to take into account a global notion of state, to represent the information maintained by each agent. The informal description of each protocol step given above is then refined into something like “if a message of shape M is present on the network and the current state is E, then set the state to be E' and emit message M'”. Of course, this way of presenting the protocol rules is much too general, for at least two reasons. First, the message M' that will be emitted will clearly depend most of the time on the shape of M.
Second, the possible modification to the global state will heavily rely on the agent which is supposed to react at this step of the protocol, and in particular, we wish to be sure that the agent may modify the values of “its” components of the state, leaving e.g. the index of the other agent unchanged. It turns out indeed that adopting a very general approach, as we do here, simplifies the task of the formalisation: trying to separate the global structure representing the state of each agent into several pieces, to take into account the private character of state, would probably lead to more complex notations and representation mechanisms. On the other hand, one has to make sure that the flexibility we gain is not misused, for example by allowing an agent to modify another agent's private values: this is the issue we shall address in Section 5. 3.2 Generating Traces Let us now move to the formal definition of traces. Every step of the protocol (except initialisation, see below) is described by a rule of the form \[ \frac{E \qquad M}{M'\,;\;E'} \] This judgment means that whenever the system is in state E and M belongs to the current trace, first message M' is emitted (added to the trace), then the system evolves to state E'. It is important to notice that state modification takes place after emission of the message, so that one can tell the value of indices possibly used in the content of the message. The rules that define the traces generated by protocol runs are given on Figure 1. They correspond to a precise formalisation of the description given in Sec. 2. The correspondence is rather easy for rules \(c_1\), \(c_2\), \(s_1\) and \(s_2\), which embed rules \(Clt_1\), \(Clt_2\), \(Srv_1\) and \(Srv_2\) respectively. Note however that in this presentation the update of internal indices has been made explicit.
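As an illustration (ours, not the paper's Coq encoding), a rule of this shape can be read operationally as a guarded transition on a (state, trace) pair; note that M' is built from the *old* state E, since emission precedes the state update:

```python
from typing import Callable, Optional

State = tuple
Message = tuple

def apply_rule(state: State,
               trace: list[Message],
               guard: Callable[[State, Message], bool],
               emit: Callable[[State, Message], Message],
               update: Callable[[State, Message], State]
               ) -> Optional[tuple[State, list[Message]]]:
    """One protocol rule "E & M |- M'; E'": if some message in the trace
    satisfies the guard in the current state, emit M' (computed from the
    old state E) and only then update the state. Names are illustrative."""
    for m in trace:
        if guard(state, m):
            new_msg = emit(state, m)       # uses indices from state E
            return update(state, m), trace + [new_msg]
    return None                            # rule not applicable here
```

Trace generation then amounts to repeatedly applying whichever rules are enabled, which is exactly the inductive reading of Figure 1.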
The rules for the initialisation of the protocol on both sides, namely \(c_0\) and \(s_0\), specify how the agents start a new session and set their indices to 0. In particular, rule \(c_0\) has no premise, as the client can decide at any time to start a protocol session. With respect to the presentation of [PBM01], we have added rule \(s_0\) to model the protocol initialisation on the server's side: this allows us to state explicitly at which point the session index of the server is set to 0, and to make precise the freshness condition on the seed \(\alpha\). Indeed, without this freshness condition, any evil agent could keep resending the client's first message, which would have the effect of restarting the protocol on the server's side, thus enabling a form of denial of service.

[Figure 1: Definition of traces. The figure lists the rules in the format introduced above: \(c_0\), \(c_1\) and \(c_2\) for the client (with \(\aleph = \{\{\alpha, Q_0\}_{S_{Clt}}\}_{P_{Srv}}\) and side condition \(c < N\)), \(s_0\) (with the freshness condition on \(\alpha\)), \(s_1\) and \(s_2\) for the server (side conditions \(c = s\) and \(c > s\)), and \(\tau_{c,1}\), \(\tau_{c,2}\) (guarded by the number of client time outs relative to a bound \(\tau_{max}\)) and \(\tau_s\) for the handling of time outs.]

The rules for the description of time outs are also new with respect to [PBM01]: they allow us to
take time outs into account in our description of the protocol without explicitly giving a notion of time along protocol runs. Indeed, we instead introduce a form of non-determinism in the description of the protocol, which makes it possible to model any possible run of the protocol (possibly interleaved with server or client time-outs), thus freeing us from the necessity of handling a notion of time. Rules $\tau_{c,1}$ and $\tau_{c,2}$ model time outs on the client's side, while rule $\tau_{s}$ defines the server's time outs. 4 The Protocol in Coq 4.1 The Coq Proof Assistant For the implementation of our specification, we have adopted the Coq proof assistant [Bar01]. This system is a theorem prover based on the Calculus of (Co)Inductive Constructions, that allows the user to define theories and interactively build proofs about the objects that have been introduced. One of the main original features of Coq with respect to Isabelle is the presence of dependent types [CH88], which we exploit in the present work. Indeed, while on one hand one could argue that Isabelle/HOL provides a more powerful language of tactics, Coq's metalanguage comes with greater expressiveness, which shall be exploited below to reason on our formalisation within the prover. Coq's syntax is somewhat reminiscent of a programming language à la ML, with the following notations and definitions: - We shall work with Coq's Set and Prop kinds. While Prop is the kind for (non-informative) proof objects, Set is used to build constructions whose structure can be analysed. This will be useful below. - Product types (resp. abstractions) are written $(x:T)T'$ (resp. $[x:T]T'$). When $T'$ expresses a property or a logical statement, $(x:T)T'$ may be read "for all $x$ of type $T$, $T'$ holds". - Inductive types are declared using the Inductive keyword, by providing the type of each constructor.
After such a definition, the corresponding case-analysis and elimination principles are automatically computed by Coq. These informal explanations should be enough to follow the code excerpts that are provided in the following paragraphs. The reader interested in more details should refer to the documentation of the system [Bar01]. 4.2 Representing the Entities Involved in the Protocol Figure 2 presents the Coq code for the specification of the data structures we need to specify the micro-payment protocol. Let us paraphrase these definitions: - We represent two agents, the client and the server (according to Paulson’s methodology, the spy shall come into the play afterwards). - Message bodies can either consist in client’s queries ($Q_1, Q_2, \ldots$), server’s answers ($A_1, A_2, \ldots$), or error messages. - The specification of the signature function boils down to a tuple definition. Its properties can be postulated independently. - Current state is given by a quadruple of integers.

    Inductive agent : Set := Clt : agent | Serv : agent.

    Parameter seed : Set.

    Inductive content : Set :=
        A : nat -> content
      | Q : nat -> content
      | Err : content
      | Seed : seed -> content -> content.

    Inductive signed : Set := Sign : content -> nat -> signed.

    Inductive message : Set :=
        msg : agent -> nat -> signed -> content -> message.

    (* the global state: a quadruple of integers
       (constructor name reconstructed) *)
    Inductive state : Set :=
        mk_state : nat -> nat -> nat -> nat -> state.

    (* an axiomatisation of a set constructor *)
    Parameter set : Set -> Set.
    Parameter in_set : (S:Set)(set S) -> S -> Prop.
    Parameter add_set : (S:Set)S -> (set S) -> (set S).
    Parameter empty_set : (S:Set)(set S).

    Definition mk_set := [S:Set][x:S](add_set S x (empty_set S)).

    Definition incl_set := [S:Set][s1,s2:(set S)]
        (x:S)(in_set S s1 x) -> (in_set S s2 x).

    (* a very reasonable property relating in_set and add_set *)
    Parameter in_add : (S:Set)(s:(set S))(x:S)(in_set S (add_set S x s) x).

    Definition trace := (set message).

Figure 2: Coq definitions for the entities involved in the protocol
- The definition of the trace relies on a straightforward axiomatisation of a set constructor enjoying elementary properties. This part of the specification is left unspecified for the moment, as it does not influence the results we are interested in here. 4.3 Implementing Traces The rules given at Figure 1 can be translated rather directly into a Coq definition of an inductive object \( \text{micro} \) of type \( \text{state} \rightarrow \text{trace} \rightarrow \text{Set} \). Intuitively, a term of type \( (\text{micro } e \ t) \) means that there exists an execution of the protocol leading to state \( e \) and generating trace \( t \). Here is an example, giving the type associated to the constructor \( \text{c1} \) of the inductive type \( \text{micro} \):

    c1 : (e:state)(t:trace)(micro e t) -> (c:nat)
         (in_set message t
            (msg Serv (S (mult (2) c))
                 (Sign (A c) (S (mult (2) c))) (A c))) ->
         ((state_c e)=c /\ (lt c N)) ->
         (micro (inc_c e)
                (add_set message
                   (msg Clt (mult (2) (S c))
                        (Sign (Q (S c)) (mult (2) (S c))) (Q (S c)))
                   t))

As we can see, modulo some syntactical conversions (\text{mult} is Coq's multiplication on natural numbers, \text{S} is the successor function, \text{/\textbackslash} is conjunction, and \text{state\_c} and \text{inc\_c} are obvious functions to manipulate the state), we basically recognise rule \( c_1 \) from Figure 1. Keeping rules close to their formulation on paper is good practice because it gives confidence that no hidden anomaly or extra hypothesis is added in the process of porting the specification into the machine. Of course, this is possible in our case because the rules we use to describe traces are already very formal.
Such rules were obviously not the kind of tools we were using at the early stages of the protocol formalisation. It is useful to go through a step where the definition is as precise as possible on paper before moving to the machine, so that the person(s) in charge of the mechanical specification take as few unnoticed design decisions as possible as far as the protocol is concerned (we indeed proceeded this way in the present work).

\textbf{Limitations of the specification} For the moment, we have not taken into account the freshness condition on the seed \( \alpha \), neither have we formalised rules \( r_{0,1} \), \( r_{0,2} \), and \( r_{0} \) in Coq. Defining the properties of \( \alpha \) should be done at some point when verifying the protocol, exactly like specifying the behaviour of the signature function \( \text{Sign} \). As we have said above, this is not our purpose yet, as we are rather interested in examining the kind of traces that are generated when executing our micro-payment protocol. We believe, though, that providing a notion of freshness for seeds within our specification should not be a problematic issue. As for leaving the rules for the management of time outs outside our specification, we think it can be good practice to first study the protocol without time outs, to check its behaviour in "normal conditions", and then to add time outs into the game. Moreover, the specification of the protocol we work with is smaller without time outs, which helps the clarity of our study. Here again, we do not think that formalising the extra rules to handle time outs would cause any difficulty in the framework we are presenting.

5 Reasoning about the Encoding of the Protocol

5.1 State Integrity

As has been noted in paragraph 3.1, the framework we use to formalise the protocol rules and their effect on the internal states of agents is rather general, and may a priori make it possible to represent quite weird behaviours.
One problem is due to the notion of global state: when formulating a rule, any information about any agent’s state can be referenced. Well-behaved protocols should nevertheless be designed in such a way that every agent only has the possibility to modify its own component of the global state (one could even think of situations where the internal state of other agents should not be accessible in reading either, not only in writing). We refer to this property as state integrity, to express the fact that the protocol rules indeed respect this assumption. In the following, we define a framework to capture within Coq a notion of well-formedness for protocol rules, which basically says that the interaction described by a given rule respects state integrity.

5.2 Well-Formedness of the Protocol Specification

In order to reason on the evolutions of the global state, we define a function extracting from a trace object a pair of successive states during the execution of the protocol. This function, called next_state, is of type

state -> (e:state)(t:trace)(micro e t) -> Prop.

It is defined by case analysis on the structure of its fourth argument, namely an hypothesis of type (micro e t). Using implicit arguments (a mechanism available in Coq to simplify notations by allowing the user to omit redundant parameters in function calls), we can reason on terms of type (next_state e0 H) (H being of type (micro e t)); the existence of such a term represents the fact that the protocol makes state e0 evolve into e. Analogously, we can introduce a function added_message, extracting from an hypothesis H of type (micro e t) the last message added in the execution of the protocol so far.
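To illustrate the state-integrity property that next_state and added_message are designed to check, here is a small Python sketch (illustrative only; the per-agent dictionary representation of the global state is our assumption, not the Coq encoding):

```python
# Illustrative sketch: a protocol step records the sender of the emitted
# message and the global state after the step. State integrity says that
# a step may only change the sender's own component of the global state.

def check_state_integrity(initial_state, steps):
    """initial_state: per-agent components, e.g. {'Clt': 0, 'Srv': 0}.
    steps: list of (sender, new_state) pairs, one per emitted message.
    True iff every step leaves the other agents' components unchanged."""
    prev = initial_state
    for sender, new in steps:
        for agent, component in prev.items():
            if agent != sender and new[agent] != component:
                return False  # some other agent's state was modified
        prev = new
    return True

# A well-behaved trace: each agent only increments its own counter.
ok = check_state_integrity(
    {'Clt': 0, 'Srv': 0},
    [('Clt', {'Clt': 1, 'Srv': 0}),
     ('Srv', {'Clt': 1, 'Srv': 1})])

# A trace violating integrity: the server modifies the client's counter.
bad = check_state_integrity(
    {'Clt': 0, 'Srv': 0},
    [('Srv', {'Clt': 1, 'Srv': 1})])
```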
Functions next_state and added_message are then used to state and prove the following theorem, establishing well-formedness of the constructors of the inductive type micro:

Theorem wf_rules : (e,e':state)(t:trace)
  (H:(micro e' t))(next_state e H) ->
  (* either the client is sending, and s is invariant ... *)
  ((state_s e)=(state_s e') /\ (msg_sender (added_message H))=Clt)
  \/
  (* ... or the server is sending, and c is invariant. *)
  ((state_c e)=(state_c e') /\ (msg_sender (added_message H))=Srv).

Theorem wf_rules expresses the fact that, at any step in the execution of the protocol, only the internal state of the sender of the last emitted message may have changed in the last protocol step. In other words, this means that no agent can modify other agents' state. Once the functions for the analysis of protocol traces have been defined, this result is easily proved by case analysis on the shape of the object of type (micro e' t).

5.3 Exploiting Dependent Types to Derive Proofs about the Specification

In the paragraphs above, we have been able to state and prove some properties about objects of type micro within the Coq system. To achieve this, we have defined ways to analyse the structure of an hypothesis H:(micro e t) in order to extract the information we needed for our proofs. This has been possible in a rather natural way by exploiting Coq's setting where proofs are terms (and propositions are types), these terms possibly having dependent types (like (micro e t)). These features are specific to Coq w.r.t. Isabelle/HOL, so we believe that the proofs we have derived could not be directly adapted to the setting of Paulson.
\textbf{Remark 2 (About Set and Prop)} We have defined micro as a predicate having values in Set because we were interested in analysing the structure of a trace. Indeed, Coq's distinction between the kinds Prop and Set introduces a form of asymmetry between types carrying constructive information, which are in Set, and types in Prop, which are seen as "comments", to be erased when performing program extraction. As a consequence, it is not possible to eliminate an object in Prop to construct something in Set, which is exactly what we wanted to do when defining the function next_state above. Therefore, we have been led to define micro in Set, so as to have all the elimination principles we need to mechanically derive proofs about the shape of protocol execution traces.

6 Conclusion

In this paper, we have presented the formal specification of an original micro-payment protocol, and implemented its specification in the Coq proof assistant. In doing this, we have modified Paulson's approach by introducing a notion of internal state of the agents, and we have established some well-formedness properties of the encoding related to this novel state management. The fact that we have to manipulate some internal knowledge of the agents along the execution of the protocol is not due to the kind of protocol we have examined, but rather to the design choices that have been made in defining the protocol. However, this specificity has been smoothly integrated into the framework used by Paulson, which demonstrates the flexibility of the inductive approach for the verification of cryptographic protocols. Using Coq instead of Isabelle/HOL has made it possible to reason about the formalisation within the prover rather easily. The theorem we have proved in paragraph 5.2 above captures a property of the constructors of the inductive type we use to represent traces.
As we have seen, this comes quite naturally in Coq, using dependent types. It would be interesting to study how this kind of proof could be formalised in Isabelle/HOL. It could be the case that, to achieve this, the user would in some way have to deal with the technicalities involved in the object-level development of inductive types in Isabelle/HOL [Pau94a]. It is no surprise, anyway, that proofs about the object-level formalisation bring to light the rather strong differences between Coq's and Isabelle's underlying frameworks. Still, the general approach defined by Paulson can be adopted without many conceptual differences in both systems. We have found it important to define a mechanism to guarantee that the management of state is done in a safe way in the rules of our protocol, mostly because this feature is original w.r.t. previous works. It would be interesting, more generally, to find out whether other properties about the representation of the protocol could be formalised and verified within the prover, so as to be able to reason not only about the protocol, but also about its representation. It is indeed common knowledge that many important decisions in the formal proof activity are taken when specifying the entities one shall reason about: on one hand, oversimplifying the representation of a problem can make its verification easy (but questionable as far as the meaning of the resulting proof is concerned); on the other hand, specifying too many aspects of a problem may prevent any formal proof from being completed. Tools to study the specification itself can help in finding a compromise in this context.
PAXQuery: Efficient Parallel Processing of Complex XQuery

Jesús Camacho-Rodríguez, Dario Colazzo, Ioana Manolescu

To cite this version: Jesús Camacho-Rodríguez, Dario Colazzo, Ioana Manolescu. PAXQuery: Efficient Parallel Processing of Complex XQuery. IEEE Transactions on Knowledge and Data Engineering, Institute of Electrical and Electronics Engineers, 2015, 27 (7), pp.1977-1991. <http://www.computer.org/web/tkde>. <10.1109/TKDE.2015.2391110>. <hal-01162929>

HAL Id: hal-01162929, https://hal.archives-ouvertes.fr/hal-01162929. Submitted on 11 Jun 2015.

Abstract—Increasing volumes of data are being produced and exchanged over the Web, in particular in tree-structured formats such as XML or JSON. This leads to a need for highly scalable algorithms and tools for processing such data, capable of taking advantage of massively parallel processing platforms. This work considers the problem of efficiently parallelizing the execution of complex nested data processing, expressed in XQuery. We provide novel algorithms showing how to translate such queries into PACT, a recent framework generalizing MapReduce, in particular by supporting many-input tasks. We present the first formal translation of complex XQuery algebraic expressions into PACT plans, and demonstrate experimentally the efficiency and scalability of our approach.
Index Terms—XQuery processing, XQuery parallelization, XML data management. 1 INTRODUCTION To scale data processing up to very large data volumes, platforms are increasingly relying on implicit parallel frameworks [9], [20], [51]. The main advantage of using such frameworks is that processing is distributed across many sites without the application having to explicitly handle data fragmentation, fragment placement etc. By far the most widely adopted framework, MapReduce [20] features a very simple processing model consisting of two operations, Map which distributes processing over sets of (key, value) pairs, and Reduce which processes the sets of results computed by Map for each distinct key. However, the simplicity of this processing model makes complex computations hard to express. Therefore, high-level data analytics languages such as Pig [39], Hive [48] or Jaql [12], that are translated (compiled) into MapReduce programs, have emerged. Still, complex processing translates to large and complex MapReduce programs, which may miss parallelization opportunities and thus execute inefficiently. Recently, more powerful abstractions for implicitly parallel data processing have emerged, such as the Resilient Distributed Datasets [51] or Parallelization Contracts [9] (PACT, in short). In particular, PACT pushes the idea of MapReduce further by (i) manipulating records with any number of fields, instead of (key, value) pairs, (ii) enabling the definition of custom parallel operators by means of second-order functions, and (iii) allowing one parallel operator to receive as input the outputs of several other such operators. Due to its declarative nature, a PACT program can have multiple physical execution plans with varying performance. At compile time, the compiler chooses an optimal strategy (plan) that maximizes parallelisation opportunities, and thus efficiency. 
The PACT model lies at the core of the Stratosphere platform [47], which can read data from and write data to the Hadoop Distributed File System (HDFS) [3]. In this work, we are interested in the implicit parallelization of XQuery [43], the W3C’s standard query language for XML data. The language has been recently enhanced with features geared towards XML analytics [22], such as explicit grouping. Given a very large collection of documents, evaluating an XQuery query that navigates over these documents and also joins results from different documents raises performance challenges, which may be addressed by parallelism. In contrast with prior work [13], [19], [29], we are interested in implicit parallelism, which does not require the application (or the user) to partition the XML input nor the query across many nodes. The contributions of this work are the following: 1) We present a novel methodology for massively parallel evaluation of XQuery, based on PACT and previous research in algebraic XQuery optimization. 2) We provide a translation algorithm from the algebraic operators required by a large powerful fragment of XQuery into operators of the PACT parallel framework. This enables parallel XQuery evaluation without requiring data or query partitioning effort from the application. Toward this goal, we first map XML data instances into PACT nested records, to ensure XML query results are returned after the PACT manipulations of nested records. Second, we bridge the gap between the XQuery algebra, and in particular, many flavors of joins [21], [34], [35] going beyond simple conjunctive equality joins, and PACT operators which (like MapReduce) are fundamentally designed around the equality of key values in their inputs. 
Our translation of complex joins is of interest beyond the XQuery context, as it may enable compiling other high-level languages [12], [39], [48] into PACT and other models, and thus their efficient parallelization by platforms such as Stratosphere [47] or Spark [4]. 3) We fully implemented our translation technique into our PAXQuery platform. We present experiments demonstrating that our translation approach (i) effectively parallelizes XQuery evaluation taking advantage of the PACT framework, and (ii) scales well beyond alternative approaches for implicitly parallel XQuery evaluation, in particular as soon as joins across documents are present in the workload. It is worth observing that, thanks to XML flexibility, PAXQuery can be exploited for efficiently processing large amounts of heterogeneous data, going from relational to JSON data. While JSON data can be easily and efficiently encoded into XML data in a streaming fashion, well-established techniques exist to efficiently map relational data into XML data (e.g., [24], [32]); actually, a basic encoding of tables to flat XML files would suffice, as PAXQuery is able to efficiently perform various kinds of joins, in order to recombine data coming from XML documents corresponding to different tables. The remainder of the paper is organized as follows. Section 2 introduces the problem by means of an example. Section 3 provides background on XML, XQuery, and the PACT model. Section 4 overviews our complete solution and characterizes the XQuery algebras targeted by our translation. Section 5 presents the translation algorithm from XQuery plans to PACT, at the core of this work. Section 6 describes our experimental evaluation. Section 7 discusses related work, and then we conclude.

2 Motivation

Example 1.
Consider the following XQuery that extracts the name of users, and the items of their auctions (if any):

```
let $spc := collection('people'),
    $cc := collection('closed_auctions')
for $sp in $spc/site/people/person, $id in $sp/@id
let $n := $sp/name
let $r := for $c in $cc//closed_auction,
              $b in $c/buyer/@person, $s in $c/seller/@person
          let $a := $c/itemref
          where $id = $b or $id = $s
          return $a
return <res>{$n,$r}</res>
```

We would like to evaluate this query over two large collections of documents (concerning people, respectively closed auctions) stored in HDFS. Evaluating the query in a massively parallel fashion as previously proposed e.g. in [29] requires the programmer to explicitly insert parallelization primitives for each query.

3 Background

In the following, we provide background on the XML data model and XQuery dialect we target (Section 3.1), and the PACT programming model used by Stratosphere (Section 3.2).

3.1 XML and XQuery fragment

XML data. We view XML data as a forest of ordered, node-labeled, unranked trees, as outlined by the simple grammar:

```
Tree   d ::= s_i | l_i[f]
Forest f ::= () | f · f | d
```

A tree $d$ is either a text node ($s_i$), or an element node having the label $l_i$ and a forest of children; in accordance with the W3C’s XML data model, each node is endowed with a unique identity, which we materialize through the $i$ index. A forest $f$ is a sequence of XML trees; () denotes the empty forest. For the sake of presentation we omitted attributes in our grammar.

**XQuery dialect.** We consider a representative subset of the XQuery 3.0 language [43]. Our goal was to cover (i) the main navigating features of XQuery, and (ii) key constructs to express analytical-style queries, e.g. aggregation, explicit grouping, or rich comparison predicates. However, extensions to support other XQuery constructs, e.g. if or switch expressions, can be integrated into our proposal in a straightforward manner.
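As an aside, the tree/forest grammar above can be transcribed as a small algebraic datatype; the following Python sketch (names and representation are ours, purely illustrative) makes the node-identity index explicit:

```python
from dataclasses import dataclass
from typing import List, Union

# Toy transcription of the grammar: a tree is a text node s_i or an
# element node l_i[f]; the integer 'ident' materializes node identity.

@dataclass
class Text:
    s: str
    ident: int

@dataclass
class Elem:
    label: str
    children: List['Tree']  # the forest of children
    ident: int

Tree = Union[Text, Elem]

# <person><name>Ana</name></person> (attributes omitted),
# with explicit node identities 0, 1, 2:
doc = Elem('person', [Elem('name', [Text('Ana', 2)], 1)], 0)
```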
The full presentation of our XQuery dialect, including the grammar, can be found in Appendix A. Figure 2 provides three sample queries. A path starts from the root of each document in a collection found at URI \( \text{Uri} \), or from the root of one document at URI \( \text{Uri} \), or from the bindings of a previously introduced variable. The path expression dialect **Path** belongs to a fragment of XPath [37]. We support two different types of comparators in predicates: (ValCmp) to compare atomic values, and (NodeCmp) to compare nodes by their identity. Finally, the **group by** clause groups tuples based on variable values. In Figure 2, queries \( Q_1 \) and \( Q_2 \) use only one collection of documents, while query \( Q_3 \) joins two collections. Further, \( Q_2 \) and \( Q_3 \) construct new XML elements, while \( Q_1 \) returns the result of an aggregation over nodes from the input documents.

### 3.2 PACT framework

The PACT model [9] is a generalization of MapReduce, based on the concept of parallel data processing operators. PACT plans are DAGs of **implicitly parallel operators**, that are optimized and translated into **explicit parallel data flows** by Stratosphere. We introduce below the PACT data model and formalize the semantics of its operators.

**Data model.** PACT plans manipulate **records** of the form:
\[ ( (f_1, f_2, \ldots, f_n), (i_1, i_2, \ldots, i_k) ) \]
where \( 1 \leq k \leq n \) and:
- \( (f_1, f_2, \ldots, f_n) \) is a list of fields \( f_i \). In turn, a field \( f_i \) is either an atomic value (string) or a list \( (r'_1, \ldots, r'_m) \) of records.
- \( (i_1, i_2, \ldots, i_k) \) is a possibly empty list of record positions in \([1 \ldots n]\) indicating the key fields of the record. Each of the key fields must be an atomic value.

A PACT operator is characterized by a parallelization contract, a user function (UF), and annotations and compiler hints. We describe these next.
1) **Parallelization contract.** A PACT can have \( k \geq 1 \) inputs, each of which is a finite bag of records. The contract determines how input records are organized into groups. 2) **User function.** The UF is executed independently over each bag of records created by the parallelization contract, therefore these executions can take place in parallel. For each input bag of records, the UF returns a bag of records. 3) **Annotations and/or compiler hints** may be used to enable optimizations (with no impact on the semantics), thus we do not discuss them further. The semantics of the PACT \( op \) given as input \( k \) bags of records \( I_1, \ldots, I_k \), with \( I_i \subset \mathcal{R} \), \( 1 \leq i \leq k \), and having the parallelization contract \( c \) and the user function \( f \) is: \[ \text{op}(I_1, \ldots, I_k) = \bigcup_{s \in \mathcal{C}(I_1, \ldots, I_k)} f(s) \] In the above, \( c \) builds bags of records by grouping the input records belonging to bags \( I_1, \ldots, I_k \); \( f \) is invoked on each bag produced by \( c \), and the resulting bags are unioned. **Predefined contracts.** Although the PACT model allows creating custom parallelization contracts, a set of them for the most common cases is built-in: - **Map** has a single input, and builds a singleton for each input record. Formally, given the bag \( I_1 \subset \mathcal{R} \) of records, Map is defined as: \[ \text{Map}(I_1) = \{ \{ r \} \mid r \in I_1 \} \] Numerous logical algebras have been proposed for XQuery [10], [21], [34], [42]. While the language has a functional flavor, most algebras decompose the processing of a query into operators, such as: navigation (or tree pattern matching), which given a path (or tree pattern) query, extracts from a document tuples of nodes matching it; selection; projection; join etc. A significant source of XQuery complexity comes from nesting: an XQuery expression can be nested in almost any position within another. 
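To make the contract semantics above concrete, the following toy Python model (illustrative only, not Stratosphere's actual API; records are simplified to (key, value) pairs and bags to lists) composes a contract with a user function exactly as in the equation for op:

```python
from collections import defaultdict
from itertools import chain

# Toy model of a PACT operator: a parallelization contract builds bags of
# records from the inputs, the user function (UF) runs on each bag, and
# the resulting bags are unioned, as in op(I1,...,Ik) = U_{s in c(...)} f(s).

def pact(contract, uf, *inputs):
    return list(chain.from_iterable(uf(bag) for bag in contract(*inputs)))

def map_contract(i1):
    # Map: one singleton bag per input record.
    return [[r] for r in i1]

def reduce_contract(i1):
    # Reduce: one bag per distinct key; records are (key, value) pairs here.
    groups = defaultdict(list)
    for key, value in i1:
        groups[key].append((key, value))
    return list(groups.values())

# Example: square every value, then sum the values per key.
squared = pact(map_contract, lambda bag: [(k, v * v) for k, v in bag],
               [('a', 1), ('b', 2), ('a', 3)])
sums = pact(reduce_contract,
            lambda bag: [(bag[0][0], sum(v for _, v in bag))],
            squared)
```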
In particular, nested queries challenge the optimizer, as a straightforward translation into nested plans leads to very poor performance. For instance, in Figure 2, \( Q_3 \) contains a nested subquery for \( \$t \) ... \( \$t \) (shown indented in the figure); let us call it \( Q_4 \) and write \( Q_3 = e(Q_4) \). A naïve algebraic expression of such a query would evaluate \( Q_4 \) once per result of \( e \) in order to compute \( Q_3 \) results, which is typically inefficient. Efficient optimization techniques translate nested XQuery into unnested plans relying on joining and grouping [7], [21], [35]. Thus, a smarter method to represent such a query is to connect the sub-plans of \( Q_4 \) and \( e \) with a \textit{join} in the plan of \( Q_3 \); the join condition in this example is \( \$b=\$i \). Depending on the query shape, such \textit{decorrelating} joins may be \textit{nested} and/or \textit{outer}. Our goal is to complement existing engines, which translate from XQuery to an internal algebra, by an efficient compilation of this algebra into an implicitly parallel framework such as PACT. This enables plugging a highly parallel back-end to an XQuery engine to improve its scalability. Accordingly, we aim to adapt to any XML query algebra satisfying the following two assumptions:
- The algebra is tuple-oriented (potentially using nested tuples).
- The algebra is rich enough to support decorrelated (unnested) plans even for nested XQuery; in particular, we consider that the query plan has been unnested before we start translating it into PACT.
Three observations are in order here. First, to express complex queries without nesting, the algebra may include any type of joins (conjunctive/disjunctive, value or identity-based, possibly nested, possibly outer), as well as grouping; accordingly, we must be able to translate all such operators into PACT.
Second, a tuple-based algebra for XQuery provides border operators for (i) creating tuples from XML trees, in leaf operators of the algebraic plan; and (ii) constructing XML trees out of tuples, at the top of the algebraic plan, so that XML results can be returned. Finally, we require no optimization but unnesting [35] to be applied on the XML algebraic plan before translating it to PACT; however, any optimization \textit{may} be applied before (and orthogonally to) our translation.

### 4.2 Algebra and data model

In the sequel, we present our work based on the algebra in [34]. We describe the nested tuple data model manipulated by this algebra, then present its operators.

**Nested tuples data model for XML.** The data model extends the W3C’s XPath/XQuery data model with \textit{nested tuples} to facilitate describing algebraic operations. Formally, a tuple $t$ is a set of variable-value pairs:
\[ \{(\$V_1,v_1), (\$V_2,v_2), \ldots, (\$V_k,v_k)\} \]
where the variable names $\$V_i$ are all distinct, and each value $v_i$ is either (i) an item, which can be an XML node, an atomic value or $\bot$, or (ii) a homogeneous collection of tuples (see below). Three flavours of collections are considered, namely lists, bags and sets, denoted as $(t_1,t_2,\ldots,t_n)$, $\{\!\{t_1,t_2,\ldots,t_n\}\!\}$, and $\{t_1,t_2,\ldots,t_n\}$, respectively. Tuple schemas are needed for our discussion. The schema $S$ of a tuple $t$ is a set of pairs $\{(\$V_1,S_1),\ldots,(\$V_k,S_k)\}$ where each $S_i$ is the schema of the value of the variable $\$V_i$. XPath and XQuery may perform navigation, which, in a nutshell, binds variables to the result of path traversals.
Navigation is commonly represented through tree patterns, whose nodes carry the labels appearing in the paths, and where some target nodes are also annotated with the names of the variables to be bound, e.g. $\$pc$, $\$i$ etc. The algebra we consider allows consolidating as many navigation operations from the same query as possible within a single navigation tree pattern, and in particular navigation performed outside of the for clauses [7], [21], [36]. Large navigation patterns lead to more efficient query execution, since patterns can be matched very efficiently against XML documents; for instance, if the pattern only uses child and descendant edges, it can be matched in a single pass over the input [17]. In the spirit of generalized tree patterns [18], annotated tree patterns [40], or XML access modules [6], we assume a navigation (nav) operator parameterized by an extended tree pattern (ETP) supporting multiple return nodes, child and descendant axes, and nested and optional edges. The XML construction operator (construct$_L$) is the border operator responsible for transforming a collection of tuples into XML forests [25], [46]. The information on how to build the XML forest is specified by a list $L$ of construction tree patterns (CTPs in short), attached to the construct$_L$ operator. For each tuple in its input, construct$_L$ builds one XML tree for each CTP in $L$ [34]. In our example, $L$ contains a single CTP that generates for each tuple an XML tree consisting of elements of the form $<res>\{\$n,\$r\}</res>$. We omit further details here; the interested reader may find them in Appendix B. Example 1 (continuation). The algebraic plan corresponding to the XQuery introduced in Section 2 is shown in Figure 5. For simplicity, we omit the variable types in the operators schema and only show the variable names. We discuss the operators starting from the leaves.
The XML scan operators take as input the ‘people’ (respectively ‘closed_auctions’) XML forests and create a tuple out of each tree in them. XML scan is one of the border operators.
**The algebraic representation of XQuery.** In the following, we introduce the translation process and the main operators by example. A methodology for translating our XQuery dialect to the algebra we consider was described in [7], and detailed through examples in [33]. The complete list of algebra operators and their semantics can be found in Appendix B.
**Full operator set.** We briefly comment below on the rest of the operators handled by our translation. The remaining unary operators are very close to their known counterparts in the nested relational algebra. These are flatten (flat$_p$), which unnests tuples, selection (sel$_\rho$) based on a predicate $\rho$, projection (proj$_V$), aggregation (agg$_{p,a,r}$) computing the usual aggregates over (nested) records, and value-based duplicate elimination (dupelim$_V$). One operator that is slightly different is group-by (grp$_{G_{id},G_v,s,r}$). In order to conform to XML semantics, the operator may group by identity based on the variables in \( G_{id} \), and/or by value on the variables in \( G_v \) [21], [34]. Binary operators include the usual cartesian product (prod), join (join), outer join (ojoin) and nested outer join (nojoin).
Fig. 6. Data model translation rules.
### 5 XML ALGEBRA TO PACT
Within the global approach depicted in Figure 4, this section describes our contribution: translating (i) from the Extended XQuery Data Model (or EXDM, in short) into the PACT Data Model (Section 5.1) and (ii) from algebraic expressions into PACT plans (Section 5.2). The most complex technical issues are raised by the latter.
XQuery algebraic plans are translated into PACT plans recursively, operator by operator; for each XQuery operator, the translation outputs one or several PACT operators, for which we need to choose (i) the parallelization contract (and possibly its corresponding key fields) and (ii) the user function, which together determine the PACT behavior. The hardest to translate are those algebraic operators whose input cannot be fragmented based on conjunctive key equalities (e.g., disjunctive joins), because all massively parallel operators in PACT are based on key equality comparisons [9].
**Translation rules.** As in [42], we use deduction rules to specify our translation. In a nutshell, a deduction rule describes how the translation is performed when certain conditions are met over the input. Our rules rely on translation judgments, noted $J$, $J_1$, etc., and are of the form:
\[ \frac{J_1 \quad \ldots \quad J_n}{J} \ (\text{cond}) \]
stating that the translation \( J \) (conclusion) is recursively made in terms of the translations \( J_1 \ldots J_n \) (premises) when the (optional) condition \( \text{cond} \) holds. The premise judgments \( J_1 \ldots J_n \) are themselves optional; their absence denotes that the rule handles the “fixpoint” (start of the recursive translation).
### 5.1 Translating XML tuples into PACT records
Rules for translating instances of EXDM into those of PACT rely on translation judgments of the form \( t \stackrel{\lambda}{\rightarrow} r \), read: “the EXDM instance \( t \) translates into the PACT record \( r \)”. The translation rules appear in Figure 6, where \( + \) denotes record concatenation. The rules produce records whose key fields are not set yet; as we will see in Section 5.2, the keys are filled in by the translation. Rule (TUPLE) produces a record from a tuple: it translates each tuple value, and then builds the output record \( r \) by concatenating the results according to tuple order. Three rules can be triggered by rule (TUPLE).
First, rule (XMLNODE) translates an XML node into a record with two fields: the first one contains the XML ID, while the second is the text serialization of the XML tree rooted at the node. In turn, rule (ATOMICVALUE) translates an XML atomic value. Finally, rule (COLLVALUE) translates a tuple collection into a single-field record that contains the nested collection of records corresponding to the tuples in the input.
### 5.2 Translating algebraic expressions to PACT
Rules for translating an algebraic expression into a PACT plan are based on judgments of the form \( A \Rightarrow \mathcal{P} \), read: “\( A \) translates into a PACT plan \( \mathcal{P} \)”. All rules are defined recursively over the structure of their input \( A \); for instance, the translation of \( A = \text{sel}_{\rho}(A') \) relies on the PACT plan \( \mathcal{P}' \) resulting from the translation of the smaller expression \( A' \), and so on. The specific behavior of each rule is encoded in the choice of the parallelization contracts (and corresponding keys) and the user functions, so this is what we comment on below.
**Preliminaries.** In the translation, we denote a PACT operator by its parallelization contract \( c \), user function \( f \) and the list \( K \) of key field positions in the PACT input. In particular:
- a unary PACT is of the form \( c_f^K \); if \( K = \emptyset \), for simplicity we omit it and write just \( c_f \);
- a binary PACT is of the form \( c_f^{K_1,K_2} \), where the key of the left input records consists of the fields in \( K_1 \) and that of the right input records of the fields in \( K_2 \).
To keep track of attribute positions through the translation, we use a set of helper functions associating to the variables of a schema $S$ the index positions of the corresponding fields in the PACT records.
Fig. 7. Border operators translation rules.
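Under the same kind of illustrative encoding, the four data model rules can be sketched as a recursive translation; the `XMLNode` class and the concrete field layout below are assumptions made for the example, not the paper's implementation.

```python
# Sketch of the EXDM-tuple -> PACT-record translation rules (illustrative only).
# Records are Python lists of fields; key fields are left unset, as in the rules.

class XMLNode:
    def __init__(self, node_id, serialization):
        self.node_id = node_id              # XML identifier
        self.serialization = serialization  # textual serialization of the subtree

def translate_value(v):
    if isinstance(v, XMLNode):              # rule (XMLNODE): two fields
        return [v.node_id, v.serialization]
    if isinstance(v, list):                 # rule (COLLVALUE): one nested field
        return [[translate_tuple(t) for t in v]]
    return [v]                              # rule (ATOMICVALUE): one field

def translate_tuple(t):
    # rule (TUPLE): concatenate ('+') the translated values in tuple order
    record = []
    for v in t.values():
        record += translate_value(v)
    return record

t = {'p': XMLNode('id7', '<person>Alice</person>'),
     'n': 'Alice',
     'r': [{'a': 'item1'}, {'a': 'item2'}]}
print(translate_tuple(t))
# -> ['id7', '<person>Alice</person>', 'Alice', [['item1'], ['item2']]]
```

The sketch makes visible why an XML-node variable occupies two record fields, while an atomic or collection variable occupies one.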
Fig. 8. Unary operators translation rules (NAVIGATION, GROUP-BY, FLATTEN, SELECTION, PROJECTION, AGGREGATION, DUPELIM).
These helper functions are outlined in Table 1; we use the term $S$-records as a shortcut for records obtained by translating tuples that conform to schema $S$. The implementation details of the helper functions are straightforward.
### 5.2.1 Border operators translation
Figure 7 outlines the translation of border operators. Rule (CONSTRUCTION) translates the logical $\text{construct}_L$ operator into a data sink that uses our output function $\text{xmlwrite}$. For each input record from $\mathcal{P}$, $\text{xmlwrite}$ generates XML content using the list of construction patterns in $L'$ and writes the results to a file.
Rule (SCAN) translates the logical operator $\text{scan}_f$ into a data source built by means of our input function $\text{xmlscan}$. For each XML document in $f$, $\text{xmlscan}$ returns a single-field record holding the content of the document.
### 5.2.2 Unary operators translation
Unary operators are translated by the rules in Figure 8. Rule (NAVIGATION) uses an auxiliary judgment that translates the input ETP $e$ into $e'$ using $S_A$. Navigation is applied over each record independently, and thus we use a PACT with a Map contract. The UF is $\text{nav}$, which generates new records from the (possibly partial) embeddings of $e'$ in each input record.
Rule (GROUP-BY) translates a group-by expression into a PACT with a Reduce contract, as the records need to be partitioned by the value of their grouping fields. The fields in $K$, which form the key used by the Reduce contract, are obtained by appending $G'_v$ to $G'_{id}$. $K$ is also handed to the grouping UF, which creates one record from each input collection of records. The new record contains the values for each field in $K$, and a new field which is the collection of the input records themselves.
Example 2. The following XQuery groups together the people that share interest in the same auctions:
```xquery
let $pc := collection('people')
for $p in $pc/people/person,
    $o in $p/watches/watch/@open_auction
let $n := $p/name
group by $o
return <res><a>{$o}</a>{$n}</res>
```
The XML algebraic expression generated from this query is shown in Figure 9a. Using the judgments in Figure 8, the expression is translated into the PACT plan of Figure 9b.
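The group-by translation can be simulated as follows: records are partitioned by the key fields in $K$, as the Reduce contract would do in parallel, and a grouping UF emits one record per group holding the key values plus the nested collection of input records. The sequential dict-based partitioning below merely stands in for the contract; names are ours.

```python
# Simulation of the group-by Reduce PACT (illustration; the real Reduce
# contract partitions records by key in parallel, not via a dict).
from collections import defaultdict

def grouping_uf(key_positions, group):
    """One output record per group: the key values, plus one field
    nesting the collection of input records themselves."""
    key_values = [group[0][k] for k in key_positions]
    return key_values + [group]

def reduce_contract(records, key_positions, uf):
    groups = defaultdict(list)
    for r in records:
        groups[tuple(r[k] for k in key_positions)].append(r)
    return [uf(key_positions, g) for g in groups.values()]

# Records (name, watched auction); group by field 1:
records = [['Alice', 'a1'], ['Bob', 'a1'], ['Carol', 'a2']]
out = reduce_contract(records, [1], grouping_uf)
print(out)
# -> [['a1', [['Alice', 'a1'], ['Bob', 'a1']]], ['a2', [['Carol', 'a2']]]]
```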
Observe that the grouping variable \$o is translated into field position 5, since (i) two record fields are created for each of the first variables \$pc and \$p (rules (TUPLE) and (XMLNODE) in Figure 6), and (ii) the subsequent two fields correspond to the id-value pair for \$o; the mapping of $S_2$ tuples into PACT records is shown in Figure 10 (the key field is highlighted). The same holds for the encoding of fields used in other figures.
Rule (FLATTEN) translates a flatten expression into a Map PACT that applies the flattening UF $\text{flat}$ on each input record independently. The path $pi$ to the nested collection is obtained from $p$ using $S_A$.
Rule (SELECTION) produces a Map PACT that applies the selection to each record produced by $\mathcal{P}$. Selection is performed by the $\text{sel}$ UF, which uses the filtering condition $\rho'$ obtained from $\rho$ and $S_A$.
Fig. 9. Logical expression (a) and corresponding PACT plan (b) for the query in Example 2.
Fig. 10. Example of tuple representation in PACT.
Fig. 11. Cartesian product and conjunctive equi-join translation rules.
Rule (PROJECTION) translates a projection expression into a PACT using a Map contract. The positions $V'$ of the fields that should be kept by the projection are obtained from $V$ using the schema $S_A$.
The translation of (AGGREGATION) is interesting, as it can use one PACT or another depending on the path $p$ to the variable being aggregated. If the variable is contained in a nested collection, i.e., $p.length \neq 1$, we produce a PACT with a Map contract; for each input record, the $\text{agg}$ UF executes the aggregation operation $a$ over the field pointed by $pi$ and outputs a record with the aggregation results. Otherwise, if the aggregation is executed over the complete input collection, we use a Reduce contract wrapping the input in a single group.
In that case, the UF creates an output record having (i) a field with a nested collection of all input records and (ii) a field with the result of executing the aggregation $a$ over the field pointed by $pi$. Finally, rule (DUPELIM) translates a duplicate elimination expression into a PACT with a Reduce contract. Each group handed to the UF holds the bag of records containing the same values in the fields pointed by $K$; the duplicate elimination UF, denoted \texttt{dupelim}, outputs only one record from the group.
### 5.2.3 Binary operators translation
The rules are depicted in Figure 11; we assume that the inputs $A_1$ and $A_2$ of the algebraic binary operator translate into the PACT plans $P_1$ and $P_2$.
a) Cartesian product. This operator requires the simple concatenation UF \texttt{concat}, taking as input a pair of records and outputting their concatenation: $\texttt{concat}(r_1, r_2) = r_1 + r_2$. Rule (\texttt{CartesianProduct}) translates a cartesian product into a Cross PACT with this UF.
b) Joins with conjunctive equality predicates. This family comprises joins on equality predicates, which can be simple equi-joins or outer joins (without loss of generality, we focus on left outer joins).
b.1) Conjunctive equi-join. The conjunctive equi-join operator is translated by rule ($\land$ \texttt{JOIN}) as follows. First, the predicate $\rho$ over $A_1$ and $A_2$ translates into a predicate $\rho'$ over the records produced by $P_1$ and $P_2$. Then, the lists of fields pointed to by the left-hand, resp. right-hand, side of the condition $\rho'$ are extracted, and finally they are used as the keys of the generated Match PACT.
b.2) Left outer conjunctive equi-join. In rule (LO $\land$ \texttt{JOIN}), the output PACT is a CoGroup whose keys are taken from the fields of the translated join predicate $\rho'$. The CoGroup contract groups the records produced by $P_1$ and $P_2$ sharing the same key.
Then, the \texttt{concat} UF that we describe next is applied over each group to produce the expected result.
\textbf{Definition 1 (left outer \texttt{concat})}: The left outer concatenation UF of two record bags $\{(r_1, \ldots, r_x)\}$ and $\{(r'_1, \ldots, r'_y)\}$ is defined as:
- If $y \neq 0$, the cartesian product of the two bags (pairwise record concatenation).
- Otherwise, $\{(r_1+\perp', \ldots, r_x+\perp')\}$, i.e., concatenate each left input record with a $\perp$-record $\perp'$ having the schema (structure) of the right records.
b.3) Nested left outer conjunctive equi-join. Similarly to the non-nested case, rule (NLO $\land$ \texttt{JOIN}) translates the nested left outer conjunctive equi-join into a CoGroup PACT whose key is extracted from $\rho'$. However, we need a different UF in order to generate the desired right-hand side nested records; we define it below.
\textbf{Definition 2 (nested left outer \texttt{concat})}: The nested left outer concatenation UF of the bags $\{(r_1, \ldots, r_x)\}$ and $\{(r'_1, \ldots, r'_y)\}$ is defined as:
- If $y \neq 0$, $\{(r_1+(r'_1, \ldots, r'_y), \ldots, r_x+(r'_1, \ldots, r'_y))\}$, i.e., nest the right bag as a new field concatenated to each record from the left.
- Otherwise, $\{(r_1+(\perp'), \ldots, r_x+(\perp'))\}$, i.e., add to each left record a field with a list containing a single $\perp$-record conforming to the schema of the right records.
\textbf{Example 3}. The following XQuery extracts the names of users and the items that they bought (if any):
\begin{verbatim}
let $pc := collection('people'),
    $cc := collection('closed_auctions')
for $p in $pc/site/people/person,
    $i in $p/bid
let $n := $p/name
let $r := for $c in $cc/closed_auction,
              $b in $c/buyer/person
          let $a := $c/itemref
          where $i = $b
          return $a
return <res>{$n,$r}</res>
\end{verbatim}
The query translates into the algebraic expression depicted in Figure 12a, while the corresponding PACT plan is shown in Figure 12b.
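The two outer-concatenation UFs of Definitions 1 and 2 can be sketched as follows; a list of `None` fields stands in for the $\perp$-record, and passing the right-hand schema width explicitly is an encoding assumption.

```python
# Sketch of the (nested) left outer concatenation UFs of Definitions 1 and 2.
# A bottom-record is modelled as a list of None fields of the right schema width.

def bottom(width):
    return [None] * width

def outer_concat(left, right, right_width):
    """Definition 1: cartesian product if the right bag is non-empty,
    otherwise pad each left record with a bottom-record."""
    if right:
        return [l + r for l in left for r in right]
    return [l + bottom(right_width) for l in left]

def nested_outer_concat(left, right, right_width):
    """Definition 2: nest the right bag as one new field of each left record;
    if the right bag is empty, nest a singleton list with a bottom-record."""
    nested = right if right else [bottom(right_width)]
    return [l + [nested] for l in left]

print(outer_concat([['Alice']], [], 2))         # -> [['Alice', None, None]]
print(nested_outer_concat([['Alice']], [], 2))  # -> [['Alice', [[None, None]]]]
print(nested_outer_concat([['Alice']], [['a1', 'i1']], 2))
# -> [['Alice', [['a1', 'i1']]]]
```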
Rule (NLO $\land$ \texttt{JOIN}) translates the nested left outer conjunctive equi-join into a PACT with a CoGroup contract that groups together all records having the same values in the fields corresponding to \$i ($K_1$) and \$b ($K_2$), and applies our nested left outer \texttt{concat} UF on them.
\textbf{c) Joins with disjunctive equality predicates}. Translating joins with disjunctive equality predicates is harder. The reason is that PACT contracts are centered around equality of record fields, and are thus inherently not suited to disjunctive semantics. To solve this mismatch, our translation relies on using more than one PACT for each operator, as we explain below. The rules are depicted in Figure 13.
**c.1) Disjunctive equi-join.** In rule $(\lor \text{JOIN}_n)$, the predicate $\rho'$ is generated from $\rho$ using $S_{A_1}$ and $S_{A_2}$. Then, for each conjunctive predicate $\rho'_k$ in $\rho'$, we create a $\text{Match}$ whose keys are the fields participating in $\rho'_k$. Observe that the UFs of these $\text{Match}$ operators should guarantee that no erroneous duplicates are generated when more than one conjunctive predicate $\rho'_i$, $\rho'_j$, $i \neq j$, evaluates to true for a certain pair of records. To that purpose, we define below the new UF $\text{pnjoin}$, parameterized by $k$ and performing a partial negative join.
**Definition 3 ($\text{pnjoin}$):** Let $\rho' = \rho'_1 \lor \rho'_2 \lor \ldots \lor \rho'_n$ and $k$ be an integer, with $0 \leq k < n$. Given two records $r_1$, $r_2$, the $\text{pnjoin}(\rho', k)$ UF evaluates $\rho'_1 \lor \rho'_2 \lor \ldots \lor \rho'_k$ over $r_1$, $r_2$, and outputs $r_1 + r_2$ if the disjunction evaluates to false. Note that the UF thus ensures the correct multiplicity of each record in the result.
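The duplicate-avoidance scheme of rule $(\lor \text{JOIN}_n)$ and Definition 3 can be simulated sequentially: one pass per disjunct, each pass keeping a joined pair only if all earlier disjuncts are false on it. The nested loops below merely stand in for the parallel Match contracts.

```python
# Simulation of a disjunctive equi-join rho'_1 v ... v rho'_n translated into
# n Match passes guarded by pnjoin (Definition 3). Illustrative, sequential code.

def disjunctive_join(left, right, disjuncts):
    """disjuncts[k] = (left_field, right_field) for the equality rho'_{k+1}."""
    out = []
    for k, (lf, rf) in enumerate(disjuncts):
        for l in left:
            for r in right:
                if l[lf] != r[rf]:
                    continue  # the Match contract pairs records on rho'_{k+1} keys
                # pnjoin(rho', k): drop the pair if an earlier disjunct holds, so
                # that a pair satisfying several disjuncts is emitted exactly once
                if any(l[plf] == r[prf] for plf, prf in disjuncts[:k]):
                    continue
                out.append(l + r)
    return out

# $i = $b or $i = $s, as in Example 4 (fields: left = [i], right = [b, s]):
left = [['p1'], ['p2']]
right = [['p1', 'p1'],   # p1 is both buyer and seller: must appear only once
         ['x', 'p2']]
print(disjunctive_join(left, right, [(0, 0), (0, 1)]))
# -> [['p1', 'p1', 'p1'], ['p2', 'x', 'p2']]
```

Without the `pnjoin` guard, the pair matching both disjuncts would be emitted twice, once per pass.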
**Example 4.** The following XQuery extracts the names of users involved in at least one auction, either as buyers or sellers:
```xquery
let $pc := collection('people'),
    $cc := collection('closed_auctions')
for $p in $pc/site/people/person,
    $i in $p/bid,
    $c in $cc/closed_auction,
    $b in $c/buyer/person,
    $s in $c/seller/person
let $n := $p/name
where $i = $b or $i = $s
return <res>{$n}</res>
```
Figure 14a shows the equivalent algebraic expression, while the corresponding PACT plan is shown in Figure 14b. Rule $(\lor \text{JOIN}_n)$ translates the disjunctive equi-join into two PACTs with $\text{Match}$ contracts, one per disjunct. Observe that two distinct values (0 and 1) of $k$ are used in the $\text{pnjoin}$ UFs to prevent spurious duplicates.
**c.2) (Nested) left outer disjunctive equi-join.** The translations of the plain and nested variants of the outer disjunctive equi-join, described by the $(\text{LO} \lor \text{JOIN}_n)$ and $(\text{NLO} \lor \text{JOIN}_n)$ rules respectively, are very similar; as illustrated next, the main difference resides in the post-processing operations they adopt. The translation of these two operators is challenging for two reasons:
1. We must avoid the generation of duplicate records; to this end, we adopt a non-trivial variation of the technique used above for the disjunctive equi-join.
2. We must recognize the records generated by the left-hand side expression which do not join with any record coming from the right-hand side expression.
We use the XML node identifiers in each left-hand side record to identify it uniquely. After the parallel evaluation of each conjunction, a Reduce post-processing PACT groups all resulting combinations having the same left-hand side record. If no such combination exists, the left-hand side record representing a group is concatenated to a (nested) \(\perp\)-record conforming to the right input schema, and the resulting record is output; otherwise, the output record(s) are generated from the combinations.
In the first step, we must evaluate in parallel the joins related to the predicates \(\rho'_k\). A PACT with a CoGroup contract is built for each conjunctive predicate \(\rho'_k\). Each such PACT groups together all records that share the same value in the fields pointed by \(\rho'_k\), then applies the \(\text{nopnjoin}\) UF (see below) on each group, with the goal of avoiding erroneous duplicates in the result; this UF is more complex than \(\text{pnjoin}\), as it has to handle both the disjunction and the nesting. \(\text{nopnjoin}\) is parameterized by \(k\), as we use it once for each conjunction \(\rho'_k\). Furthermore, \(\text{nopnjoin}\) takes as input two bags of records and is defined as follows, along the lines of \(\text{pnjoin}\).
**Definition 4 (\(\text{nopnjoin}\)):** Let \(\rho' = \rho'_1 \lor \rho'_2 \lor \ldots \lor \rho'_n\) be a predicate where each \(\rho'_k\) is conjunctive. Given two input bags \(\{(r_1, \ldots, r_x)\}\) and \(\{(r'_1, \ldots, r'_y)\}\), the \(\text{nopnjoin}(\rho', k)\) UF is defined as follows:
- If the second input is empty \((y = 0)\), return \(\{(r_1+(\perp'), \ldots, r_x+(\perp'))\}\), i.e., concatenate every left input record with a field containing a nested list of one \(\perp\)-record conforming to the schema of the right input.
- Otherwise, for each left input record \(r_i\): 1. create an empty list \(c_i\); 2.
for each \(r'_j\), \(1 \leq j \leq y\), evaluate \(\rho'_1 \lor \rho'_2 \lor \ldots \lor \rho'_k\) over \(r_i\) and \(r'_j\), and add \(r'_j\) to \(c_i\) if the result is false; 3. if \(c_i\) is empty, then insert into \(c_i\) a \(\perp\)-record with the schema of the right input; 4. output \(r_i\) concatenated with a new field whose value is \(c_i\).
The second PACT produced by the \((\text{LO} \lor \text{JOIN})\) and \((\text{NLO} \lor \text{JOIN})\) rules uses a Reduce contract taking as input the outputs of all the CoGroup operators; its key consists of the XML node identifiers found in each left-hand side record (extracted from the schema). This amounts to grouping together the records originating from the same left input record. Depending on the join flavor, this last PACT uses a different UF: for the plain (non-nested) join \((\text{LO} \lor \text{JOIN})\), the post-processing UF produces records with an unnested right-hand side, while for the nested join \((\text{NLO} \lor \text{JOIN})\) it produces nested records. Due to space constraints, we omit the definitions of these UFs here and delegate their details to Appendix C.
**Example 1 (continuation).** Our algorithms translate the algebraic expression shown in Figure 5 into the PACT plan depicted in Figure 15; observe that it is the same PACT plan that was shown in less detail in Figure 1. Rule \((\text{NLO} \lor \text{JOIN})\) translates the nested left outer disjunctive equi-join into (i) two PACTs with CoGroup contracts, one for each disjunct, and (ii) a PACT with a Reduce contract that groups together records originating from the same left-hand side record, i.e., \(K\) holds field positions \#0, \#2, \#4, which contain the XML node identifiers of \$pc, \$p, \$i, respectively.
**d) Joins on inequalities.** Our XQuery subset also supports joins with inequality conditions.
In this case, the translation uses Cross contracts. Further, just like for joins with disjunctive predicates, the non-nested and nested outer variants of the \(\theta\)-join require more than one PACT. The corresponding translation rules can be found in Appendix D.
**Syntactically complex translation vs. performance.** Clearly, complex joins such as those considered in c) could be translated into a single Cross PACT over the pairs of records, as in d). However, this would be less efficient and would scale poorly (the number of comparisons is quadratic in the input size), as our experiments will demonstrate.
### 6 EXPERIMENTAL EVALUATION
We implemented our PAXQuery translation approach in Java 1.6, and relied on the Stratosphere platform [47] supporting PACT. We first describe the experimental setup, and then present our results.
**Experimental setup.** The experiments were run in a cluster of 8 nodes connected by 1Gb Ethernet. Each node has 2 \(\times\) 2.93GHz Quad Core Xeon CPUs, 16GB RAM and two 600GB SATA hard disks, and runs Linux CentOS 6.4. PAXQuery is built on top of Stratosphere 0.2.1; it stores the XML data in HDFS 1.1.2. The reported results are averaged over three runs.
**XML data.** We used XMark [45] data; to study queries joining several documents, we used the split option of the XMark generator to create four collections of XML documents, each containing a specific type of XMark subtrees: users (10% of
TABLE 2 Query details.
| Query | Collections | Algebra operators (#) | Parallelization contracts (#) |
|---|---|---|---|
| $q_1$–$q_4$ | users | Navigation (1) | Map (1) |
| $q_5$, $q_6$ | closed auc. | Navigation (1) | Map (1) |
| $q_7$ | users | Navigation (1) | Map (1) |
| $q_8$ | closed auc. | Navigation (1) | Map (2), Reduce (1) |
| $q_9$ | items | Navigation (2), Aggregation (2) | Map (6), Reduce (1), Match (1) |
| $q_{10}$ | users, closed auc. | Navigation (3), Projection (2), NLO conj. equi-join (2) | Map (5), CoGroup (2) |
| $q_{11}$ | users | Projection (1), Dup. elim. (1), NLO conj. equi-join (1) | Reduce (1), CoGroup (1) |
| $q_{12}$ | users, closed auc. | Navigation (2), Projection (1), NLO conj. equi-join/aggregation (1) | Map (3), CoGroup (1) |
| $q_{13}$ | users, closed auc. | Navigation (2), Projection (1), NLO disj. equi-join (1) | Map (3), Reduce (2), CoGroup (2) |
| $q_{14}$ | users, open auc. | Projection (1), NLO theta-join (1) | Map (5), Reduce (2), Cross (1) |

the dataset size), items (50%), open auctions (25%) and closed auctions (15%). All documents are simply stored in HDFS (which replicates them three times); that is, we do not control the distribution/allocation of documents over the nodes.
**XML queries.** We used a subset of XMark queries fitting our XQuery fragment, and added queries with features supported by our dialect but absent from the original XMark, e.g. joins on disjunctive predicates; all queries are detailed in Appendix E.
Table 2 outlines the queries: the collection(s) that each query ranges over, the corresponding algebraic operators with their numbers of occurrences, and the parallelization contracts used in the plan generated by our translation framework. Queries $q_9$–$q_{14}$ all involve value joins, which range over thousands of documents arbitrarily distributed across the HDFS nodes.
### 6.1 PAXQuery scalability
Our first goal is to check that PAXQuery brings to XQuery evaluation the desired benefits of implicit parallelism. For this, we fixed a set of queries, generated 11,000 documents (34GB) per node, and varied the number of nodes from 1 to 2, 4 and 8; the total dataset size increases accordingly in a linear fashion, up to 272GB. Figure 16 shows the response times for each query.
Queries $q_1$–$q_6$ navigate in the input documents according to a given navigation pattern of 5 to 14 nodes; each translates into a Map PACT, thus their response time follows the size of the input. These queries scale up well; we see only a moderate overhead in Figure 16 as the data volume and the number of nodes increase.
Queries $q_7$ and $q_8$ apply an aggregation over all the records generated by a navigation. For both queries, the navigation generates nested records and the aggregation consists of two steps. The first step goes over the nested fields of each input record, and thus uses a Map contract. The second step is executed over the results of the first; therefore, it uses a Reduce contract that groups together all records coming from the previous operator. Since the running time is dominated by the Map PACTs, which parallelize very well, $q_7$ and $q_8$ also scale up well.
Queries $q_9$–$q_{12}$ involve conjunctive equi-joins over the collections. Query $q_{13}$ executes a NLO disjunctive equi-join, while $q_{14}$ applies a NLO theta-join. We notice a very good scale-up for $q_9$–$q_{13}$, whose joins are translated into several PACTs (recall the rules in Figure 13).
In contrast, $q_{14}$, which translates into a Cross PACT, scales noticeably less well. This validates the interest of translating disjunctive equi-joins into several PACTs (as our rules do), rather than into a single Cross which, despite parallelization, fundamentally does not scale.
### 6.2 Comparison against other processors
To evaluate the performance of our processor against existing alternatives, we started by comparing it on a single node with other popular centralized XQuery processors. The purpose is to validate our choice of an XML algebra as outlined in Section 4.2 as input to our translation, by demonstrating that single-site query evaluation based on such an algebra is efficient. For this, we compare our processor with BaseX 7.7 [8], Saxon-PE 9.4 [44] and Qizx/open 4.1 [41] on a dataset of 11,000 XML documents (34GB). Table 3 shows the response times for each query and processor; the shortest time is shown in bold, while OOM stands for out of memory and TO for timeout (above 2 hours).
In Table 3, we identify two query groups. First, $q_1$–$q_8$ do not feature joins; while the performance varies across systems, only BaseX and PAXQuery are able to run all these queries. PAXQuery outperforms the other systems because, being compiled into PACT, it is able to exploit the multicore architecture. In the second group, queries $q_9$–$q_{14}$ join across the documents. None of the competing XQuery processors completes their evaluation, while PAXQuery executes them quite fast; here, the use of outer joins and multicore parallelization is key to the good performance.
We next compare our system with other alternatives for implicitly parallel evaluation of XQuery. As explained in the Introduction, no comparable system is available yet. Therefore, for our comparison, we picked the BaseX centralized system (the best performing in the experiment above) and used Hadoop-MapReduce on one hand, and Stratosphere-PACT on the other hand, to parallelize its execution.
We compare PAXQuery, relying on the XML algebra-to-PACT translation we described, with the following alternative architecture. We deployed BaseX on each node, and parallelized XQuery execution as follows:
1) Manually decompose each query into a set of leaf subqueries performing just tree pattern navigation, followed by a recomposition subquery which applies (possibly nested, outer) joins over the results of the leaf subqueries;
2) Parallelize the evaluation of the leaf subqueries through one Map over all the documents, followed by one Reduce to union all the results. Moreover, if the recomposition query is not empty, start a new MapReduce job running the recomposition XQuery over all the results thus obtained, in order to compute complete query results.
This alternative architecture is in-between ChuQL [29], where the query writer explicitly controls the choice of Map and Reduce keys, i.e., MapReduce is visible at the query level, and PAXQuery, where parallelism is completely hidden. In this architecture, $q_1$–$q_8$ translate into one Map and one Reduce, whereas $q_9$–$q_{14}$ feature joins, which translate into a recomposition query and thus a second job.
Table 4 shows the response times when running the queries on the 8 nodes and 272GB; the shortest time is in bold. First, we notice that BaseX runs 2 to 5 times faster on Stratosphere than on Hadoop. This is because Hadoop checkpoints (writes intermediary results to disk) while Stratosphere currently does not, trading reliability for speed. For the queries without joins ($q_1$–$q_8$), PAXQuery is faster than BaseX on Hadoop or Stratosphere for most queries; this simply points out that our in-house tree pattern matching operator (the physical implementation of $nav$) is more efficient than that of BaseX. Queries with joins ($q_9$–$q_{14}$) fail in the competitor architecture again: the intermediary join results grow too large, which leads to an out-of-memory error.
PAXQuery evaluates such queries well, based on its massively parallel (outer) joins.
### 6.3 Conclusions of the experiments
Our experiments demonstrate the efficiency of an XQuery processor built on top of PACT. First, our scalability evaluation has shown that the translation to PACT allows PAXQuery to parallelize every query execution step with no effort required to partition or redistribute data, and thus to scale out with the number of machines in a cluster. The only case where the scale-up was not so good is $q_{14}$, where we used a Cross (cartesian product) to translate an inequality join; an orthogonal optimization here would be to use a smarter dedicated join operator for such predicates, e.g. [38].
Second, we have shown that PAXQuery outperforms competitor XQuery processors, whether centralized or distributed over Hadoop and Stratosphere. None of the competing processors was able to evaluate any of our queries with joins across documents on the data volumes we considered, highlighting the need for efficient parallel platforms for evaluating such queries.
### 7 RELATED WORK
**Massively parallel XML query processing.** In this area, MRQL [23] proposes a simple SQL-like XML query language implemented through a few operators directly compilable into MapReduce. Like our XQuery fragment, MRQL queries may be nested; however, its dialect does not allow expressing the rich join flavours that we use. Further, the XML navigation supported by MRQL is limited to XPath, in contrast to our richer navigation based on tree patterns with multiple returning nodes, and nested and optional edges. ChuQL [29] is an XQuery extension that exposes the MapReduce framework to the developer in order to distribute computations among XQuery engines; this leaves the parallelization work to the programmer, in contrast with our implicitly parallel approach, which does not expose the underlying parallelism at the query level.
HadoopXML [19] and the recent [13] process XML queries in Hadoop clusters by explicitly fragmenting the input data in a schema-driven, respectively, query-driven way, which is effective when querying one single huge document. In contrast, we focus on the frequent situation when no single document is too large for one node, but there are many documents whose global size is high, and queries may both navigate and join over them. Further, we do not require any partitioning work from the application level. After the wide acceptance of Hadoop, other parallel execution engines and programming abstractions conceived to run custom data-intensive tasks over large data sets have been proposed: PACT [9], Dryad [27], Hyracks [16] or Spark [51]. Among these, the only effort at parallelizing XQuery is the ongoing VXQuery project [5], translating XQuery into the Algebricks algebra, which compiles into parallel plans executable by Hyracks. In contrast, PAXQuery translates into an implicitly parallel logical model, namely PACT. Thus, our algorithms do not need to address underlying parallelization issues such as data redistribution between computation steps, which [16] explicitly mentions. XQuery processing in centralized settings has been thoroughly studied, in particular through algebras in [21], [34], [35], [42]. In this work, our focus is on extending the benefits of implicit large-scale parallelism to a complex XML algebra, by formalizing its translation into the implicitly parallel PACT paradigm. As shown by our experiments, even on top of the Hadoop/Stratosphere-based architectures used in the experimental comparison with PAXQuery, existing XML processors [8], [41], [44] cannot scale in the presence of joins across multiple documents of large collections. Our algebra-based approach, instead, delegates much more work to the Stratosphere system than the distributed solution proposed in Section 6, where joins remain internal to the XQuery engine.
XML data management has also been studied from many other angles, e.g., on top of column stores [15], distributed with [30] or without [1] an explicit fragmentation specification, in P2P [31] etc. We focus on XQuery evaluation through the massively parallel PACT framework, which leads to specific translation difficulties we addressed.

**Parallelizable nested languages.** Recently, many high-level languages which translate into massively parallel frameworks have been proposed; some of them work with nested data and/or feature nesting in the language, and thus somewhat resemble XQuery. While PAXQuery's implementation is specific to XQuery, the concepts shown in this work are applicable to these other languages. Jaql [12] is a scripting language tailored to JSON data, which translates into MapReduce programs; Meteor [26], also for JSON, translates into PACT. Neither of these languages handles XQuery semantics exactly, since JSON does not feature node identity; the languages are also more limited, e.g., Jaql only supports equi-joins. The Asterix Query Language [11], or AQL in short, is based on FLWOR expressions and resembles XQuery, but ignores node identity, which is important in XQuery and which we support. Like VXQuery, AQL queries are translated into Algebricks; recall that unlike our translation, its compilation to the underlying Hyracks engine needs to deal with parallelization-related issues. Finally, other higher-level languages that support nested data models and translate into parallel processing paradigms include Pig [39] or Hive [48]. Our XQuery fragment is more expressive, in particular supporting more types of joins. In addition, Pig only allows two levels of nesting in queries, which is a limitation. In contrast, we translate XQuery into unnested algebraic plans with (possibly nested, possibly outer) joins and grouping, which we parallelize, leading to efficient execution even for (originally) nested queries.

**Complex operations using implicit parallel models.**
The problem of evaluating complex operations through implicit parallelism is of independent interest. For instance, the execution of join operations using MapReduce has been studied extensively. Shortly after the first formal proposal to compute equi-joins on MapReduce [50], other studies extending it [14], [28] or focusing on the processing of specific join types, such as multi-way joins [2], set-similarity joins [49], or θ-joins [38], appeared. PAXQuery is the first to translate a large family of joins (which can be used outside XQuery) into the more flexible PACT parallel framework.

### 8 CONCLUSION AND FUTURE WORK

We have presented the PAXQuery approach for the implicit parallelization of XQuery, through the translation of an XQuery algebraic plan into a PACT parallel plan. We targeted a rich subset of XQuery 3.0, including recent additions such as explicit grouping, and demonstrated the efficiency and scalability of PAXQuery with experiments on collections of hundreds of GBs. For future work, we contemplate the integration of indexing techniques into PAXQuery to improve query evaluation time. Further, we would like to explore the reuse of intermediary results in the PACT framework to enable efficient multiple-query processing.

**Acknowledgements.** This work has been partially funded by the KIC EIT ICT Labs activity 12115. We would like to thank Kostas Tzoumas and the anonymous reviewers for their valuable comments and suggestions to improve the quality of this work.

This is the author's version of an article that has been published in this journal. Changes were made to this version by the publisher prior to publication. The final version of record is available at http://dx.doi.org/10.1109/TKDE.2015.2391110

Jesús Camacho-Rodríguez is a member of the technical staff at Hortonworks.
He obtained his PhD degree from Université Paris-Sud and Inria in September 2014, and a CS Engineering degree from the University of Almería in 2009. From 2009 to 2011, he was a research engineer at Inria, where his work focused on XML data management in P2P systems. His research interests include parallel and distributed query processing, query optimization, and efficient large-scale data management.

Dario Colazzo graduated from the University of Pisa and received his PhD from the same university in 2004. After his PhD, Dario was a research visitor at Ecole Normale Supérieure in Paris, and a post-doc, first at the University of Venezia and then at Université Paris-Sud, where he became an associate professor in 2005. Since 2013 he has been a full professor at Université Paris-Dauphine. His research activities focus on the safe and efficient management of semi-structured data.

Ioana Manolescu received her PhD from Inria and Université de Versailles Saint-Quentin in 2001, after graduating from Ecole Normale Supérieure in Paris. Ioana was a post-doc at Politecnico di Milano, Italy, then joined Inria, where she is now a senior researcher and the lead of the OAK team, specialized in database optimization techniques for complex, large data. Her research interests include algebraic optimizations, adaptive storage and efficient management of semantically rich data.
Bridging User and Developer Communities via Online Platform

Ruairí Kell
B.A. (Mod.) Business and Computing
Final Year Project
April 2014

Supervisor: Dave Lewis
School of Computer Science and Statistics
O’Reilly Institute, Trinity College, Dublin 2, Ireland

Acknowledgements

Firstly I wish to acknowledge and thank my supervisor for this project, Dave Lewis. Without his support, encouragement and unending helpfulness I would not have been able to complete this project. I would like to thank all my friends and family who have supported me, prayed for me, and shown me tremendous support for this entire year, and throughout my time in college. A special thanks must be given to Suzy Percy, who has loved and encouraged me endlessly through every high and low in this project. I would not have managed to do this project without her support. Finally, and most importantly, I wish to thank God, ever present and always looking out for me over the course of the project, and the reason why I do everything to the standard that I have achieved.
DECLARATION

I hereby declare that this project is entirely my own work and that it has not been submitted as an exercise for a degree at this or any other university

______________________________ ________________________
Name Date

# Table of Contents

<table>
<thead>
<tr>
<th>Chapter</th>
<th>Page Number</th>
</tr>
</thead>
<tbody>
<tr>
<td><strong>Chapter 1 - Introduction</strong></td>
<td></td>
</tr>
<tr>
<td>1.1 Introduction</td>
<td>4</td>
</tr>
<tr>
<td>1.2 Reader's Guide</td>
<td>5</td>
</tr>
<tr>
<td><strong>Chapter 2 - Background &amp; Requirements</strong></td>
<td></td>
</tr>
<tr>
<td>2.1 Project Background</td>
<td>6</td>
</tr>
<tr>
<td>2.2 Background Research</td>
<td>7</td>
</tr>
<tr>
<td>2.3 Requirements</td>
<td>10</td>
</tr>
<tr>
<td><strong>Chapter 3 - Design</strong></td>
<td></td>
</tr>
<tr>
<td>3.1 Shaping the Website</td>
<td>15</td>
</tr>
<tr>
<td>3.2 Shaping the Back-end</td>
<td>17</td>
</tr>
<tr>
<td><strong>Chapter 4 - Implementation: Part 1</strong></td>
<td></td>
</tr>
<tr>
<td>4.1 Wordpress</td>
<td>19</td>
</tr>
<tr>
<td>4.2 Languages</td>
<td>21</td>
</tr>
<tr>
<td>4.3 Plug-ins</td>
<td>22</td>
</tr>
<tr>
<td>4.4 Wordpress Features</td>
<td>23</td>
</tr>
<tr>
<td>4.5 Fulfilling Requirements</td>
<td>24</td>
</tr>
<tr>
<td><strong>Chapter 5 - Implementation: Part 2</strong></td>
<td></td>
</tr>
<tr>
<td>5.1 Front End Full Implementation</td>
<td>25</td>
</tr>
<tr>
<td>5.2 Back End Full Implementation</td>
<td>33</td>
</tr>
<tr>
<td><strong>Chapter 6 - Evaluation</strong></td>
<td></td>
</tr>
<tr>
<td>6.1 Agile Testing</td>
<td>36</td>
</tr>
<tr>
<td>6.2 Full Site Testing</td>
<td>37</td>
</tr>
<tr>
<td>6.3 Problems Encountered</td>
<td>38</td>
</tr>
<tr>
<td><strong>Chapter 7 - Conclusion: Part 1 - Critique</strong></td>
<td></td>
</tr>
<tr>
<td>7.1 Fulfilling Project Aims</td>
<td>40</td>
</tr>
<tr>
<td>7.2 Critique</td>
<td>41</td>
</tr>
<tr>
<td>7.3 Personal Development</td>
<td></td>
</tr>
<tr>
<td><strong>Chapter 8 - Conclusion: Part 2 - Future Work</strong></td>
<td></td>
</tr>
<tr>
<td>8.1 What Could Be Improved?</td>
<td>43</td>
</tr>
<tr>
<td>8.2 What Could Be Done Differently?</td>
<td>43</td>
</tr>
<tr>
<td>8.3 What Scope Is There For Future Developments?</td>
<td>44</td>
</tr>
<tr>
<td><strong>Bibliography</strong></td>
<td></td>
</tr>
<tr>
<td>References</td>
<td>45</td>
</tr>
<tr>
<td>List of Websites</td>
<td>45</td>
</tr>
<tr>
<td><strong>Appendices</strong></td>
<td></td>
</tr>
<tr>
<td>1: Original Idea</td>
<td>47</td>
</tr>
<tr>
<td>2: Original Draft Requirements Document</td>
<td>47</td>
</tr>
<tr>
<td>3: Complete Terms and Conditions from Site</td>
<td>50</td>
</tr>
<tr>
<td>4: Screenshots</td>
<td>53</td>
</tr>
<tr>
<td>5: List of Users and Passwords for Access</td>
<td>54</td>
</tr>
</tbody>
</table>

Chapter 1 - Introduction

1.1 Introduction

1.1.1 Distinctions

In order to begin the discussion of this project, *Bridging User and Developer Communities via Online Platform*, a brief description must first be offered of what this project is about. Therefore, a distinction should be made as to what is meant by *User* and *Developer*. For this project, a User is somebody who uses code and software etc. online or locally, and can have some or no experience with developing code (i.e. they are not a regular programmer, developer, or software engineer). A Developer is someone who has spent a lot of time using, creating, editing and updating code and software, and as such is very experienced in that area.

1.1.2 Brief Description

The idea of bridging the User and Developer communities is to allow them to have better communication than is currently in place. The concept is simple - to provide that platform for them. To this end, a Wordpress online platform has been used to build a website, allowing Developers to upload their code online, create discussion around this code and (unlike other sites online) allow easy access for Users, without needing to entirely integrate and immerse Users in a Developer-centric site. The gap in existing systems here is simply this: there are no sites that cater for User discussion, queries, and information that comes directly from Developers who not only use the site, but developed the content that exists upon it.
Users are able to comment and discuss with little to no restrictions or obligations to the Developer community, allowing them to interact only as much as they want or need to. The original plan for the project was to focus specifically on a plug-in (or plug-ins) that would facilitate this *(see Appendix 1)*; however, the focus then shifted to a self-contained model that would allow for creation, uploading, and discussion all in one place, and that would still cater to both Users and Developers (there is nonetheless still quite a large use of plug-ins on this website, in order to aid the project's aims as a whole, as well as for numerous other reasons). As such, the website (called ConnectIt) can be found at http://ruairikell.com/ and can be used for all that is mentioned above.

1.2 Reader's Guide

It is recommended when reading this report that the ConnectIt website is kept open for reference. In order to make use of any of the website's features that require logging in, it is requested that one of the Usernames and Passwords found in Appendix 5 is used. This report shall read as follows:

- **Chapter 2** will discuss the background and requirements for the project, including motivations, research and aims.
- **Chapter 3** will discuss the way in which the project has been designed, tackling both the front-end and the back-end.
- **Chapter 4** will discuss the primary implementation, involving setting up the project, the specific aspects to be included and the languages used.
- **Chapter 5** will discuss the full implementation, including the coding completed and the steps taken to achieve the aims of the project.
- **Chapter 6** will discuss how the project was evaluated, in terms of testing and the problems encountered.
- **Chapter 7** will discuss a critical conclusion to the project, what was achieved, and personal development.
- **Chapter 8** will discuss how the work could be improved, and what scope there is for future development.
Chapter 2 - Background & Requirements

2.1 Project Background

2.1.1 Primary Motivations

There were a lot of motivations behind this project and the ideas that it embraces. First and foremost was the fact that there was no cohesive, all-encompassing space online that allowed for communication between Developers while also allowing easy and simple access for those outside of the Developer community (i.e. Users). To put this in clearer terms, Users would have had to create an account for a site aimed specifically at Developers, and engage with a Developer-only community, in order to get responses to their simple questions and queries (please see the Background Research in section 2.2 below for further explanation of these sites). Also, Users may be dissuaded by a Developer-centric forum, as it could appear quite intimidating should they not be technically minded. Not only this, but by going to a site that is not the place where they encountered whatever problem they have, they also move away from the community on that site, who may have been using the same software. All of this has created a virtual rift between two communities, allowing Developers to thrive amongst their own - other developers - while restricting Users from gaining access to this hub, and as such limiting their discussions (however unintentional this may have been). From this, it was felt that there was a problem that needed to be addressed, and that the solution to this was to build some kind of bridge between the two distinct communities. As will be seen, the solution ended up being achieved through a simpler concept than it first appeared was needed.

2.1.2 Secondary Motivations

A secondary motivation for the project had to do with the idea of doing a final year project itself, and personal motivations. As a student of Business and Computing, I have a unique perspective on approaching projects (with skills from both areas crossing over).
However, this also brings some limits in terms of the amount of programming and coding experience I have had. Taking both of these aspects into account, as well as the primary motivations mentioned previously, it was felt that a web-based project would be perfect (even though it involved learning new languages and technology), and also that it would serve a business-type purpose - in that it would exist to fill a gap in the market and cater to those consumers' needs.

2.2 Background Research

In order to prepare for this project, and as a result of the motivations, research had to be conducted into what options were already out there. Not only was this to find where the project would have its particular niche market, but also to see what similarities could be drawn from, changed and adapted in order to aid it.

2.2.1 Developer Research

Research began by looking at some sites that offered discussion for Developers. The three main sites that were looked at for this section were GitHub, Stack Overflow, and Sourceforge. There were also some plug-ins that focused more specifically on discussions, and these were included in this research too: Get Satisfaction, UserEcho, and Uservoice.

<table>
<thead>
<tr>
<th></th>
<th></th>
<th></th>
<th></th>
<th></th>
<th></th>
</tr>
</thead>
<tbody>
<tr>
<td>GitHub</td>
<td>Yes</td>
<td>Yes</td>
<td>Yes</td>
<td>No</td>
<td>No</td>
</tr>
<tr>
<td>Stack Overflow</td>
<td>Yes</td>
<td>No</td>
<td>Yes</td>
<td>No</td>
<td>Yes</td>
</tr>
<tr>
<td>Sourceforge</td>
<td>No</td>
<td>Yes</td>
<td>Yes</td>
<td>No</td>
<td>No</td>
</tr>
<tr>
<td>Get Satisfaction</td>
<td>No</td>
<td>No</td>
<td>Yes</td>
<td>Yes</td>
<td>Yes</td>
</tr>
<tr>
<td>UserEcho</td>
<td>No</td>
<td>No</td>
<td>Yes</td>
<td>Yes</td>
<td>Yes</td>
</tr>
<tr>
<td>Uservoice</td>
<td>No</td>
<td>No</td>
<td>Yes</td>
<td>Yes</td>
<td>Yes</td>
</tr>
</tbody>
</table>

The table uses the term "acceptable" for each column.
What is meant by this is the question: is this feature at a level that allows for ease of use, and is the standard of that feature of an appropriate level to fulfil the related requirement for this project? To explain the above table further, GitHub is perfect for Developers, as it allows for easy upload, storage and access of code repositories, as well as providing a commenting feature available for other Developers on the site. It does not, however, make it easy for a User to ask questions on code they may have found on this site (perhaps through a third-party search engine), as they would need to break into that community by creating an account for a site they might never use again, which pushes them away. The same problem can be seen with Stack Overflow. On this site, there is slightly better access (e.g. login with Facebook accessibility); however, the front page clearly states that it is only for programmers - "Stack Overflow is a question and answer site for professional and enthusiast programmers." - pushing away those who are just using the code. Added to this, the site does not have a specific upload feature, but rather relies on the copying/pasting of certain lines of code. This can push aside those who wish to showcase the entirety of their project or code. Sourceforge is a slightly different type of site. Rather than code being available, it is software - entire projects. It is open source, and easy for Users to access, but in order to create discussion around the software, one must create an account, again for a site they may never use again. This appeared to be a running theme, and a problem that needed to be solved. The three discussion sites/plug-ins turned up trumps in all three discussion-related columns in the table. However, because they are not integrated with any particular site by default, they lack the other features.
Added to this is the fact that they all require payment to unlock the majority of their potential (even up to $1200 per month for a professional package), and they are for organising a community that sits somewhere on top of where the code/software would be, but not integrated directly into it.

2.2.2 User Research

As previously mentioned in 2.2.1, discussion for Users was not happening with full integration on any site researched. However, it was still necessary to research some secondary aspects (i.e. not full requirements), and one which was prevalent in my mind was new Users wanting to learn how to code through being gradually exposed to more and more code. While ConnectIt would be a forum/space that allowed for discussion rather than teaching, there would be an aim that those who were exposed to this would eventually use it as a resource. Therefore, it was important to look at other resources for learning coding. The first site to be examined was Codecademy. This site offers free, interactive learning for those wanting to create code and learn how to program. For this site there is a strong emphasis on community ("Join the Community" being one of their three core principles); however, it appears to be aimed more at connecting with friends and groups, rather than a generic discussion forum or thread. The unfortunate problem with this is that the more experienced Developers would not be in those groups, and as such the Users would not be getting the same level of help. The second site to be examined was CoderDojo. This site does not offer online learning, but instead directs a User to their closest "dojo", where they can get help learning how to code. This is included here because, despite being a physical presence - as opposed to a virtual presence - it does offer discussion with more experienced Developers, as well as with others learning to code. The only downside to this is that the pool of resources (i.e.
Developers) does not stretch as far as an online/virtual pool potentially could. The final site to look at was W3schools. They offer a lot of tutorials, for all kinds of web development languages. The site also offers forums for discussion, which are very well used; however, they are on a completely separate part of the website from the tutorials, and this may be off-putting to some Users who are only learning to code. As with the other two sites above, registration is also needed to facilitate discussion. While this is only a small sample of code-learning sites, it shows the common themes that run across all similar sites. The lack of integration and ease of use, as well as the other downsides mentioned above, are issues that can hopefully be addressed by ConnectIt in the long run.

2.3 Requirements

Following from the research that was done, requirements needed to be drafted. This meant deciding what features the project should contain, what issues it should address, and how the project should function overall. Initially, a requirements document was formed (along with UML diagrams) that reflected what was felt would be the final direction of the project. However, due to the changing nature of this type of project, a completely revised requirements document had to be created [please see Appendix 2 for the original draft Requirements Document]. The first step was to build UML Use Case diagrams in order to begin to outline exactly who would require what.

2.3.1 UML Use Case

![ConnectIt System UML Diagram]

2.3.2 Introduction

To begin this project, a decision had to be made on what methodology would be best to use, and for this project the choice came down to either using a "waterfall" style approach or using an "Agile" style approach (Base36, 2012; Mikoluk, 2013). While it is true that a choice must be made in most cases, it is entirely possible to use aspects from both of these styles.
A lot of Agile aspects will be in place, as numerous iterations of the process will be used in order to further the development of the project. However, there will still be more of a focus on one particular aspect each time around, following a waterfall approach. For example, the project will have five phases, with phase one being Requirements, phase two being Design, phase three being Implementation/Development (coding and testing), phase four being Evaluation, and phase five being Launch (final testing and "okaying" the project). This follows a *waterfall* approach. However, during each of these stages each of the other aspects would also be incorporated, e.g. building and testing right through phases one, two, three, and four. This follows an *Agile* approach of testing as the project develops through iterations.

### 2.3.3 Functionality

This section addresses what exactly the end product should be able to do. The end product must by definition be accessible to both Developers and Users. As can be seen from the UML Use Case diagram (2.3.1), this means that it must contain the following:

1. Discussion functionality for both User and Developer
2. Ability for Developer to upload their content
3. Ability for Developer to create a post around their content
4. Ability for Developer to edit said post
5. Login capabilities for Developers
6. Linking capabilities for Developers
7. Moderator capabilities to edit any/all posts or discussions
8. Moderator access to manage the database
9. Moderator access to manage emails

An extra addition to these requirements taken from the UML would be what was discussed in 2.2.2 - regarding the entire site, over the course of time, gradually exposing Users to more and more code, helping them to learn to code too. This is not a primary requirement per se, but is a natural progression resulting from the nine points above.
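One way to make the nine functionality requirements concrete is as a role/capability map. The sketch below is purely illustrative - none of these names come from the actual ConnectIt implementation, which relies on Wordpress roles rather than custom code.

```python
# Illustrative mapping of the nine requirements to the three roles
# (User, Developer, Moderator); all names are hypothetical.
CAPABILITIES = {
    "User": {"discuss"},
    "Developer": {"discuss", "upload_content", "create_post",
                  "edit_own_post", "login", "link_content"},
    "Moderator": {"discuss", "edit_any_post", "manage_database",
                  "manage_emails"},
}

def can(role, action):
    """Return True if the given role holds the given capability."""
    return action in CAPABILITIES.get(role, set())

print(can("User", "discuss"))             # True
print(can("User", "create_post"))         # False
print(can("Moderator", "edit_any_post"))  # True
```

A map like this also makes the asymmetry of the design visible: Users need nothing beyond discussion, which is exactly the low barrier to entry the project aims for.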
2.3.4 Software/Hardware

In order to develop the project properly, it had to be decided what would be used to build it. In terms of hardware, nothing is needed other than a laptop. For software, there is a little to be discussed:

- Using Notepad++/online editors to write any code.
  - This allows for simplicity in writing all code without any unnecessary features.
  - An IDE (Integrated Development Environment) such as Netbeans was considered; however, the goal was to keep everything as simple as possible, and due to the tools provided within Wordpress and the web hosting control panel, the simplest option was seen to be the best (Occam's Razor).
- Plan to develop as a Wordpress site, minimizing the amount of coding necessary.
  - Wordpress has a clear plug-in architecture that is of utmost importance to this project.
  - Wordpress is open source, and numerous web-based languages can be used inside and in conjunction with it.
  - Wordpress is far more widely used than Drupal or Joomla (Website Set-Up, 2014).
- Most development should take place using PHP, HTML, and some JavaScript.
- A liberal use of Wordpress plug-ins is necessary for:
  - Allowing PHP/JavaScript functionality.
  - Allowing Developers to post via plug-in.
  - Creating forums.
- Use of an online web-host and control panel/database at AgilityHoster.

Having never used Wordpress, online web-hosting/control panels/databases, PHP, or JavaScript before, it was of utmost necessity to put in time to learn all of these in advance of beginning to build the project. This ended up being a big time consumer for the project. To expand:

- Time was spent downloading, installing and getting used to Wordpress, as described in 4.1.1.
- Self-teaching of the languages was done using W3schools tutorials. This covered completing the HTML tutorials, and doing basic PHP and JavaScript tutorials (and returning to them over the course of the project as needed).
- However, all of this took quite a while, due to the steep learning curve, previously unknown syntax and unforeseen time constraints.

2.3.5 Performance

As a Wordpress site, it is relatively lightweight. There would have been performance issues with free web-hosting speeds and storage (as well as limits on access to certain things), so paid hosting was opted for. As such, the performance required from this project should pose no problems at all.

2.3.6 Considerations

For this project, there are a number of things that need to be taken into consideration, such as:

- Steep learning curve for software/applications being used.
  - This is explained above in 2.3.4 in more detail.
- Security for Users/Developers.
  - While there are numerous third-party applications being used in the development of this project, and they all have their own levels of security, it is important to make sure that all of them work to an acceptable standard, and cover all the security that would be expected of any high-level site.
- What testing should be done?
  - For a site like this, a vast amount of testing is done as each separate part is being developed. This ties in with the Agile approach to the project (2.3.2).
  - Any extra testing on top of this may require a User sample, and thus more time management, coordination and ethics forms.
  - If not the above, then extra testing involves a full comprehensive run-through of all possibilities for ConnectIt.
- How much maintenance will be necessary?
  - There is a possibility that most of the site will be self-maintaining or maintained by those registered to the site.
  - A skilled administrator/moderator is mandatory.
- Legalities that need researching/Terms and Conditions of Use.
  - See Appendix 3.
2.3.7 Limitations/Possible Issues Some of the issues that may be faced, and limitations that exist, are:
- Limited amount of time
- Scope creep
- Testing and maintenance
- Usage of any new software/languages
Chapter 3 - Design This chapter will deal with the shaping of the web platform, and show exactly how all features can be implemented on ConnectIt, as well as the specifications surrounding them. 3.1 Shaping the Website This section focuses on the design of the visible and usable elements that will be included in the front-end of the website. 3.1.1 Pages There will need to be a certain number of pages, each displaying different types of content relevant to Users and/or Developers.
- A "Home page"/"Splash page"/"Welcome page".
- An "About" page describing what the aims of the site are, who created it, and what it can be used for.
- A page that explains to those accessing the site the difference between Users and Developers.
- A page for submitting posts, as well as uploading content and details about registration.
- A posts feed. This would be a page that would contain all of the posts (probably in chronological order).
- A contact page for contacting the moderator/administrator.
- A page that outlines the Terms and Conditions of using the site. This page must be accessible and visible to anyone who accesses or uses any part of the website.
3.1.2 Posts There must be a few options for the posts too:
- They will have their own page/feed (as in 3.1.1).
- They will be able to be accessed via this feed, and also via the search function.
- Default sorting will be chronological; however, posts can be filtered by the search function, by author, or by category.
- Posts can be created by the Developers using the content that they have submitted. One way of doing this will be by editing the Press This plug-in (see 4.3.3), so that they can access it via the sidebar.
- The "edit" function will be enabled for authors on their own posts (i.e.
they will be able to see it beside each of their posts). 3.1.3 Discussion The discussion will come about in three different forms: Comments, Facebook Comments, and Forum Discussions.
- Comments will be set so as to allow anybody to comment as long as they have a valid email address.
- To avoid spam (although there is a spam-filter plug-in activated), anyone commenting must first have a previous comment approved.
- Users will also be able to log in using their Wordpress account and comment with that.
- Facebook commenting will be enabled for posts too (if a User does not wish to give their email, it would defeat the aims of the project to dissuade them).
- Forums operate with the same security parameters as the comments.
- Forum topics can be created by anyone with a query (again, so as not to dissuade Users from asking their questions).
3.1.4 Sidebar The sidebar will exist to provide easy access to some of the most used functions on the website.
- Login and Registration links, for those who need to quickly log in and those who haven't yet registered.
- The aforementioned (3.1.2) modified Press This plug-in. This will allow for quick creation of a post by those registered for the site.
- Links to the most recent posts. This is important to keep everyone as up to date as possible.
- Links to forums (as with the posts).
3.1.5 Footer The footer will exist to provide access to some of the lesser used functions on the website.
- The Terms and Conditions page. This will be visible in the footer of every page, thus showing anyone who accesses the site what their rights and responsibilities are.
- Categories of different types of posts.
- A calendar of all posts created (for a particular month).
- The "Meta" section. This will include quick links for the administrator to access what he/she needs.
3.1.6 Header The header will be the navigation bar for ConnectIt. It will contain links to all of the main pages (3.1.1) as well as a search function implementation.
3.2 Shaping the Back-end 3.2.1 File Management File management is important to this project, because it allows an administrator to access, update, edit, change, fix, add and delete all files associated with the system, whether they are visible on the front end or not (2.3.3 point 7). As such, through the lightweight file manager within the control panel, access is available to all the files contained within the standard Wordpress install (and subsequent changes made via the Wordpress Dashboard front-end). Through said file manager, it is also possible to add the extra files needed to extend a Wordpress install. In this case, that will be PHP scripts that must be included so that they can be called from the front-end when needed. Extra folders can also be created in order to store files that are not directly associated with Wordpress. To expand, this means that uploaded content can be directed to be stored in a separate folder, and pulled from there once the content is needed again. This ties in with some of what this project will seek to achieve. 3.2.2 Database Management Database management is another important aspect of the project. This is because it may be necessary to edit, change, fix, delete, update and access details from the back end (i.e. not via Wordpress). These details can range from registered User details, to different posts and pages. The control panel offers links to the phpMyAdmin tool for handling the administration of the MySQL database (2.3.3 point 8). From here, Users can be deleted, settings for posting can be changed, comments can be edited, and other important functions performed. This would be very useful in the event that a User lost their login details, or a post had to be removed remotely, for example. All in all, it is a very useful tool. 3.2.3 Email Another useful tool provided for dealing with options in the back-end is the email functionality (see 2.3.3 point 9).
The web hosting plan includes up to 1,000 email addresses for the domain name (e.g. admin@ruairikel.com). A third-party email service, Roundcube, is provided (through the control panel once again) so that emails can be sent and received. This can be done by accessing Roundcube directly, or through the email functions provided through the Wordpress dashboard. Chapter 4 - Implementation: Part 1 This section of the report will deal with how the project was set up to be built - following from Chapter 3. Focus will be on the software set-up, language usage, development of plug-ins, specific features for the website, and tackling all the initial aims set out in the above requirements documentation (2.3). 4.1 Wordpress 4.1.1 Download and Local Host Test To begin, Wordpress was downloaded from the website and installed on a laptop, using a local host through XAMPP. This means that the site was viewable as if it were live from the local machine, but it was not actually online. This also meant that it could be controlled through locally stored files and the XAMPP control panel. This was done in order to become more familiar with using Wordpress, and so be prepared for setting up online. 4.1.2 Move to Online Hosting Once this was done, the natural progression was to move to online website hosting. After minimal research, AgilityHoster was chosen. This was the obvious choice for a number of reasons: it had an automatic Wordpress install, fees were cheap, and the control panel interface was easy to use. *[Figure: Hosting Tools]* After much deliberation, and assessment of what was required, it was decided to opt for a year's hosting plan at $9.99. This included the necessary: unlimited disk space, unlimited traffic, 2 domains, 5 subdomains, 7 MySQL databases (3.2.2) and 1,000 email addresses (3.2.3) (2.3.3 points 8, 9).
Added to this, it also allowed for editing of the `php.ini` file, which would definitely be needed later on when PHP would be used for the project. To finish off setting up the hosting, the domain [http://ruairikell.com](http://ruairikell.com) was registered, and Wordpress was installed for this domain. 4.1.3 Advantages of Using Wordpress As has been previously mentioned, there are numerous reasons for using Wordpress for this project. A major reason is how it allows for the use of plug-ins (explained further in 4.3). This is a distinct advantage. Added to this, it cuts down on the amount of coding necessary to build a website (the aim of this project is not to build a website from scratch, but to create a space that bridges the User and Developer communities, and as such many hours can be saved by using a Wordpress platform). It is also so widely used that the interface will be relatively familiar to many Users. **4.2 Languages** Certain languages would have to be used in order to create the features required for ConnectIt. **4.2.1 HTML** HTML is key for building any website, as it is the basis of everything that we see (of course this is tied in with CSS formatting). However, it is doubly important in Wordpress, as there is built-in functionality for creating posts or pages that include HTML. This can allow for the creation of forms etc. *within the posts on the site itself*. This would be important for the site as it developed. **4.2.2 PHP** PHP is a server-side scripting language (PHP, 2014), which is what was needed in order to access the servers from the Wordpress site. In order to allow PHP to work within Wordpress, there had to be some adjustments made front-end and back-end. The front-end involved a plug-in to allow for PHP inside Wordpress (see 4.3.1 for more details). The back end involved editing the `php.ini` file supplied via the control panel on AgilityHoster.
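To make that back-end adjustment concrete, the sketch below shows one plausible shape for such a `php.ini` edit. The directive names are common PHP configuration settings, but the contents of the actual AgilityHoster file are an assumption and are not reproduced here:

```ini
; Hypothetical excerpt - not the actual AgilityHoster php.ini.
; Hosts often block networking functions via disable_functions, e.g.:
;   disable_functions = fsockopen, exec, shell_exec
; Removing fsockopen from that list re-enables socket connections:
disable_functions = exec, shell_exec

; Also relevant when pulling content with file_get_contents():
allow_url_fopen = On
```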
In order to be able to push content to the servers and to pull that content again, `fsockopen()` must be enabled within this file (it is disabled by default, which supports the reasoning in 4.1.2 that access to the file was needed). PHP would also be used in scripts uploaded directly to the server (which could then be accessed through the front end). The web hosting file manager allowed very well for this, and also allowed for online editing of those files, which was important. 4.2.3 JavaScript A third language that was needed was JavaScript. This too is not automatically supported within Wordpress, and therefore needed a plug-in to work (see 4.3.2 for more details). JavaScript would allow for certain plug-ins to be integrated into the website. These plug-ins would make the site easy to use and also allow for the integration of JavaScript where it would not otherwise be able to go. 4.3 Plug-ins As has been mentioned above, there were plans for a lot of plug-ins to be included in this site for various reasons. Here is an analysis of those that were installed. 4.3.1 Allow PHP in Posts The title of this plug-in is very self-explanatory. It allows for PHP code to be run from inside a Wordpress post or page. Instead of writing it traditionally as `<?php ... ?>`, the plug-in allows you to write it as `[php]...[/php]` and translates this to the traditional format itself. This plug-in will allow those creating posts to pull their content from the website directly to the post for display. This is a significant part of what will be in play for the completed website. 4.3.2 HTML JavaScript Adder The HTML JavaScript Adder is a plug-in that allows the creation of widgets anywhere on the Wordpress site. This means that it can be used to display other plug-ins for use by any Users on the site. In particular, it will tie in with the Press This plug-in (4.3.3) to allow others to use it.
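To illustrate the kind of translation 4.3.1 describes, a minimal sketch follows. This is a guess at the mechanism rather than the plug-in's actual source, and `run_php_shortcodes` is a hypothetical helper name:

```php
<?php
// Hypothetical sketch: turn [php]...[/php] markers in post content
// into the output of the enclosed PHP, as the plug-in is described doing.
function run_php_shortcodes($content) {
    return preg_replace_callback(
        '/\[php\](.*?)\[\/php\]/s',
        function ($m) {
            ob_start();          // capture whatever the snippet echoes
            eval($m[1]);
            return ob_get_clean();
        },
        $content
    );
}

echo run_php_shortcodes("Before [php] echo 2 + 3; [/php] after");
// prints: Before 5 after
```

Evaluating arbitrary PHP from posts is of course a security risk, which is one reason posting rights are restricted to registered Developers (4.4.1).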
4.3.3 Press This Press This is a bookmarklet-based plug-in used within Wordpress for creating quick posts from any webpage on the internet. It normally pulls links directly from the website currently being viewed, and is normally only available to the administrator. However, it can be tweaked (with the help of JavaScript, 4.3.2) so that registered users of ConnectIt will be able to use it to create posts of their own with their content (2.3.3 point 3). 4.3.4 bbPress bbPress is a plug-in that allows for the creation of forums within Wordpress. This will not be edited from what it supplies, as the software is all-encompassing and it helps to facilitate one of the main aims of the project - to have discussion (2.3.3 point 1). This is vastly important. 4.3.5 Other - Acunetix Secure Wordpress - built-in Wordpress security - Broken Link Checker - checks all links on the site are working - Contact Form 7 - allows for simple contact form creation - Akismet - used to help block spam comments 4.4 Wordpress Features There are also additional features of Wordpress that will be put to good use for this project. 4.4.1 Posting Posting has already been mentioned above, but it is important to clarify what exactly it is. Each Wordpress site can allow registered users to create new posts on the site (akin to blog posts). This is what is envisaged for Developers on the site (2.3.3 point 3). Therefore, it is a feature that will be tapped into and used to its fullest potential. 4.4.2 Commenting Commenting is another important feature **(2.3.3 point 1)**. It is important because it ties into the project's aim of facilitating discussion. The important part about Wordpress commenting is that it can be configured so no sign-in is necessary (just a valid email address), which is something that has been mentioned above as important for the **Users**.
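The comment rules described in 4.4.2 correspond to standard Wordpress discussion options, which were set through the dashboard's Settings screen. Purely as an illustration, the same settings could be applied programmatically (option names are from a standard Wordpress install; the site did not actually do this in code):

```php
<?php
// Sketch only - the site set these via Settings > Discussion, not code.
update_option('comment_registration', 0); // no account needed to comment
update_option('require_name_email', 1);   // but a name and valid email are
update_option('comment_whitelist', 1);    // require one previously approved comment
update_option('comment_moderation', 0);   // after that, comments appear immediately
```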
4.4.3 Sidebar/Footer/Header By editing the widgets that come with a standard Wordpress set-up, it is possible to create very professional sidebars, footers and headers **(3.1.4, 3.1.5, 3.1.6)**. A lot of the information and links to specific pages for the site will be contained within the sidebar/footer, and the header will be a navigation bar for ConnectIt. Thus, it is important to make the best use of all of these features too. 4.5 Fulfilling Requirements It is important at this point to note why the **Design** and **Implementation** aspects are being discussed in such detail here and in Chapter 3. This is because it is imperative to show that the project will fulfil the requirements set out above in **2.3** (with specific reference to 2.3.3). As the design is set up as per this Chapter (4), as well as somewhat in Chapter 3, it should allow for all aspects of those requirements to be realised. It is also important to note that all of these software choices were made through careful consideration, with the forethought that they would be the most beneficial to completing the aims of this project (their benefits being shown throughout this chapter). However, were somebody else to tackle this same issue, they might implement a similar approach using different technology (for example using AJAX [Kyrnin, 2014] instead of PHP, using Joomla or Drupal instead of Wordpress, or choosing different plug-ins). Chapter 5 - Implementation: Part 2 This Chapter will take all that has been outlined in Chapters 3 and 4 regarding the initial design and beginning of implementation, and show how each part was then further implemented and specific features completed. 5.1 Front End Full Implementation This section deals with the front-end of the website. The dashboard is used for the entire build here, and plug-ins are woven throughout all features to be discussed. This will be shown clearly within this part (5.1).
5.1.1 Dashboard The Wordpress dashboard is a place where the creator can log in to change front-end settings, create posts, pages etc., and monitor Users' contributions (as well as approve, delete and edit comments/posts). The files responsible for the appearance of the site can also be edited from the dashboard, as can all plug-ins be managed (installation and settings editing). For 5.1.2 - 5.1.7, the dashboard's many functions were used in order to build and change the front end. 5.1.2 Pages Following from 3.1.1, there were numerous pages to build. Some of these were simple information pages, while others required more information or a more technical aspect that needed to be included. - The first page visible to visitors was to be some kind of splash page. Here, a simple Welcome Page was opted for, with two lines explaining what ConnectIt is and how to get started with the site. This required some simple text, and some links to relevant pages. - The next page was a Posts page. Within the Wordpress settings, this was made the default page for posts to be placed in chronological form (see 5.1.3 for more details). - The third page was one of the most important to create for ConnectIt. The Submit page involved creating an HTML form with upload buttons that in turn called a PHP upload script in the back-end (see 5.1.3 for full details). It also contained a list of instructions on how to upload, create posts and submit them to the site (as well as simple registration/login instructions). The text here took some refining in order to keep it as short as possible with as much information as necessary. All of the above pages were featured in the header. The last pages to be featured in the header were informational pages: explaining the difference between Users and Developers, explaining the features contained in the site, and a contact page for anybody to use. A final page to be included was the Terms and Conditions page.
This was to be featured on every page of the site and as such was included in the footer (see 5.1.6). 5.1.3 Uploading and Creating Posts As mentioned in 5.1.2 (and before), posts need to be created by those registered for the site. Thus, the Submit page was created. As previously mentioned, this involved an HTML form:

```html
<html>
<form action="upload_file.php" enctype="multipart/form-data" method="post">
<label for="file"></label>
<input id="file" type="file" name="file" />
<input type="submit" name="submit" value="Upload" />
<p style="padding-left: 30px;"> </p>
</form>
</html>
```

The call to the PHP upload script "upload_file.php" will be discussed in detail in 5.2.1. However, this is an example of what it returns:

```
Please copy the following text:
[php] echo file_get_contents('upload/htmlform.txt');[/php]
Click here to return to Submit Page
```

The text that is copied acts as explained in 4.3.1 and will echo the contents of the uploaded file. The next step in submitting is to put this copied text into a post. This is done by pasting it into the modified Press This plug-in (see 4.3.3), located in the sidebar. This was modified by using the HTML JavaScript Adder and adding it as a widget to the sidebar. This pops up a new Press This box, which is completely blank (unlike a normal Press This), into which the copied text is inserted, as well as any other details, such as a title, links to a code repository, the back story for the code, or whatever else the poster decides to include. From this, the Developer will get the contents of their file - i.e. the code - and everything else they have added, in a post. They then have the ability to edit/delete this post when necessary, as well as being notified by email when a comment is made on their post. 5.1.4 Discussion Above, three different forms of discussion have been decided on for this site and its purposes (see 3.1.3).
To reiterate, these three are: Comments, Facebook Comments, and Forum Discussions. Following from 3.1.3, these discussions must be facilitated. The easiest of these to set up was the comments section. In order to follow the aims of this project, and not dissuade Users from interacting and contributing to the discussion, it was set - via the dashboard - so that all that was necessary for a User to comment was an email address and a name (not necessarily a full name). They could also use their Wordpress login if they had one. Following from this, it was decided that these Users must have one previously approved comment on the site (verifiable by email or by Wordpress details) in order to comment freely. Once their first comment had been approved by the moderator (i.e. as not spam), they could comment on any post that they desired. The second implementation was that of the Facebook comments. This was set up in case a User was not comfortable giving their email address; if they were already logged in to Facebook, they could simply type and post their comment with no fuss. To attempt this, two different plug-ins were first tried - LoginRadius and OneAll Social Login. These would integrate Facebook comments (as well as other social media websites) and allow Developers to log in via those platforms rather than providing an email address. However, when these were set up and put in place, the login did not work for either plug-in. After troubleshooting, it seemed that the problem rested in the *php.ini* file residing within the control panel, and some settings had to be changed within that. After tweaking those settings though, nothing changed, and it was decided to give up on LoginRadius and OneAll and go directly to Facebook commenting.
Firstly, two pieces of code were placed in the *header* file and the *comments* file respectively for the Wordpress installation:

```
<script>(function(d, s, id) {
  var js, fjs = d.getElementsByTagName(s)[0];
  if (d.getElementById(id)) return;
  js = d.createElement(s);
  js.id = id;
  js.src = '//connect.facebook.net/en_GB/all.js#xfbml=1&appId=624693737579931';
  fjs.parentNode.insertBefore(js, fjs);
})(document, 'script', 'facebook-jssdk');</script>
```

```
<div class="fb-comments" data-href="http://ruairikell.com" data-numposts="10" data-colorscheme="light">
</div>
```

This meant that a box for Facebook commenting would now show up at the bottom of the page too. The third part of the discussion is the forums. These are powered by a great Wordpress plug-in called bbPress (see 4.3.4). As with the comments, it was not desirable to have anything that could possibly exclude Users from the discussion, so it was set so that no registration would be necessary to: comment on the forum, create a new topic on the forum, or create a new specific forum. Again, the forums are moderated. 5.1.5 Sidebar The sidebar (as in 3.1.4) contains a lot of the most used functions on the website. This means that they are visible at all times and easily accessible. As such, there is a Get Started section containing links both to log in to the site and to register for the site (login and registration are both handled by Wordpress, and operate on very simple email/username/password terms). There is also a Create New Post section, into which was placed the modified Press This plug-in (see 5.1.3). Then there are two sections covering generally well-used links - Recent Posts and Forums. 5.1.6 Footer The footer contains four things, as seen above. A simple calendar widget has been placed here for quick, easy viewing of when something has been posted. The same can be said for the Categories. The Meta exists for ease of use for the administrator.
There is also the permalink to the Terms and Conditions, as outlined in 5.1.2. 5.1.7 Header Finally, we have the header, which is the navigation bar for the site. This has links to all the pages (as covered in 5.1.2). It also contains a slide-out search plug-in, which allows anybody accessing ConnectIt to search the entire site's contents. 5.2 Back End Full Implementation While most of the work for this was done in the front-end, as shown in section 5.1, there was still some work to be done in the back-end, via the control panel or otherwise. In this section, that work will be discussed. 5.2.1 Upload Script and File Storage To relate back to 5.1.3, there was a need for a PHP script that could upload files to the server. In order to achieve this, the first step was to create a new folder in the file manager, called "upload", and put it inside the folder for the website (i.e. the website folder called ruairikell.com). Then a file called upload_file.php was uploaded into the main directory folder for the website (to allow it to be accessed by the HTML form in 5.1.3). To explain the code: when this file is called (in this case, by the HTML form in 5.1.3), it will first check that the chosen file is a text file (by checking against the allowed file extensions), and that it is under 1 MB (by checking against the file size limit). If so, and there are no other standard errors, it will check whether the file already exists. If it does not, it will then upload the file, remove all punctuation and spaces from its name (as it was found that these could often cause parsing errors), and move the file into the "upload" folder that had been created earlier. It then echoes the text shown above (5.1.3). If the file is over 1 MB, not a text file, or already exists, the appropriate error will be shown instead of the text. By storing the file in the "upload" folder, it can then be accessed from the Wordpress post creator via PHP code (the copied text in 5.1.3).
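The logic just described could look something like the sketch below. This is a reconstruction from the description above, not the actual upload_file.php; the exact limits, messages and regular expression are assumptions:

```php
<?php
// Hypothetical reconstruction of upload_file.php as described in 5.2.1.
$allowed = array('txt');   // text files only
$maxSize = 1048576;        // 1 MB limit

$name = $_FILES['file']['name'];
$ext  = strtolower(pathinfo($name, PATHINFO_EXTENSION));

if (!in_array($ext, $allowed)) {
    die('Error: only text files may be uploaded.');
}
if ($_FILES['file']['size'] > $maxSize) {
    die('Error: file exceeds the 1 MB limit.');
}
if ($_FILES['file']['error'] !== UPLOAD_ERR_OK) {
    die('Error: upload failed.');
}

// Strip punctuation and spaces from the name (keeping the extension),
// since these were found to cause parsing errors.
$base   = preg_replace('/[^A-Za-z0-9]/', '', pathinfo($name, PATHINFO_FILENAME));
$target = 'upload/' . $base . '.' . $ext;

if (file_exists($target)) {
    die('Error: a file with this name already exists.');
}

move_uploaded_file($_FILES['file']['tmp_name'], $target);

// Echo the snippet the Developer copies into their post (see 5.1.3).
echo "Please copy the following text:\n";
echo "[php] echo file_get_contents('" . $target . "');[/php]";
```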
This is allowed in Wordpress only because of the installation of Allow PHP in Posts plug-in (discussed in 4.3.1). That means that the content can be pulled from the server to that post. 5.2.2 PHP.INI File In theory this all should have worked fine as it was. However, it was throwing up errors within Wordpress that had to do with the file_get_contents() part of the PHP code. Troubleshooting resulted in finding out that for this to work, fsockopen() has to be enabled in the php.ini file contained within the web hosting control panel. Therefore, it was necessary to go into that file and edit it so that this was enabled. After this was done, the errors stopped occurring. 5.2.3 Emails A final, minor part of implementing everything in the back-end was to enable emails. This would allow for user registration, emails sent to the administrator via contact form, and any other emails (automatic or user generated) that needed to happen (see 3.2.3 also). This was easily completed through the control panel, and the email system set up (with Roundcube). This all went without problem, and was a great bonus to the project. _A larger selection of screenshots from across the website can be found in Appendix 4._ Chapter 6 - Evaluation This chapter is in relation to how the site was evaluated. This will be looking at how the site runs overall, as well as the testing that was performed on it. 6.1 Agile Testing As mentioned previously (2.3.2), the project was to take a part-Agile approach, and as such, testing would occur as each iteration of a new piece of the site was completed. 6.1.1 Beginning the Project The beginning of the project was not too difficult to assess. Simply put, the setting up of Wordpress, web-hosting, database etc. was done on a trial-and-error basis. 
While there was a learning curve with the new software, after spending the time to work out issues such as where to upload files, or how to edit themes, the project was set up and eventually (after much tweaking) the platform ran smoothly. 6.1.2 During the Project During the project, testing had to be continuous. As each new aspect was added, it had to be tested to see if it worked. If it did not, a new iteration occurred, with some changes. To give two examples: - The *Press This* modification: - The first attempt was to copy it and place it directly into a sidebar text widget. This did not work when tested, instead returning the raw code, so it was then placed inside a specific JavaScript widget in the sidebar. This managed to run when tested, but it was not yet modified for the purposes of the site. The code was then edited, saved, and tested again, until it worked. - The PHP upload script: - This was first tested simply to see if it uploaded, by checking the "upload" folder. This worked, and there then followed numerous iterations, each with different added lines of code: limiting by extension, limiting by file size, checking if the file already existed, removing punctuation, and echoing back text. Most of these ran smoothly, with the exception of removing punctuation, as that removed the full-stop before the extension, which was not desired, so a new iteration was required. After all of this, the code was fully tested. 6.2 Full Site Testing Come the end of the project, the end product had to be tested in its entirety. This meant deciding how to evaluate it, as well as evaluating comprehensively. 6.2.1 External Evaluation The first option was to have it evaluated by those external to the project, i.e. a sample group of "customers", who would rate the site via a number of metrics and provide feedback.
This was rejected for a number of reasons:
- Due to time constraints, setting up testing, finding subjects, and passing ethics approval forms would have taken too long and hindered the development of the project.
- There was a possibility that a lack of familiarity with the site would mean that the testers would miss something.
- The test subjects would not necessarily be from the groups that this project is targeting.
6.2.2 Internal Full Site "Run-Through" Therefore, it was decided to test the site internally. While this did mean that there would be no "customer feedback", it did allow for complete familiarity with the site, the ability to make needed changes on-the-go, and no time constraints save the time set aside for testing. The site "Run-Through" consisted of a number of checkpoints:
- Check that every page on the website is coherent and accessible
- Check that the Submit page does exactly as it says
- Check that discussion is available through:
  - Commenting while logged in
  - Commenting while not logged in
  - Facebook commenting
  - Forum topic creation and replies
- Check that posts can be edited and deleted both by an external registered user and by the administrator
- Check that emailing works
- Test the contact form
- Test the search function
### 6.3 Problems Encountered For both the Agile testing methodology and the full site test (as well as afterwards), there were of course problems encountered. #### 6.3.1 Coding Errors One of the common problems with any project is the errors that occur within the code. This happened very often over the course of this project. However, due to the Agile approach that was in place, it did not cause any hindrance other than time consumption. These coding errors ranged from incorrect formatting, to parsing errors, to compiling errors, to server restrictions. Fixing these required a vast amount of effort over the course of the project, yet it was necessary and worthwhile in order to achieve a finished working product.
#### 6.3.2 Site Restrictions There were also some restrictions encountered during this project. These ranged from not being able to access the `php.ini` file (thus restricting PHP pull requests), to firewalls blocking incoming RSS feeds (this was going to be a simple aesthetic addition to the site). While not all of these were resolved, and while they were again time-consuming, they were hurdles that could not simply be ignored, and had to be overcome for the sake of the project. #### 6.3.3 Spam One major issue that was not foreseen was that of spam. The security measures put in place by Wordpress do a wonderful job of blocking spam from comments. Added to this, one must have a previously approved comment in order to be able to comment without moderation. Therefore, comment spam was able to be dealt with efficiently. It also appeared, after many weeks of the site being live, that there were no other spamming problems. However, on one recent day, a lot of spam users began registering and creating posts. This was unexpected, as it should have been dealt with by Wordpress, and had not been a problem in the past. **However, with no current solution to this recent problem, registration has been closed on the site, and as such it is requested that if you wish to log in and use the site, you do so with one of the usernames and passwords supplied in Appendix 5.** Chapter 7 - Conclusion: Part 1 - Critique The first part of the conclusion for this project analyses and critically assesses the project as a whole, in relation to how the whole process went, the aims that were or weren't fulfilled, and the effect it had on personal development. 7.1 Fulfilling Project Aims This section is relatively simple to assess. Previously (in 2.3.3), there were nine points outlined that needed to be achieved in order to consider this project successful. Here it can be seen whether these points were all completed: 1.
There is an availability for both Users and Developers to discuss not only amongst themselves, but with each other - bridging their communities
2. The Developer can upload their original content to the site
3. That Developer is then given a unique piece of text that allows them to create a post around that content
4. There is an ability for the Developer to edit said post
5. A Developer can login, use, and manage their own account
6. While linking capabilities to repositories etc. remain only at the URL level, this is still possible on the site
7. Moderators are able to edit any/all posts or discussions
8. Moderators have access to the database and can manage it simply and effectively
9. Moderators also manage emails, which are sent automatically as well as by users to a site-registered email address

It is clear that the aims of the project were fulfilled, and thus the project overall can be deemed a success. However, we must add to this by comparing ConnectIt to the comparison chart made in part 2.2.1, and look at where it improves on those attempts. When this is compared with what is above in 2.2.1, it is very clear that ConnectIt not only fulfils its aims, but also meets the market needs where the other sites and plug-ins do not. Finally, the title of this project calls for the **bridging** of two communities. This is what has been achieved through the completion of the project.

### 7.2 Critique

While the project was a success, that does not mean it is beyond critique. There were some aspects that fell short of the expected standard.

#### 7.2.1 Content from Other Sites

One extra addition to ConnectIt that was desired was bringing in content from other sites (e.g. GitHub), and finding better ways to link to it within posts. This was never realised, however, as although it was attempted numerous times (and with a significant amount of time and effort involved), nothing came to fruition.
In an ideal situation this would have been an aspect that worked within the site and benefitted the project overall. It was a shame that it did not happen.

#### 7.2.2 Coding Issues

There were a number of coding issues that came up over the course of the project. Some (such as 7.2.1 above) were not rectified, while others (such as pull requests) were. This was all detrimental to the project overall, and would have been better avoided. More apt project management, more time spent learning the coding languages, and more specific and refined research into the problem areas would have gone a long way towards avoiding this problem.

#### 7.2.3 Set-Up Issues

Finally, there were also issues with the set-up. Too much time was spent learning the set-up of Wordpress, finding a web hosting service, and getting the site up and running. While a large portion of this can be attributed to working with this software and these software systems for the first time, an equally large portion of time should have been budgeted for it. Added to the issues involved with time management was perhaps the self-taught aspect of the new software, where seeking help would have been more appropriate and could have aided the project.

### 7.3 Personal Development

Personal development was also a huge part of this project. Growth was seen in numerous areas that allowed for a vast amount of development. Firstly, the learning of new languages (PHP, JavaScript etc.), development in other languages (HTML etc.), and the learning of new software (Wordpress, online web-hosting, phpMyAdmin etc.) all grew out of this project. Not only had they not been used before, but they all involved a steep learning curve that forced concentration and dedication. Secondly, project management was not a foreign concept, and had been used many times in the past. However, the sole and complete management of this project was a challenge - one which was successfully overcome.
This, added to personal growth in the areas of management, project development, time management, resource allocation and content management (among others), was a great bonus to completing the project.

## Chapter 8 - Conclusion: Part 2 - Future Work

This chapter takes a look at everything that wasn't done for the project. It does this by looking at what could be improved on the project, what could be changed and done in a different way, and what developments could be made in the future.

### 8.1 What Could Be Improved?

For this project, there were two very specific features that it was felt could have been implemented, but due to time and coding constraints weren't. They would be a great improvement to the site.

#### 8.1.1 Download Function

The first of these is a download function for the Developers' uploaded code. This would allow Users to download full copies of executable code without having to go to an external source (such as a GitHub repository). The hope is that this would add to both the User and Developer experiences of the site.

#### 8.1.2 Login with Other Platforms

Another feature was the ability to log in with other platforms (e.g. logging in with a GitHub username and password). This would allow greater ease of use for Developers, encouraging them to use ConnectIt more than they currently do.

### 8.2 What Could Be Done Differently?

Were somebody else to tackle this project, they could have done some things differently. Here are a couple of examples (with the possibility that they would be implemented in the future).

#### 8.2.1 Greater Integration of Other Sites' Content

By being a completely self-contained site, ConnectIt did not take advantage of the content that could be gleaned, imported, or linked from other sites (once again, see GitHub, BitBucket). This was a decision made due to the desire to keep the site self-contained, rather than being an amalgamation of other sites and their content.
However, if done correctly, it could be achieved in a manner that would benefit the site and attract more users/content.

#### 8.2.2 Plug-in Rather than Full Website

As per the initial idea of having a plug-in resting over other sites' content, the project could have followed its initial trajectory and been developed in full as a plug-in. While this did not happen, it was obviously considered as an option. This would once again benefit from linking with other sites and their content rather than simple self-containment, thus potentially expanding the reach of the project.

### 8.3 What Scope Is There for Future Developments?

#### 8.3.1 Build on What's Mentioned Above

The most obvious move for future development is to look at what has already been mentioned in this chapter. That is to say, the next steps should be to implement: a download function, login with other platforms, greater linking to other sites' content, and development of a plug-in to go alongside the website. All of these options are considered feasible possibilities that should be implemented in the future after careful design consideration.

#### 8.3.2 Teaching Facility

As a secondary requirement (i.e. not completely necessary) for this project, exposure to learning how to code was suggested. Through gradual exposure to code, this is something that could happen with ConnectIt. To build on this, it could be made a primary requirement, and a full teaching facility could be implemented that would keep Users on the site, rather than pushing them to W3schools or Codecademy.
## Bibliography - References

### List of Websites

- AgilityHoster: http://www.agilityhoster.com/
- BitBucket: https://bitbucket.org/
- Codecademy: http://codecademy.com
- CoderDojo: http://coderdojo.com
- ConnectIt: http://ruairikell.com
- Drupal: https://drupal.org/
- Get Satisfaction: https://getsatisfaction.com
- GitHub: https://github.com
- Joomla: http://www.joomla.org/
- LoginRadius: http://www.loginradius.com/
- Netbeans: https://netbeans.org/
- OneAll: http://www.oneall.com/
- Sourceforge: http://sourceforge.net
- Stack Overflow: http://stackoverflow.com
- UserEcho: http://userecho.com/
- Uservoice: https://www.uservoice.com
- Wordpress: http://wordpress.org
- W3schools: http://w3schools.com
- XAMPP: https://www.apachefriends.org/index.html

## Appendices

### 1: Original Idea

The original idea for this project was to have a communications-style plug-in (or plug-ins) that would be integrated with other plug-ins and tools on a CMS (Content Management System) platform. This tool would allow users to discuss with and directly query the Developers of whatever feature they were using, from the very website where they were using said feature. This would be instead of having to go to a Developer site/repository (for example) where the code is contained, and attempting to query from there or flag an issue. The idea was taken (see Appendix 2 below) to the draft-requirements stage, and some level of implementation was achieved from this. As can be seen, the draft requirements documentation closely resembles the final requirements document, and ergo, the final implementation did end up achieving some of what was contained in the original idea. What it achieved, however, was in the aims (with regard to discussion etc.) and not the specific architecture.
### 2: Original Draft Requirements Document

**Title & Abstract:**

Project: Bridging User and Developer Community Discussions for CMS Plug-ins

The purpose of this project is to look at the community aspect of user-generated content, and developers' additions/changes to this content, with the end goal of creating a web-app platform that allows for social connectivity between content users and developers. This social connectivity will allow for community-style discussion on the use and development of the content.

**Background & Purpose:**

This project, as described above, came about due to what I perceived to be a need for a form of social/online interactivity between users and developers. As such, research was done during the period of August - September 2013, and eventually the project title was conceived, with the aim of creating a plug-in style interface/web-app to allow this to happen.

**Introduction:**

In order to begin this project, I have had to decide on what the best methodology would be to use, and for this type of project it came down to either using a "waterfall" style approach, or using an "Agile" style approach. Having said this, I believe that it is entirely possible for me to use aspects from both of these styles. I feel that Agile aspects will be in place as I will use numerous iterations of the process in order to further the development of my project. However, I do feel that there will still be more of a focus on one particular aspect each time around, in keeping with a waterfall approach. For example, my project may have four phases, with phase one being Requirements, phase two being Design, phase three being Development (coding and testing), and phase four being Launch (final testing and 'okaying' the project). This would follow a waterfall approach. However, during each of these stages I would also be incorporating each of the other aspects, e.g. doing building and testing right through phases one, two and three. This would follow an Agile approach.
**Functionality:**

Here I hope to address what exactly I want the end product to be able to do. The end product must by definition be accessible to both developers and users. As we can see from the use case diagram, this means that it must contain the following:

- Commenting functionality for both user and developer
- Ability for the developer to link to their content (which is stored elsewhere)
- Ability for both users and developers to edit said content
- Login capabilities for both users and developers
- Linking capabilities for both users and developers
- Capability for storage of user and developer logins
- Capability to store edit and revision logs of content

**Software/Hardware:**

In order to develop this properly, I had to decide what I would use to develop the project. In terms of hardware, nothing is needed other than my own laptop. For software, there is a little to be discussed:

- Using Notepad++ to write code.
- Plan to develop as a Wordpress plug-in, to reduce the amount of unnecessary coding to be done.
- Most development should take place using JavaScript, jQuery, and PHP, and possibly some HTML5.
- Some research has already been done on these languages, along with a small amount of previous use; however, a lot more learning is necessary.

**Performance:**

With links to an external repository as well as being developed as a Wordpress plug-in, and being relatively lightweight, there should be few problems with hosting, speed etc. Users should always be able to use this product without any hitch.

**Considerations:**

For this project, there are a number of things that need to be taken into consideration, such as:

- Security for logins
- Ethics forms for testing
- How much maintenance will be necessary?
- Are there any other legalities that need researching?
**Limitations/Possible Issues:**

Some of the issues that may be faced, and limitations that we have, are:

- Limited amount of time
- Scope creep
- Passing ethics forms
- Usage of any new software/languages

### 3: Complete Terms and Conditions from Site

Statement of Rights and Responsibilities

By using or accessing this website, you agree to all that is contained in this statement.

1. Privacy

   We aim to respect all elements of your privacy. Your email address and personal information will not be shared by us, but may be shared by you.

2. Your content
   1. Any post that you make may be edited or deleted by you.
   2. If the administration deems any content not to follow the standards set out in Section 6, they may remove it.
   3. Any file that is uploaded will remain on our server and will not be removed when a post is deleted. If you wish to have a file removed, you must email admin@ruairikell.com with your request.
   4. If you make a comment or forum post/reply, you have the option of removing or editing it. If there is a problem with it, email admin@ruairikell.com with your request.

3. Your Account
   1. If you register for an account, your username and email address are shared with the administrator.
   2. You have access to your own account and may edit the settings, but you do not have access to other users' accounts.
   3. If your account, activity or behaviour does not comply with the standards set out in Section 6, the administrator has the right to remove your account.
   4. You will not create more than one account.
   5. You will not share your password, or let anyone else access your account.
   6. You will not transfer your account to anyone else.

4. Other People's Rights
   1. You will not post content that infringes or violates someone else's rights, or violates the law.
   2. This site is intended for sharing, and as such all content is deemed open source. You do not have the right to violate this by claiming others' work as your own, or posting non-open-source or illegal content here.
   3.
If you are collecting information about others, you will make it clear to them that you are doing so, and will not obtain any information under false pretenses.

5. Discussion
   1. Comments and forum posts/replies must comply with the standards set out in Section 6.
   2. Comments and forum posts/replies may be edited, held for moderation, or deleted by the administrator at any time.
   3. Any Facebook commenting system issues or problems may be on Facebook's part, or on our part. As such, contact the administrator at admin@ruairikell.com immediately in order to allow us to solve the problem.

6. Standards

   This section outlines all the standards you must keep to in creating, posting, commenting, and discussing on this website. Failure to do so may result in termination of your account and possible legal action from the party or parties affected.

   1. Violence and threats are not permitted. You may not credibly threaten others, or organise any acts of violence. When we perceive a genuine risk of harm or safety, we will remove the content, with the possibility of law enforcement involvement thereafter. Any content relating to the discussion of previous acts covered above will also be removed.
   2. Any discussion relating to the promotion of self-harm, suicide, or related matters (including drug abuse, self-mutilation, and eating disorders) will be removed.
   3. We do not permit bullying or hate speech. We encourage critical and challenging discussion of any content on this site, and we make a distinction between humorous and serious speech. We do not permit attacks on others based on their race, ethnicity, national origin, religion, gender, sexual orientation, disability or medical condition.
   4. Any content deemed too graphic (covering gore, abuse, extreme violence and sadism) will be removed.
   5. Nudity and pornography are not allowed on this site and will be removed.
   6.
You are only allowed to share content on this site that is open source, creative commons, or for which you have expressly asked for the intellectual property rights so as to be allowed to post it.
   7. You may not promote the consumption of goods in a form deemed (by the administrator) to be advertising. This content will be removed. You also may not complete any transactions involving currency, trading or bartering on this site.
   8. Any attempts at spam or phishing will be dealt with severely, and all content will be removed.

### 4: Screenshots

### 5: List of Users and Passwords for Access

- **Username:** User3 **Password:** iYrQgYqDPXBw
- **Username:** User5 **Password:** zHFDsUNTBh6B
- **Username:** User6 **Password:** FkFryEhXEyfJ
NATCracker: NAT Combinations Matter

Roberto Roverso¹,², Sameh El-Ansary¹,³, and Seif Haridi²

¹ Peerialism Inc., Sweden, ² KTH-Royal Institute of Technology, Sweden, ³ Nile University, Egypt
{roberto,sameh}@peerialism.com, seif@it.kth.se

Abstract—In this paper, we report our experience in working with Network Address Translators (NATs). Traditionally, there were only 4 types of NATs. For each type, the (im)possibility of traversal is well-known. Recently, the NAT community has provided a deeper dissection of NAT behaviors resulting in at least 27 types and documented the (im)possibility of traversal for some types. There are, however, two fundamental issues that were not previously tackled by the community. First, given the more elaborate set of behaviors, one must reason about the traversal of NAT type combinations rather than of each type separately; a comprehensive analysis of such combinations is a first outcome of this paper. Second, there is a serious need for some kind of formalism to reason about NATs, which is a second outcome of this paper. The results were obtained using our own scheme, which is an augmentation of currently-known traversal methods. The scheme is validated by reasoning using our formalism, by simulation, and by implementation in a real P2P network.

I. INTRODUCTION

Dealing with Network Address Translators (NATs) is nowadays an essential need for any P2P application. The techniques used to deal with NATs have been more or less "coined" and there are several widely-used methods [1][2]. Some of them are de facto standards, like STUN [3], TURN [4], and ICE [5]. In the context of our P2P live video streaming application PeerTV, we are mainly concerned with media streaming using UDP, and therefore the scope of this paper is UDP NAT traversal. Moreover, we are strictly interested in solutions that do not use relaying (such as TURN), due to the high bandwidth requirements of video streaming.
We have found a great deal of previous work on the subject that aims to answer the following question: for every t in the set of NAT types T, which s in the set of traversal strategies S should be used to traverse t? The answer is of the form f : T → S, e.g. the following is an example with a couple of types: f : { Full-Cone, Symmetric } → { Simple Hole Punching, Port-Prediction } [6]. However, the point which we found not gaining enough attention is that the presence of a feasible traversal technique that enables two peers behind NAT to communicate depends on the "combination" of the NAT types and not on the type of each peer separately. Thus, the question should be: "Given 2 peers pα and pβ with respective NAT types t(pα) and t(pβ), which traversal strategy s is needed for pα and pβ to talk?" The answer is of the form f : T×T → S, i.e. we need to analyze traversable combinations rather than traversable types. Most works contain a few examples of combinations for explanation purposes [6][7]. However, we have failed to find any comprehensive analysis that states, for every possible combination of NAT types, whether direct (i.e. with no relay) connectivity is possible and how. The analysis is all the more topical given that the NAT community is switching from the classical set of NAT types $T_{classic} = \{ \text{Full-Cone, Restricted-Cone, Port-Restricted, Symmetric}\}$ [3] to a more elaborate set that defines a NAT type by a combination of three different policies, namely, port mapping, port allocation and port filtering [8]. With that, a statement like "two peers behind symmetric NAT cannot communicate" becomes imprecise, as we will show that in many cases it is possible given the nuances available in the presently wide spectrum of NAT types.

II. RELATED WORK

The work in [7] includes a matrix for a number of combinations, however mostly drawn from $T_{classic}$ rather than the more elaborate classification in [8].
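The shift from per-type to per-combination reasoning can be made concrete as a lookup keyed on unordered pairs of types. The sketch below is illustrative only: the table entries are hypothetical placeholders, not the combination analysis derived later in the paper.

```python
# Traversal strategy selection keyed on an unordered *pair* of NAT types,
# i.e. f : T x T -> S rather than f : T -> S.
# The entries below are hypothetical placeholders for illustration.
COMBINATION_TABLE = {
    ("Full-Cone", "Full-Cone"): "Simple Hole Punching",
    ("Full-Cone", "Symmetric"): "Simple Hole Punching",
    ("Symmetric", "Symmetric"): "Port Prediction",
}

def strategy_for(t_alpha, t_beta, table=COMBINATION_TABLE):
    """Return the strategy for the combination (t_alpha, t_beta), or None.

    The pair is sorted so that the lookup is order-independent: the
    combination, not the role of each peer, determines the strategy.
    """
    return table.get(tuple(sorted((t_alpha, t_beta))))
```

For example, `strategy_for("Symmetric", "Full-Cone")` and `strategy_for("Full-Cone", "Symmetric")` resolve to the same entry, reflecting that traversability is a property of the combination.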
The work in [6] is probably the closest to ours; one can see our work as a superset of the set of combinations mentioned in that work.

III. NAT TYPES AS COMBINATIONS OF POLICIES

In this section we try to semi-formally summarize the more elaborate classification of NATs known as "BEHAVE-compliant" [8] and craft the notation that we will use in the rest of the paper.

Notation. Let $n_a$ and $n_b$ be NAT gateways. For $i \in \{a, b\}$, let $P_i = \{p_i, p'_i, p''_i, \ldots \}$ be the set of peers behind $n_i$. An "endpoint" $e$ is a host-port pair $e = (h, p)$, where $h(e)$ is the host of $e$ and $p(e)$ is its port. Let $V_i = \{v_i, v'_i, v''_i, \ldots \}$ denote the set of all private endpoints of all peers behind $n_i$ and $U_i = \{u_i, u'_i, u''_i, \ldots \}$ be the set of public endpoints of $n_i$, i.e. $\forall v \in V_i, h(v) \in P_i$ and $\forall u \in U_i, h(u) = n_i$. When a packet is sent out from a certain private endpoint $v_i$ of a peer $p_i$ behind a gateway $n_i$ to some public endpoint $d$, a rule in the NAT table of $n_i$ is created. We define the set of NAT table rules $R_i = \{r_i, r'_i, r''_i, \ldots\}$ at $n_i$; a rule records the fact that some public endpoint $u_i$ and some private endpoint $v_i$ are associated, e.g. $r_a = (v_a \leftrightarrow u_a)$. The behavior of a gateway $n_i$ is defined by three policies, namely, port mapping, port filtering and port allocation. We use the notation $m(n_i), f(n_i), a(n_i)$ to denote the respective policies of gateway $n_i$.

A. Mapping Policy

The mapping policy is triggered every time a packet is sent from a private endpoint \( v_i \) behind the NAT to some external public endpoint \( d \). The role of a mapping policy is deciding whether a new rule will be added or an existing one will be reused. We use the notation:

1) \( v_i, d \vdash r_i \) to specify that the sending of a packet from \( v_i \) to \( d \) resulted in the creation of a new NAT rule \( r_i \).
2) \( v_i, d \Rightarrow r_i \) to specify that the sending of the packet reused an already existing rule \( r_i \).

3) \( v_i, d \not\Rightarrow r_i \) to specify that the sending of the packet did not reuse \( r_i \) because of some "reason".

Irrespective of the mapping policy, whenever a packet is sent from a private endpoint \( v_i \) to an arbitrary public destination endpoint \( d \) and there is no \( r_i \in R_i \) of the form \( r_i = (v_i \leftrightarrow u_i) \), for an arbitrary \( u_i \), the following is true: \( v_i, d \vdash r_i \). However, if such a mapping exists, the mapping policy makes the reuse decision based on the destination. For all subsequent packets from \( v_i \) to \( d \), naturally \( v_i, d \Rightarrow r_i \). However, for any \( d' \neq d \), there are 3 different behaviors:

- **Endpoint-Independent, \( m(n_i) = EI \):**
\[ v_i, d' \Rightarrow r_i, \text{ for any } d' \]
- **Host-Dependent, \( m(n_i) = HD \):**
\[ v_i, d' \Rightarrow r_i, \text{ iff } h(d') = h(d) \]
\[ v_i, d' \vdash r'_i, \text{ iff } h(d') \neq h(d), \text{ where } r'_i = (v_i \leftrightarrow u'_i) \]
- **Port-Dependent, \( m(n_i) = PD \):**
\[ v_i, d' \vdash r'_i \]

Having introduced the different policies, we decorate the notation of a rule to include the criteria that will be used to decide whether the rule will be reused, as follows:

\[
r_i = \begin{cases}
v_i \xleftrightarrow{m:\, v_i \rightarrow *} u_i & \text{if } m(n_i) = EI \\
v_i \xleftrightarrow{m:\, v_i \rightarrow (h(d),\, *)} u_i & \text{if } m(n_i) = HD \\
v_i \xleftrightarrow{m:\, v_i \rightarrow d} u_i & \text{if } m(n_i) = PD
\end{cases}
\]

Where the syntax \( m : x \rightarrow y \) means that the rule will be reused if the source endpoint of the packet is \( x \) and the destination is \( y \). The \( * \) denotes any endpoint.

**Order.** We impose the order \( EI < HD < PD \) according to the increasing level of restrictiveness.

B. Allocation Policy.
Every time a new \( r_i \) is added to \( R_i \), a new public endpoint \( u_i \) is bound. This policy allocates \( p(u_i) \). That is, the mapping policy decides when to bind a new port and the allocation policy decides which port should be bound, as follows:

1) **Port-Preservation, \( a(n_i) = PP \):** Given \( v_i, d \vdash r_i \), where \( r_i = (v_i \leftrightarrow u_i) \), it is always the case that \( p(u_i) = p(v_i) \). Naturally, this may cause conflicts if any two peers \( p_i \) and \( p'_i \) behind \( n_i \) decide to bind private endpoints with a common port.

2) **Port Contiguity, \( a(n_i) = PC \):** Given any two sequentially allocated public endpoints \( u_i \) and \( u'_i \), it is always the case that \( p(u'_i) = p(u_i) + \Delta \), for some \( \Delta = 1, 2, \ldots \)

3) **Random, \( a(n_i) = RD \):** \( \forall u_i, p(u_i) \) is allocated at random.

**Order.** We impose the order \( PP < PC < RD \) according to the increasing level of difficulty of handling.

C. Filtering Policy.

The filtering policy decides whether a packet from the outside world to a public endpoint of a NAT gateway should be forwarded to the corresponding private endpoint. Given an existing rule \( r_i = (v_i \leftrightarrow u_i) \) that was created to send a packet from \( v_i \) to \( d \), we use the notation:

1) \( r_i \leftarrow \hat{u}_i, s \) to denote that the **receive** of a packet from the public endpoint \( s \) to \( n_i \)'s public endpoint \( u_i \) is permitted by \( r_i \).

2) \( r_i \not\leftarrow \hat{u}_i, s \) to denote that the receive is not permitted because of some "reason".
There are 3 filtering policies with the following conditions for allowing receive: - **Endpoint-Independent, \( f(n_i) = EI \):** \[ r_i \Leftarrow u_i, s, \text{ for any } s \] - **Host-Dependent, \( f(n_i) = HD \):** \[ r_i \Leftarrow u_i, s, \text{ iff } h(s) = h(d) \] - **Port-Dependent, \( f(n_i) = PD \):** \[ r_i \Leftarrow u_i, s, \text{ iff } s = d \] We also decorate the rules to include conditions for accepting packets as follows: \[ r_i = \begin{cases} \left( v_i \xleftrightarrow{f:\, u_i \Leftarrow *} u_i \right) & \text{if } f(n_i) = EI \\ \left( v_i \xleftrightarrow{f:\, u_i \Leftarrow (h(d),\, *)} u_i \right) & \text{if } f(n_i) = HD \\ \left( v_i \xleftrightarrow{f:\, u_i \Leftarrow d} u_i \right) & \text{if } f(n_i) = PD \end{cases} \] **Order.** We impose the order \( EI < HD < PD \) according to the increasing level of restrictiveness. D. The Set of NAT Types Having defined the above policies, the NAT type of a given NAT gateway is simply a matter of listing which behavior is used for each of the policies. We define the set of triplets representing all possible NAT types \( \tau = \{ (m, a, f) \mid f, m \in \{ EI, HD, PD \}, a \in \{ PP, PC, RD \} \} \). IV. NAT TYPE DISCOVERY Before traversing a NAT gateway, one needs to know its type. STUN [3] is the most widely used method for accomplishing this, and there exist many publicly-available STUN servers that assist in the discovery process. The original STUN algorithm produces a classification drawn from the set \( \tau_{\text{classic}} \). More recently, [6], [8] have re-used the STUN infrastructure to get more detailed information, namely, knowing the filtering and the mapping policies. Due to space limitations and the fact that our main focus is on traversal strategies, we will not delve into the details of performing the discovery process. However, we just need to clarify that, in the spirit of [6], [8], we have expanded the scope of the discovery process to discover information about the allocation policy.
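The type space defined above is small enough to enumerate exhaustively. The following sketch (an illustration for the reader, not part of the paper's tooling) builds \( \tau \) and the set of distinct unordered peer-type combinations that coverage counting refers to later:

```python
from itertools import combinations_with_replacement, product

MAPPING = FILTERING = ("EI", "HD", "PD")
ALLOCATION = ("PP", "PC", "RD")

# tau: every (m, a, f) triplet is a distinct NAT type
tau = list(product(MAPPING, ALLOCATION, FILTERING))
print(len(tau))  # 27 possible NAT types

# Distinct unordered combinations of two peers' types
# (a pair of identical types counts once): 27 * 28 / 2 = 378
pairs = list(combinations_with_replacement(tau, 2))
print(len(pairs))  # 378 combinations to analyze for traversability
```

Enumerating unordered pairs (rather than ordered ones) is what makes the combination count 378 instead of \( 27^2 = 729 \).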
With that, our classification is capable of reporting all elements in the set \( \tau \). V. NAT TRAVERSAL TECHNIQUES We explain our traversal techniques, which are augmented versions of the well-known techniques in [1]. **Basic Assumptions.** We assume that there is a Rendez-vous server with a public IP, referred to by \( z \). The traversal process always starts after: i) two peers \( p_a \) and \( p_b \), respectively behind NATs \( n_a \) and \( n_b \), register themselves at \( z \) and have an “out-of-band” communication channel with \( z \), which is in our case a TCP connection initiated by the peer; we refer to all endpoints of \( z \) and \( z \) itself by the same symbol; ii) the two peers know that they need to communicate and know the other peer's public IP, i.e. the corresponding NAT IP; some peers supply additional information during registration, as we will shortly explain in Section VII-B; iii) all the policies of \( n_a, n_b \) are known to \( z \) using a discovery process before any traversal process takes place. VI. SIMPLE HOLE-PUNCHING (SHP) A. Traversal Process 1. \( p_a \) sends from some \( v_a \) to \( z \) through \( n_a \). 2. \( n_a \) creates \( r_a = (v_a \leftrightarrow u_a) \) and forwards to \( z \). 3. \( z \) receives and consequently knows \( u_a \). 4. \( z \) informs \( p_b \) about \( u_a \) (Out-of-band). 5. \( p_b \) sends from some \( v_b \) to \( u_a \) through \( n_b \). 6. \( n_b \) creates \( r_b = (v_b \leftrightarrow u_b) \) and forwards to \( u_a \). 7. \( n_a \) receives and, if the filtering allows, forwards to \( v_a \). 8. \( p_a \) sends from \( v_a \) to \( u_b \) through \( n_a \); if the mapping allows, \( r_a \) is reused. Otherwise, a new rule \( r_a' \) will be created and sending will occur from some other public endpoint \( u_a' \neq u_a \). 9. \( n_b \) receives and, if the filtering allows, forwards to \( v_b \). B.
SHP Feasibility **Theorem 6.1:** Simple hole punching is feasible for establishing direct communication between two peers \( p_a \) and \( p_b \) respectively behind \( n_a \) and \( n_b \) if \( \exists n_x \in \{n_a, n_b\} \) s.t. \[ f(n_x) = EI \text{ and } \left( m(n_x) = EI \text{ or } f(n_{\bar{x}}) < PD \right) \] where \( n_{\bar{x}} \) denotes the other NAT. **Proof:** We consider the most restrictive case, where \[ f(n_a) = f(n_b) = m(n_a) = m(n_b) = PD \text{ and } a(n_a) = a(n_b) = RD, \] and show the minimum relaxations that we need to do for SHP to work. By looking at the steps in Section VI, and considering the very restrictive mapping and filtering on both sides, we can see that after steps 5 and 6, \( r_a \) and \( r_b \) will be as follows: \[ r_a = \left( v_a \xleftrightarrow[f:\, u_a \Leftarrow z]{m:\, v_a \rightarrow z} u_a \right), \quad r_b = \left( v_b \xleftrightarrow[f:\, u_b \Leftarrow u_a]{m:\, v_b \rightarrow u_a} u_b \right) \] Which will cause the following problems. In step 7: \( r_a \not\Leftarrow u_a, u_b \), and there is nothing that we can relax at \( n_b \) which can help. Instead, we have to relax the filtering at \( n_a \) to indulge receiving on \( u_a \) from \( u_b \), while it was initially opened for receiving from \( z \); i.e., \( r_a \) has to tolerate a host change, which is not satisfied by PD or HD filtering, therefore \( f(n_a) = EI \) is necessary, resulting into \[ r_a = \left( v_a \xleftrightarrow[f:\, u_a \Leftarrow *]{m:\, v_a \rightarrow z} u_a \right) \] In step 8: \( v_a, u_b \not\Rightarrow r_a \) and \( v_a, u_b \Rightarrow r_a' \), where \( r_a' = \left( v_a \xleftrightarrow[f:\, u_a' \Leftarrow u_b]{m:\, v_a \rightarrow u_b} u_a' \right) \). Consequently, \( r_b \not\Leftarrow u_b, u_a' \).
To solve this, we have two solutions. The first is to let the mapping reuse \( r_a \) and not create \( r_a' \), which needs relaxing \( m(n_a) \) to be \( EI \), in which case we can keep \( f(n_b) \) as restrictive. The second solution is to keep \( n_a \) as restrictive and relax \( f(n_b) \) to tolerate receiving from \( u_a' \). In the second solution, there is a minor subtlety that needs to be handled: \( p_b \) has to be careful to keep sending to \( p_a \) on \( u_a \) despite the fact that it is receiving from \( u_a' \). Similarly, \( p_a \) should always send to \( p_b \) on \( u_b \) despite the fact it is receiving from \( u_b' \). That is an asymmetry that is not in general needed. C. Coverage of SHP Since \(|\tau| = 27\) types, we have \( 27 \times 28 / 2 = 378 \) distinct (unordered) combinations of NAT types of two peers. Using Theorem 6.1, we find that 186 combinations, i.e. 49.2% of the total number of possible ones, are traversable using the Simple Hole Punching approach. That said, this high coverage is totally orthogonal to how often one is likely to encounter combinations in the covered set in practice, which we discuss in our evaluation (Section IX-A). Traversable SHP combinations are shown in Figure 1 with label SHP(*). To cover the rest of the cases, we use port prediction, which enables a peer to punch a hole by sending to the opposite peer instead of \( z \); this makes it possible to tolerate more restrictive filtering and mapping policies, as explained below. VII. PREDICTION A. Prediction using Contiguity (PRC) The traversal process consists in the following steps: 1. \( p_a \) sends two consecutive messages: - from some \( v_a \) to \( z \) through \( n_a \) - from \( v_a \) to \( u_b^{\text{dum}} \), an arbitrary (dummy) endpoint of \( n_b \) 2. \( n_a \) creates the following two rules: - \( r_a' = (v_a \leftrightarrow u_a') \) and forwards to \( z \) - \( r_a = (v_a \leftrightarrow u_a) \) and forwards to \( u_b^{\text{dum}} \).
Actually, the whole point of sending to \( u_b^{\text{dum}} \) is to open \( u_a \) by sending to \( n_b \), while still being able to predict it at \( z \). 3. The messages are received as follows: a) \( z \) receives and consequently knows \( u_a' \), and additionally predicts \( u_a = u_a' + \Delta \), where \( \Delta \) is known from the discovery process. b) \( n_b \) drops the message, since no endpoint \( u_b^{\text{dum}} \) was ever bound. c) \( z \) informs \( p_b \) about \( u_a \) (Out-of-Band). 4. Steps 5–9 follow the same scheme as in simple hole punching.

Fig. 1. All possible distinct NAT type combinations for two peers a and b, with the technique needed to traverse each combination and X for un-traversable combinations. SHP(*), PRC(*) and PRP(*) stand respectively for Simple Hole Punching, Port Prediction using Contiguity and Port Prediction using Preservation. Combinations of NAT behaviors mandated by RFC 4787 are identified by the label BEHAVE in the table's legend.

**Port scanning.** The process is susceptible to failure if another peer \( p_a' \) happens by coincidence to send a packet between the two consecutive packets. For that, a technique called port scanning [6] is used, such that when \( p_b \) tries to connect to \( u_a \), \( p_b \) will try \( u_a + \Delta, u_a + 2\Delta, u_a + 3\Delta \), etc. until a reply is received. Some gateways might identify this as a malicious UDP port scan and block it, as is the case in some corporate firewalls. Port scanning may be used when \( p_b \) tries to connect to \( u_a \) where \( a(n_a) = PC \) only if \( m(n_b) < PD \), as shown in [6]. **B. Prediction using Preservation (PRP)** Another technique is to exploit the port-preservation allocation policy.
However, to do that, we assume that when a peer behind a NAT with a port-preservation allocation policy registers at \( z \), the peer supplies a pool of free candidate ports to \( z \). The main point here is to avoid conflicts with ports of other peers behind the same NAT. The rendez-vous server \( z \) is stateful regarding which ports are bound by each NAT, and chooses, from the pool of ports supplied by the peer, a port which is not already bound. 1) \( z \) chooses some arbitrary port \( \rho \) and tells \( p_a \) (Out-of-Band) to bind \( \rho \). 2) \( p_a \) sends from some \( v_a \) with \( p(v_a) = \rho \) to \( u_b^{\text{dum}} \) through \( n_a \). 3) \( n_a \) creates a new rule \( r_a = (v_a \leftrightarrow u_a) \) and forwards to \( u_b^{\text{dum}} \); since \( a(n_a) = PP \), \( p(u_a) = p(v_a) = \rho \). 4) \( z \) informs \( p_b \) about \( u_a \) (Out-of-Band). 5) Steps 5–9 follow the same scheme as in SHP. Note that the process is shorter than prediction by contiguity, and that \( z \) chooses the port for the peer behind the NAT instead of the NAT of the peer deciding it and \( z \) observing it. However, for the sake of reasoning, the two are equivalent, because what matters is what happens after the opposite peer learns about the punched port, irrespective of how the port was predicted. **C. Prediction-on-a-Single-Side Feasibility** **Theorem 7.1:** Prediction using contiguity or preservation on a single side is feasible for establishing direct communication between two peers \( p_a \) and \( p_b \) respectively behind \( n_a \) and \( n_b \) if: - **Condition 1:** \( \exists n_x \in \{n_a, n_b\} \) s.t. \( a(n_x) < RD \) and \( f(n_x) < PD \) - **Condition 2:** Either \( m(n_x) < PD \), or \( m(n_x) = PD \) and \( f(n_{\bar{x}}) < PD \). **Proof:** Similar to Theorem 6.1, we start with the most restrictive policies and relax until prediction is feasible.
The allocation policy of the side to be predicted (\( n_a \) in Sections VII-A and VII-B) cannot be random, because the whole idea of prediction relies on a predictable allocation policy; thus the needed relaxation is \( a(n_a) < RD \). In both prediction techniques, the packet from \( p_a \) punches a hole by sending to \( p_b \), in contrast to SHP, which punches by sending to \( z \). Nevertheless, it is sent to a dummy port of \( n_b \). After steps 5, 6: \[ r_a = \left( v_a \xleftrightarrow[f:\, u_a \Leftarrow u_b^{\text{dum}}]{m:\, v_a \rightarrow u_b^{\text{dum}}} u_a \right), \quad r_b = \left( v_b \xleftrightarrow[f:\, u_b \Leftarrow u_a]{m:\, v_b \rightarrow u_a} u_b \right) \] In step 7: \( r_a \not\Leftarrow u_a, u_b \); we have to relax the filtering at \( n_a \) to indulge the port difference from \( u_b^{\text{dum}} \), but host sensitivity is tolerable. The needed relaxation is \( f(n_a) < PD \), resulting into: \[ r_a = \left( v_a \xleftrightarrow[f:\, u_a \Leftarrow (h(u_b^{\text{dum}}),\, *)]{m:\, v_a \rightarrow u_b^{\text{dum}}} u_a \right) \] In step 8: the reasoning about relaxing the mapping of \( n_a \) or the filtering of \( n_b \) is identical to Theorem 6.1, except that host sensitivity is tolerable; thus either \( m(n_a) < PD \), or \( m(n_a) = PD \) is kept, in which case the needed relaxation is \( f(n_b) < PD \). D. Coverage of PRP & PRC PRP and PRC together cover another 18% of the combinations. That said, we can say that PRP is as good as SHP in terms of traversal time and success rate (see Section IX), which means that, in addition to the cases where PRP on a single side is used in Figure 1, we can also use PRP instead of SHP when the allocation policy is port preservation. VIII. Interleaved Prediction on Two Sides The remaining combinations are those not covered by SHP nor by single-side prediction. The final stretch is to do simultaneous prediction on both sides. However, this is a seemingly tricky deadlock situation, because every peer needs to know the port that will be opened by the other peer without the other peer sending anything. We solve it as follows.
**Interleaved PRP-PRP.** In this case, double prediction is actually very simple, because the rendez-vous server can pick a port for each side and instruct the involved peers to simultaneously bind it and start the communication process. **Interleaved PRP-PRC.** This case is also easily solvable, thanks to preservation, because \( z \) can inform the peer with a port-contiguity allocation policy about the specific endpoint of the opposite peer. The latter, in turn, will run a port prediction process using the obtained endpoint in the second consecutive message. **Interleaved PRC-PRC.** This one is the trickiest, and it needs a small modification in the way prediction by contiguity is done. The idea is that the two consecutive packets, the first to \( z \) and the second to the opposite peer, cannot be sent immediately after each other. Instead, both peers are commanded by \( z \) to send a packet to \( z \) itself. From that, \( z \) deduces the ports that will be opened on each side in the future, and informs both peers about the opposite peer's predicted endpoint. Both peers, in their turn, send a punching packet to each other. The problem with this scheme is that there is more time between the consecutive packets, which makes it more susceptible to the possibility of another peer behind either of the NATs sending a packet in between. As in the case of single PRC, port scanning is the only resort, but in general this combination has a lower success rate compared to single PRC (see Section IX). For our reasoning, we will work on the last one (PRC-PRC), since it is a harder generalization of the first two. A.
Traversal Process 1) \( z \) tells \( p_a \) & \( p_b \) to start prediction (Out-of-Band). 2) \( p_a \) & \( p_b \) both send to \( z \) through \( n_a \) & \( n_b \) respectively, resulting in the new rules \( r_a' = (v_a \leftrightarrow u_a') \), \( r_b' = (v_b \leftrightarrow u_b') \). 3) \( z \) receives both packets, observing \( u_a' \) & \( u_b' \) and deducing \( u_a = u_a' + \Delta \) & \( u_b = u_b' + \Delta \). 4) \( z \) informs \( p_a \) & \( p_b \) about \( u_b \) & \( u_a \) respectively (Out-of-Band). 5) \( p_a \) sends to \( u_b \) through \( n_a \), and \( p_b \) sends to \( u_a \) through \( n_b \). 6) \( n_a \) receives and forwards to \( v_a \), and \( n_b \) receives and forwards to \( v_b \). A race condition, where step 6 for one of the peers happens before the opposite peer starts to run step 5, can take place, resulting in a packet drop. However, the dropped packet opens the hole for the opposite peer, and retrying the send is enough to take care of this issue. B. Interleaved Prediction Feasibility Theorem 8.1: Interleaved prediction is feasible for establishing direct communication between two peers \( p_a \) and \( p_b \) respectively behind \( n_a \) and \( n_b \) if both \( a(n_a) < RD \) and \( a(n_b) < RD \). Proof: Similar to Theorem 6.1, we start with the most restrictive policies and relax until prediction is feasible. Since we need to predict both sides, we need \( a(n_a) < RD \) & \( a(n_b) < RD \). After step 5 in Section VIII-A, we have: \[ r_a = \left( v_a \xleftrightarrow[f:\, u_a \Leftarrow u_b]{m:\, v_a \rightarrow u_b} u_a \right), \quad r_b = \left( v_b \xleftrightarrow[f:\, u_b \Leftarrow u_a]{m:\, v_b \rightarrow u_a} u_b \right) \] In step 6, we have \( r_a \Leftarrow u_a, u_b \) and \( r_b \Leftarrow u_b, u_a \), without the need for any relaxation on the filtering or the mapping of either side. C.
Interleaved Prediction Coverage The interleaved prediction covers another 11.9% of the combinations, namely the ones shown in Figure 1, leaving 20.6% of the cases untraversable. That is, approximately 79.4% of all NAT type combinations are traversable, and for each combination we know which technique to use. More importantly, not all of them have the same likelihood of being encountered, which we discuss in the next section. That said, it is worth mentioning that there is a technique in [9] which performs a brute-force search on all possible ports after reducing the search space using the birthday paradox; we ignored it due to its low success probability, high traffic, and long time requirements. ### IX. Evaluation Apart from the reasoning above, we have done a sanity check on our logic using our emulation platform [10]. That is, we wrote our own NAT boxes, which behave according to the semantics defined in Section III. We also implemented the Rendez-Vous server and nodes that are capable of performing all the traversal techniques in Section V. For each case in Figure 1, we ran the suggested traversal technique and made sure direct communication is indeed achievable. Real-life evaluation was needed to gain insights into other aspects, like the probability of encountering a given type, the success rates of the traversal techniques, and the time needed for the traversal process to complete. #### A. Distribution of Types We wanted to know how likely it is to encounter each of the types in $\tau$. We have collected cumulative results for peers who have joined our network over time. As shown in Figure 2: 1. We encountered 13 out of the 27 possible types; 2. We found that $(m = EI, f = PD, a = PP)$ is a rather popular type (approx. 37% of all encountered types), which is fortunate, because port preservation is quite friendly to deal with and it comes with a very relaxed mapping; 3.
About 11% are the worst kind to encounter, because when two peers of this type need to talk, interleaved prediction is needed, with a shaky success probability. #### B. Adoption of BEHAVE RFC By looking at each policy alone, we can see to what extent the recommendations of the BEHAVE RFC [8] $(f = EI/HD, m = EI)$ are adopted. As shown in Table I, for filtering, the majority are adopting the policy discouraged by the RFC, while for mapping the majority follow the recommendation. For allocation, the RFC did not make any specific relevant recommendation. The percentage of NATs following both recommendations was 30%. #### C. Success Rate Given the set of peers present in the network at one point in time, we conduct a connectivity test where all peers try to connect to each other. We group the results by traversal technique; e.g., SHP is applicable for 186 combinations, so we average the success rate over all such combinations, and the whole process is repeated a number of times. We have found (Figure 3), as expected, that SHP is rather dependable, as it succeeds 96% of the time. We also found that PRP is as good as SHP, which is quite positive given that, as shown in the last section, the probability of occurrence of preservation is quite high. Interleaved PRP-PRP is also rather good, with a slightly worse success rate. The three remaining techniques involving PRC in one way or another cause the success rate to drop significantly, especially for PRC-PRC, mainly because of the additional delay for interleaving. #### D.
Time to traverse When it comes to the time needed for the traversal process to complete (Figure 4), we find two main classes: SHP and PRP in one class and PRC in another; even when we do PRC-PRP, it is faster than PRC alone, because the number of messages is smaller.

Table I. DISTRIBUTION OF ENCOUNTERED NAT POLICIES

| Policy | EI | HD | PD |
|---|---|---|---|
| Mapping | 80.21% | 0% | 19.79% |
| Filtering | 13.54% | 17.45% | 69.01% |

| Policy | PP | PC | RD |
|---|---|---|---|
| Allocation | 54.69% | 23.7% | 21.61% |

X. Conclusion & Future Work In this paper, we have presented our experience with trying to find a comprehensive analysis of which combinations of NAT types are traversable. We have done that using semi-formal reasoning that covers all cases, and we have provided slightly augmented versions of the well-known traversal techniques and shown which ones are applicable for which combinations. We have shown that about 80% of all possible combinations are traversable. Using our deployment base for P2P live streaming, we have shown that only 50% of all possible types are encountered in practice. We have also reported our findings on the success probability and the time of traversing the different combinations. For future work: a) Modeling: we would like to enrich the model to capture real-life aspects like the expiration of NAT rules, multiple levels of NAT, subtleties of conflicts between many peers behind the same NAT, NATs that use different policies in different situations, and support for uPnP and TCP; b) Real-life evaluation: more insight into the trade-off between success probability and timing, and preventing the techniques from being identified as malicious actions by some corporate firewalls; c) Dissemination: releasing our library and simulator as open-source for third-party improvement and evaluation. XI.
Acknowledgments We would like to thank all members of Peerialism's development team for their help and collaboration on the implementation of our NAT traversal techniques, in particular Magnus Hedbeck for his patience and valuable feedback. The anonymous reviewers of ICCCN provided a really inspiring set of comments that helped us improve the quality of this paper. REFERENCES APPENDIX Traversal Sequence Diagrams - Simple Hole-Punching (SHP) - Prediction using Preservation (PRP) - Prediction using Contiguity (PRC) - Interleaved Prediction (PRC-PRC)
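To complement the sequence diagrams, the SHP exchange can be sketched as a toy simulation. The `NAT` class below is an illustrative approximation of the Section III semantics (rule filters are fixed at creation time; allocation policies, retries, and the asymmetric bookkeeping of Section VI-B are omitted) and is not the emulation platform used in the evaluation:

```python
from itertools import count

EI, HD, PD = "EI", "HD", "PD"

class NAT:
    """Toy NAT gateway: a rule binds a private endpoint v to a public
    endpoint u and remembers the original destination d, which drives
    the mapping (reuse) and filtering (accept) decisions."""

    def __init__(self, host, mapping, filtering):
        self.host, self.m, self.f = host, mapping, filtering
        self.rules = []                 # entries: (v, u, d)
        self._ports = count(40000)      # fresh public ports

    def send(self, v, d):
        """Outbound packet from v to d; returns the public endpoint used."""
        for rv, ru, rd in self.rules:
            if rv != v:
                continue
            if rd == d:                          # exact match: always reuse
                return ru
            if self.m == EI:                     # reuse for any destination
                return ru
            if self.m == HD and rd[0] == d[0]:   # reuse for the same host
                return ru
        u = (self.host, next(self._ports))       # otherwise bind a new rule
        self.rules.append((v, u, d))
        return u

    def receive(self, u, s):
        """Inbound packet from s to public endpoint u; None if filtered."""
        for rv, ru, rd in self.rules:
            if ru != u:
                continue
            if self.f == EI or (self.f == HD and s[0] == rd[0]) or s == rd:
                return rv
        return None

def shp(na, nb):
    """Steps 1-9 of Section VI; reports delivery of steps 7 and 9."""
    z = ("Z", 3478)
    va, vb = ("a", 1000), ("b", 2000)
    ua = na.send(va, z)            # steps 1-3: pa registers, z learns ua
    ub = nb.send(vb, ua)           # steps 4-6: pb punches toward ua
    step7 = na.receive(ua, ub)     # step 7: may be dropped by na's filter
    ua2 = na.send(va, ub)          # step 8: rule reused or a new one made
    step9 = nb.receive(ub, ua2)    # step 9: may be dropped by nb's filter
    return step7 is not None, step9 is not None

# Fully Endpoint-Independent NATs: both directions get through.
print(shp(NAT("A", EI, EI), NAT("B", EI, EI)))   # (True, True)
# Fully Port-Dependent NATs: SHP fails in both directions.
print(shp(NAT("A", PD, PD), NAT("B", PD, PD)))   # (False, False)
```

The two printed cases match the extremes of Theorem 6.1: an all-EI pair needs no relaxation, while an all-PD pair fails at both steps 7 and 9.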
Dynamic Control of CPU Cap Allocations in Stream Processing and Data-Flow Platforms M. Reza HoseinyFarahabady*, Ali Jannesari†, Zahir Tari‡, Javid Taheri§, Albert Y. Zomaya¶ *†‡¶The University of Sydney, Center for Distributed & High Performance Computing School of Computer Science, New South Wales, Australia †Department of Computer Science, Iowa State University, IA, USA ‡RMIT University, School of Science, Melbourne, VIC, Australia §Karlstad University, Sweden Email: *reza.hoseiny@sydney.edu.au, †jannesar@iastate.edu, ‡zahir.tari@rmit.edu.au, §javid.taheri@kau.se, ¶albert.zomaya@sydney.edu.au Abstract—This paper focuses on the Timely dataflow programming model for processing streams of data. We propose a technique for defining CPU resource allocations (i.e., CPU capping), with the goal of improving response-time latency for applications with different quality of service (QoS) levels that run concurrently in a shared multi-core computing system under unknown and volatile demand. The proposed solution predicts the expected performance of the underlying platform using an online approach based on queuing theory and adjusts the corrections required in the CPU allocation to achieve the most optimized performance. The experimental results confirm that the measured performance of the proposed model is highly accurate while it takes into account percentiles of the QoS metrics. The theoretical model used for the elastic allocation of CPU shares in the target platform takes advantage of design principles from model predictive control theory and dynamic programming to solve an optimization problem.
While the prediction module in the proposed algorithm tries to predict the temporal changes in the arrival rate of each data flow, the optimization module uses a system model to estimate the interference among collocated applications by continuously monitoring the available CPU utilization in individual nodes, along with the number of outstanding messages in every intermediate buffer of all TDF applications. The optimization module eventually performs a cost-benefit analysis to mitigate the total number of QoS violation incidents by assigning the limited CPU shares among collocated applications. The proposed algorithm is robust (i.e., its worst-case output is guaranteed for arbitrarily volatile incoming demand coming from different data streams), and if the demand volatility is not large, the output is optimal, too. Its implementation is done using the TDF framework in Rust for distributed and shared-memory architectures. The experimental results show that the proposed algorithm reduces the average and p99 latency of delay-sensitive applications by 21% and 31.8%, respectively, while reducing the number of QoS violation incidents by 98% on average. Index Terms—Dynamic CPU Resource Allocation, Timely Data-Flow Architecture, Scalable Data-Stream Processing I. INTRODUCTION As a recent big data technology, stream processing is a computer programming paradigm that allows applications to perform user queries over a continuous data stream within a short time period, as the corresponding events are occurring. There are multiple cases¹ in which the value of an organization's data comes from analyzing, understanding, and responding to it, and this value degrades with time. In such environments, it is critical to have instant access to the analyzed data for taking timely and correct actions.
While an organization might be tempted to use traditional databases and batch processing technologies, such as the Map-Reduce programming model or Apache Hadoop, the most suitable existing tools for quickly responding to generated data and handling use cases requiring immediate actions are to be found in streaming analytics engines. In a sophisticated platform, stream processing applications can use multiple computational units, as a form of parallel/concurrent processing, to be executed on each element in the data stream. Timely dataflow is a general-purpose paradigm for implementing distributed streaming computations that may be composed of several nested iterative computational blocks to be run continuously over incoming data [1]. The framework arose from work carried out at Microsoft Research in 2013, where a group of researchers worked on a method to structure data processing computations across scalable and distributed platforms [1]. Unlike other streaming platform engines, such as MapReduce [2], Apache Flink [3], Apache Spark [4], Google MillWheel [5], Microsoft Sonora [6], and Apache Storm [7], the timely dataflow (TDF) paradigm can provide expressive computations, iterative computations (with possibly many parallel tasks), and high performance at the same time [1]. In particular, existing platforms only allow developers to write either restricted applications for parallel execution or, if they support expressive computations, the result is inefficient execution due to the synchronization that must be manually regulated among parallel threads [8], [9]. On the contrary, the TDF paradigm provides internal mechanisms for coordinating the fine-grained synchronous execution of parallel tasks. Such a mechanism is implemented using a logical time-stamp model attached to each data element.
It enables the new paradigm to support developing applications that comprise multiple stateful, iterative, and incremental computations [1]. In this paper, we consider a class of dynamic CPU allocation problems as follows. We are given a set of TDF applications,¹ each with a predefined maximum tolerable delay to accomplish its computation over incoming data. (¹ Such as stock markets, manufacturers, patient monitoring, and surveillance.) Each TDF application receives multiple streaming data-flows over time, each with a rate that is unknown to the service provider and can change randomly over time. A CPU allocation is an assignment of core capacity in the target system to each application by applying specific policies (e.g., prioritization, defining CPU shares, CPU reservations, and limiting settings if they are supported in the operating system layer). For example, using Linux cgroups, one can limit the access of a given process to a single CPU to 0.2 seconds in a 1 second window. We developed a low-overhead Model Predictive Control (MPC) algorithm for the dynamic CPU allocation problem of disparate TDF applications, each with its own predefined performance level. The main feature of the proposed algorithm is considering QoS enforcement levels and continuously monitoring the available CPU share in each node when making CPU cap decisions. At each controlling interval, the algorithm estimates two important metrics: the buffer length per computational unit for every TDF application, and the average arrival rate of the incoming flow to each buffer. The controller uses a light-weight mathematical model for such estimations, as the actual measurement of these parameters is prohibitively expensive to implement in a real platform. When designing the CPU cap controller, both estimation values are considered to be inaccurate (model noise).
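The cgroup-based CPU capping mentioned earlier (0.2 s of CPU time per 1 s window) can be made concrete with a small helper. The file names below are the standard cgroup-v1 CPU bandwidth knobs; the helper and the `tdf_app` cgroup name are illustrative only, since the paper's controller computes the caps dynamically:

```python
def cfs_cap(cpu_share, period_us=1_000_000):
    """Translate a CPU cap (fraction of one core per period) into the
    values written to cpu.cfs_period_us and cpu.cfs_quota_us."""
    if cpu_share <= 0:
        raise ValueError("cpu_share must be positive")
    return period_us, int(cpu_share * period_us)

# The example from the text: 0.2 s of CPU time per 1 s window.
period, quota = cfs_cap(0.2)
print(period, quota)  # 1000000 200000

# Applying it to a cgroup would then be (requires privileges):
#   echo 1000000 > /sys/fs/cgroup/cpu/tdf_app/cpu.cfs_period_us
#   echo 200000  > /sys/fs/cgroup/cpu/tdf_app/cpu.cfs_quota_us
```

Values above one full core (e.g., `cfs_cap(1.5)`) are also valid, since the quota may exceed the period on multi-core systems.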
The controller uses an optimization module to dynamically find a CPU cap decision that minimizes the number of QoS violation incidents among all consolidated TDF applications over a finite-time horizon. The main reason that a service provider uses a consolidation strategy is to increase the utilization level of computing nodes. However, there is a consequential price to pay for such consolidation strategies, particularly when applications fiercely compete with each other for the CPU time of a server node [10], [11]. Such competition can cause significant performance degradation for almost all consolidated applications in a shared environment [12], which is highly undesirable in practical situations, particularly when there are real-time processing requirements. We evaluated the performance of the proposed solution against a number of empirical resource allocation strategies, including weighted round-robin, fixed priority scheduling, and Class-Based Weighted Fair Queueing (CBWFQ). Experiments have been conducted by running an iterative generic-join algorithm, developed using the TDF paradigm, under various workload intensities (with arrival rates of up to 5000 data items per second) with respect to two major performance metrics: response time and QoS violation rate. The experimental results confirm the effectiveness of the proposed algorithm under heavy workload conditions. In particular, the controller can decrease the average and p99 latency of applications in the highest QoS class by up to 21% and 31.8%, respectively, during peak periods in comparison with the output of the CBWFQ policy (which achieves the best result among the other heuristics). The rest of this paper is organized as follows. Section II concisely provides the reader with the essential background needed to understand the TDF programming model. Section III gives insights into the proposed controller.
Section IV summarizes the experimental evaluation results, followed by a comparison with related work, which is presented in Section V. Finally, Section VI draws some conclusions. II. BACKGROUND Designing scalable and fault-tolerant distributed systems for running parallel programs that process streaming data has been receiving increasing attention. This includes dealing with the issues that arise in processing data-flows in near real-time fashion. We provide brief background information about the core concepts of a recent model in this domain, called timely data-flow (TDF). First introduced by Microsoft researchers in 2013 [1], it can best be described as a run-time paradigm for implementing and executing low-latency cyclic data-flow computational applications. Its promising feature lies in its capability to scale the same application from a single thread on an ordinary computer up to distributed execution across a cluster of a hundred server nodes [13]. Unlike other distributed stream-based data processing platforms, such as MapReduce [2], Apache Spark [4], and Apache Storm [7], which require explicit synchronization and communication methods among processes and threads to provide efficient execution of parallel programs, TDF provides developers with both high-level expressive computation and high performance, allowing them to implement sophisticated streaming computations with iterative control flow. The TDF kernel, with compiler assistance, can deliver an executable with only minimal synchronization overhead among processes and threads [13]. This paradigm of data-parallel dataflow targets a class of streaming processing problems in which each computational operator (such as filter, map, exchange, join, and reduce) can partition its input data elements among a number of independent working threads.
The instructive rules inside each thread are derived from a set of logic directives defined in the form of closures (or methods) [13]. The TDF platform can effectively assist developers to describe the entire computation/application as a set of instructions to continuously run as data-flow computations. Such high-level expressiveness can immensely simplify developing and testing applications that consist of several stages; in a traditional parallel programming model, such as MPI (Message Passing Interface), the developer is responsible for handling all forms of communication patterns and/or explicit synchronization mechanisms among concurrent/parallel threads and processors. By capturing the instructions for the expressive computation, the TDF engine can seamlessly execute the same application, as a compiled executable file, with multiple working threads or processors on a single computer node, or by launching several threads/processes across a cluster with multiple nodes. Another key aspect of the TDF programming model is its ability to perform iterative, stateful, and incremental computations over a continuous stream of data. To achieve this, the TDF paradigm introduces a new form of logical timestamp that is attached to each data element as a lightweight mechanism for supporting parallel, iterative, and incremental processing in a distributed environment. By observing such time stamps, each parallel worker knows exactly the number of outstanding data items that are still live – and might be processed by another worker residing on a remote computer – and need to be delivered for further processing at a future time [1]. The DCG structure can consist of three system-provided vertices, namely loop ingress, loop egress, and loop feedback, to construct a computation that occurs within a nested loop context.
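The progress-tracking idea described above can be illustrated with a toy model. The sketch below is not the Rust implementation of timely dataflow; it is a minimal, hypothetical Python illustration of how counting outstanding messages per logical timestamp lets a worker determine which timestamps can no longer receive data. The class and method names are our own invention for illustration.

```python
from collections import Counter

class ProgressTracker:
    """Toy model of timely-dataflow progress tracking: count outstanding
    messages per logical timestamp. A timestamp is complete (no more data
    can arrive for it) once its outstanding count drops to zero."""

    def __init__(self):
        self.outstanding = Counter()

    def produced(self, timestamp, n=1):
        # A vertex emitted n messages bearing this logical timestamp.
        self.outstanding[timestamp] += n

    def consumed(self, timestamp, n=1):
        # A worker finished processing n messages with this timestamp.
        self.outstanding[timestamp] -= n
        if self.outstanding[timestamp] == 0:
            del self.outstanding[timestamp]

    def frontier(self):
        # Earliest timestamp that may still receive data, or None if drained.
        return min(self.outstanding) if self.outstanding else None

tracker = ProgressTracker()
tracker.produced((0, 1), 3)   # (epoch 0, loop iteration 1): three messages
tracker.produced((1, 0), 1)
tracker.consumed((0, 1), 3)   # all epoch-0 messages processed
print(tracker.frontier())     # -> (1, 0): epoch 0 is now closed
```

In the real system this bookkeeping is distributed across workers, but the principle is the same: once no outstanding messages precede a timestamp, downstream operators may safely finalize their state for it.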
In particular, TDF offers a set of controlling vertices that specify the organization of the computation in a single loop context and embrace the three advantages of prevalent models developed in the past for processing large amounts of streaming data. To create a TDF application, the developer team needs to express the entire set of computational units by precisely defining a directed cyclic graph (DCG) as the underlying dataflow graph, which describes how the data flows to and from each operator. Correspondingly, every component within a nested loop needs to include at least one loop feedback vertex. Developers have to obey this structure in order to introduce a loop context, if one exists. The structural graph allows the TDF kernel to efficiently track the set of data records that might flow through the computational components at any given time. Each computational unit may contain arbitrary code and methods to modify the state of input data messages. Such units can be used to exchange logically time-stamped messages along edges to/from other components using user-defined stateful operators. III. THE DYNAMIC CPU-CAP CONTROLLER This section provides details of the proposed CPU cap controller in a TDF platform. The problem becomes immensely challenging when multiple TDF applications with different QoS enforcement levels concurrently run in a distributed platform with shared computing resources. The QoS requirements are normally expressed as a set of performance specifications that define the quality of service to be fulfilled. Nevertheless, translating the QoS requirements, e.g., the average response time, to an accurate resource allocation decision, such as a CPU cap, is not trivial.
The idea is to dynamically postpone the decision about CPU cap adjustment for each worker to run-time, where the controller can estimate the following performance metrics: (1) the available CPU capacity in each machine, and (2) the QoS violation rate per TDF application. To estimate the average end-to-end delay that the execution of a query/operation on an incoming stream might take, we leverage the fact that the number of non-processed messages in the buffer is a good approximation of the average queuing time of the associated computational unit, and hence of its response time. The proposed solution predicts the future arrival rate of data-flow per application over a finite-time horizon, and then dynamically makes CPU cap decisions based on both the current and the predicted future states. We apply the design principles of model predictive control theory to provide robust performance despite system modeling errors and inaccuracies in the prediction of incoming traffic rates. A model predictive controller solves an open-loop optimal control problem to obtain a sequence of actions to be applied to the underlying system at each sampling time \( T_k \in \{ T_0, T_1, \cdots \} \). Such a calculation is continuously repeated at the next sampling time over a receding horizon [18].
At each \( T_k = T_{k-1} + \Delta T \), where \( \Delta T \) is a fixed length, the controller measures a set of performance metrics, denoted by \( y_{\tau} \), to determine a vector of tracking errors by comparing each metric with a corresponding envisioned value (also known as the reference trajectory or set-point, denoted by \( r_{\tau} \)). The reference trajectory defines the desirable value for each target performance metric that the underlying system needs to follow after a disturbance occurs. The controller needs to retain the error values, computed as \( e_{\tau} = y_{\tau} - r_{\tau} \), within an acceptable range. The controllable variable is the amount of CPU share assigned to each computational unit, denoted by \( C_{i,\tau} \), across the entire platform. The controller adopts a simple technique to gradually diminish the value of \( e_{\tau} \) within the upcoming sampling epochs, i.e., it is not necessary for the underlying system to be driven back to the set-point reference trajectory immediately. We adjust the CPU cap in such a way that the error value in the next \( \tau' \) steps vanishes exponentially. This can be achieved by applying the controlling actions in several steps instead of enforcing the complete decision into the system at once. Formally, the error decays as \( e_{\tau+\tau'} = e^{-\tau' \Delta T/T_{ref}} \, e_{\tau} \), where \( \Delta T \) is the sampling interval, and \( T_{ref} \) is a metric called the response speed factor, which provides a mechanism for adjusting the maximum number of steps within which the system is allowed to fade the error away. In this formula, the ratio \( \Delta T/T_{ref} \) can mitigate any negative impact caused by inaccuracy in either the prediction or the developed system models [19].
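The decay formula above can be checked numerically. The short sketch below simply evaluates \( e_{\tau+\tau'} = e^{-\tau' \Delta T/T_{ref}} e_{\tau} \) for a ratio of 1/3 (the setting used as an example in this paper); the function name and initial error value are illustrative assumptions.

```python
import math

def error_trajectory(e0, ratio, steps):
    """Exponentially decaying error reference: e_{t+k} = exp(-k * ratio) * e_t,
    where ratio = dT / T_ref (the response speed factor setting)."""
    return [e0 * math.exp(-k * ratio) for k in range(steps + 1)]

# With ratio = 1/3, after three sampling epochs the tracking error has
# shrunk by a factor of e^{-1} (roughly a 63% reduction).
traj = error_trajectory(e0=12.0, ratio=1 / 3, steps=3)
```

This shows why a smaller ratio spreads the correction over more steps: the controller trades immediate convergence for robustness to model and prediction noise.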
For example, by setting this ratio to 1/3, the controller forces the output vector (\( y \)) to smoothly converge toward the reference trajectory over the upcoming three epochs without causing a large over- or under-shoot of the desirable output value. The controller has an internal system model to predict the future behavior over a predefined prediction horizon. At any given time \( \tau \), the controller uses two measured performance metrics: (1) the buffer length of each computational unit, denoted by \( |B_{\tau}| \), and (2) the average arrival rate to each such buffer. The controller also exempts the monitoring module from gathering the arrival flow rates of data items to each buffer and from estimating the required workload of each computational unit (i.e., the required CPU share), a constraint imposed by some previous works, e.g., [20]. Such monitoring activities are prohibitively expensive to implement in a real-time system. **Semantic of QoS assurance.** We assume that there are exactly \( Q \) different QoS classes that application owners can choose from. A QoS class is effectively a value pair, denoted by \( (\omega_{q}^{*}, \nu_{q,\Delta T}) \), where \( \omega_{q}^{*} \) is the maximum delay that an application in class \( q \) can tolerate to collect the result of a query over some incoming data elements. In turn, \( \nu_{q,\Delta T} \) reflects the required fraction of incoming data flows of each application in class \( q \) that must meet this deadline during an arbitrary interval of size \( \Delta T \). For example, in a scenario with three QoS classes (\( |Q| = 3 \)) and the values \( \nu_{q} \in \{0.99, 0.85, 0.65\} \) for \( q = 1, 2, 3 \), the overall delay for collecting the result of a query for applications in the first QoS class must not exceed \( \omega_{1}^{*} \) for 99% of the incoming data flows during any arbitrary interval. Otherwise, it is considered a QoS violation incident.
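Under the reading above (with \( \nu_q = 0.99 \) meaning 99% of flows must meet the deadline), the per-interval check can be sketched as follows. This is a hypothetical helper, not part of the authors' implementation; the function name and argument conventions are ours.

```python
def qos_satisfied(delays, omega, nu):
    """Check one interval: at least a fraction `nu` of the data items must
    finish within the class deadline `omega` (e.g., nu=0.99 requires 99%
    of flows to meet the deadline)."""
    if not delays:
        return True  # nothing arrived in this interval, nothing to violate
    on_time = sum(1 for d in delays if d <= omega)
    return on_time / len(delays) >= nu

# Four of five delays are within omega=10, so a class with nu=0.8 is satisfied.
print(qos_satisfied([3, 5, 7, 12, 4], omega=10, nu=0.8))  # -> True
```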
**System model.** To control the buffer length of each computational unit, we use a well-known formula from queuing theory, the Allen-Cunneen approximation of the \( G/G/M \) queue [21], which assumes general arrival and general service patterns for a queuing system. This approximation imposes a low computational overhead to predict the average waiting time of non-processed data elements (\( \hat{W}_{M} \)) in a \( G/G/M \) queue. It can be formulated as follows: \[ \hat{W}_{M} = \frac{P_{cb,M}}{\mu M (1 - \rho)} \left( \frac{C_{D}^{2} + C_{S}^{2}}{2} \right), \] where \( C_{D} = \sigma_{D}/E_{D} \) is the coefficient of variation of the inter-arrival time, \( C_{S} = \sigma_{S}/E_{S} \) is the coefficient of variation of the service time, and the term \( \frac{C_{D}^{2} + C_{S}^{2}}{2} \) is often referred to as the stochastic variability of the queue. The term \( P_{cb,M} \) is the probability that all workers in the queuing system are busy (i.e., that the waiting time of a newly arrived data element is above zero) [21]. It is worth mentioning that, although the Allen-Cunneen formula was originally derived using computational estimation techniques without a formal proof, it often gives a highly accurate approximation of the average waiting time of customers in a general \( G/G/M \) queue. As shown by Tanner in [22], its value is within 10% of the actual values in most scenarios. **Prediction module.** This module provides an estimation of the incoming traffic rates for every computational unit that is directly connected to an external source. The result is used by the optimization module for the final resource-sharing decision. In particular, at any sampling interval \( \tau \) and for any data-flow \( s \), the prediction module estimates the next arrival rate values of that data-flow, denoted by \( \lambda_{\tau+\kappa,s} \), across the next \( \kappa > 0 \) steps.
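Equation (1) above is cheap to evaluate. The sketch below implements it, using the exact Erlang-C probability for the \( P_{cb,M} \) term; this is one common instantiation of Allen-Cunneen, and the function names are our own. For \( M = 1 \) with exponential arrivals and service (\( C_D = C_S = 1 \)), it reduces to the exact M/M/1 waiting time, a useful sanity check.

```python
import math

def erlang_c(m, rho):
    """Exact probability that all m servers are busy (Erlang-C),
    used here as the P_{cb,M} term of the Allen-Cunneen approximation."""
    a = m * rho  # offered load
    term = a**m / (math.factorial(m) * (1 - rho))
    denom = sum(a**k / math.factorial(k) for k in range(m)) + term
    return term / denom

def allen_cunneen_wait(lam, mu, m, c_a, c_s):
    """Approximate mean waiting time in a G/G/M queue, per Eq. (1):
    W ~= P_{cb,M} / (mu * M * (1 - rho)) * (C_a^2 + C_s^2) / 2."""
    rho = lam / (m * mu)
    assert 0 < rho < 1, "queue must be stable"
    return erlang_c(m, rho) / (mu * m * (1 - rho)) * (c_a**2 + c_s**2) / 2
```

For example, `allen_cunneen_wait(0.5, 1.0, 1, 1.0, 1.0)` returns 1.0, matching the exact M/M/1 result \( \rho / (\mu(1-\rho)) \) at \( \rho = 0.5 \).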
If the probability distribution of \( \lambda \) is known in advance, we can simply apply well-established stochastic techniques over the recent observations to project the future values. Otherwise, an estimation model such as time-series analysis [23], the Kalman filter [24], or the autoregressive integrated moving average (ARIMA) model [25] can be used. We use the ARIMA model in this study; it is an effective tool for forecasting a time series from its past observations. **Optimization module.** The controller determines whether the total CPU cap requests collected from all computational units can be satisfied by summing them up and comparing the sum with the total available CPU capacity in the cluster. In a multi-class service, the controller needs to allocate the CPU cap shares among different QoS classes in proportion to the performance target enforced by each class. In case of CPU scarcity, however, the optimization module performs a cost-benefit analysis to maximize the best interest of the service provider (by allocating CPU shares to applications with the highest priority). The reward function represents the gain of the service provider when satisfying the requested CPU share of applications belonging to a particular QoS class. We model this allocation problem as a classic discrete budgeting problem, which has a fast solution based on dynamic programming [26]. Such a model allows the controller to prioritize demands coming from applications with the highest QoS requirements, aiming to meet the requested performance specifications when the CPU share is limited. In the classic discrete budgeting problem, it is assumed that a project manager needs to allocate a budget of size \( R \) to a series of projects, where each allocation can yield a certain amount of profit for the firm.
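To give a concrete feel for one-step-ahead rate prediction, the sketch below uses an exponentially weighted moving average. This is a deliberately simpler stand-in for the ARIMA predictor the paper actually uses (ARIMA additionally models trend and moving-average terms); the function name and smoothing constant are illustrative assumptions.

```python
def ewma_forecast(history, alpha=0.5):
    """One-step-ahead arrival-rate forecast via an exponentially weighted
    moving average -- a lightweight stand-in for an ARIMA predictor.
    `alpha` weights recent observations more heavily as it approaches 1."""
    if not history:
        raise ValueError("need at least one observation")
    level = history[0]
    for x in history[1:]:
        level = alpha * x + (1 - alpha) * level
    return level

# A steady stream of 5,000 items/s is forecast to stay at 5,000 items/s.
print(ewma_forecast([5000.0] * 10))  # -> 5000.0
```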
For our problem, we use the notation \( C_{c_{i},\tau}^{*} \) to indicate the desirable CPU cap demanded by a computational unit \( c_{i} \) at any given time \( \tau \), and the symbol \( R_{\tau} \) to denote the whole available computing capacity in the cluster. Accordingly, we can define a contribution reward function, denoted by \( C_{c_{i}}(r_{c_{i}}) \), to represent the reward received by the service provider if it assigns \( r_{c_{i}} \) amount of CPU to the computational unit \( c_{i} \). One possible reward function is: \[ C_{c_{i}}(r_{c_{i}}) = I(q_{c_{i}}) \times (r_{c_{i}} - C_{c_{i}}^{*}), \] where \( q_{c_{i}} \) is the QoS class that \( c_{i} \) belongs to, and \( I(q_{c_{i}}) \) is a constant weight representing the importance of each QoS class. A very common model for discrimination in a multi-class platform is the relative differentiated service model proposed by [27]. Based on this model, a simple policy specifies that the relative importance of each QoS class must relate to the desired target performance, i.e., \( T_{q_i} = r_{q_i} / \sum_{j=1}^{Q} r_{q_j} \), where \( Q \) denotes the number of QoS classes and \( r_{q_i} \) is the desired performance target of each application in class \( q_i \). This metric determines how each QoS class performs relative to the other classes. In case of CPU scarcity, we need to maximize the predefined reward function as follows: \[ \max_r \sum_{c_i} C_{c_i}(r_{c_i}), \] subject to the constraints \( r_{c_i} \leq C_{c_i}^* \) and \( \sum_i r_{c_i} = R_\tau \). We use a method based on dynamic programming to find a fast solution for this optimization problem. Assuming that \( r_{c_i} > 0 \) can only take discrete values, such as \{10\% \ldots 100\%\} of a CPU core capacity, we first find the partial contribution of having exactly \( R_\tau \) CPU capacity allocated to the units \( c_i \) with \( i \geq \kappa \).
We denote such a partial assignment by \( V_\kappa(R_\tau) \). By recursively solving the following Bellman equation, we can find the exact optimal value of Equation (3): \[ V_\kappa(R) = \max_{0 \leq r_{c_\kappa} \leq R} \left( C_{c_\kappa}(r_{c_\kappa}) + V_{\kappa+1}(R - r_{c_\kappa}) \right), \] where the base case is \( V_n(R) = \max_{0 \leq r_{c_n} \leq R} C_{c_n}(r_{c_n}) \), \( \forall 0 \leq R \leq R_\tau \), and \( n \) denotes the total number of computational units (see [26] for a formal proof). **Micro-architecture level interference.** While workload consolidation remains one of the best solutions to the problem of low resource utilization in large-scale data-centers, contention among consolidated workloads for CPU cache and memory bandwidth remains a major impediment to devising an effective consolidation method [28]. Some authors suggested that the micro-architecture level interference among collocated applications can be detected by performing some form of off-line profiling over the target applications [32]. However, applying such off-line profiling techniques exhibits at least two major shortcomings, as also reported by previous studies such as [33]. First, it is not always practical to perform such off-line profiling over the submitted applications in real scenarios, particularly if they are constantly submitted by end-users. Second, the traits of a single application can change significantly over its execution time, which makes the result of off-line profiling useless for live consolidation decisions. We take a different strategy, based on a method initially proposed by [34], for quantifying the slowdown caused by interference in the micro-architecture layer. Based on this strategy, the impact of the workloads' contention on both the last-level cache and memory bandwidth can be identified whenever there is an abnormal rise in memory bandwidth utilization.
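The Bellman recursion above can be sketched directly. The snippet below is an illustrative Python implementation (not the authors' C code) of the discrete budgeting solver: it maximizes \( \sum_i I(q_{c_i})(r_{c_i} - C^*_{c_i}) \) over CPU shares restricted to multiples of 10% of a core, then backtracks to recover the allocation. All names are ours.

```python
def allocate_cpu(demands, weights, budget, step=10):
    """Solve max sum_i w_i * (r_i - d_i), s.t. sum_i r_i <= budget and
    0 <= r_i <= d_i, with r_i a multiple of `step` (e.g. 10% of a core),
    via the Bellman recursion V_k(R) = max_r [ C_k(r) + V_{k+1}(R - r) ]."""
    n = len(demands)
    B = budget // step
    NEG = float("-inf")
    # V[i][b]: best reward over units i..n-1 with b budget slots; V[n][*] = 0.
    V = [[NEG] * (B + 1) for _ in range(n)] + [[0.0] * (B + 1)]
    choice = [[0] * (B + 1) for _ in range(n)]
    for i in range(n - 1, -1, -1):
        cap = demands[i] // step  # never allocate more than the demand
        for b in range(B + 1):
            for r in range(min(b, cap) + 1):
                val = weights[i] * (r * step - demands[i]) + V[i + 1][b - r]
                if val > V[i][b]:
                    V[i][b], choice[i][b] = val, r
    alloc, b = [], B  # backtrack the optimal per-unit shares
    for i in range(n):
        r = choice[i][b]
        alloc.append(r * step)
        b -= r
    return alloc, V[0][B]

# Scarce budget: the higher-weight (higher-QoS) unit gets its full demand.
print(allocate_cpu(demands=[100, 100], weights=[3, 1], budget=100))
```

With two units each demanding a full core but only one core available, the unit with weight 3 receives 100% and the other 0%, which is exactly the prioritization behavior the cost-benefit analysis is meant to produce.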
The memory bandwidth utilization level can be calculated by analyzing two standard hardware events: UNC_NORMAL_READS, as an indicator of memory reads, and UNC_WRITE, as an indicator of memory writes. The CPU cap controller avoids assigning a new application to a host that currently experiences a memory bandwidth utilization level higher than a threshold value (set to 80\% in our experiments). **IV. EXPERIMENTAL EVALUATION** We performed a series of experiments using synthetic large-scale data flows over an implementation of the timely data-flow framework written in Rust [13] to validate the proposed CPU cap algorithm. The main goal is to examine the adaptive behavior of the autonomic controller under sudden changes in the incoming traffic rate of each data flow. Its effectiveness is evaluated with respect to: (i) the average and 99th percentile of the processing latency experienced by each data flow, (ii) the number of QoS violations occurring in the entire system, and (iii) the robustness of the proposed controller against errors in the system model, i.e., Eq. (1), or in the prediction module. We compared the performance results of the proposed solution against those obtained by three empirical heuristics implemented in most commercial packages, namely Weighted Round-Robin (WRR), Fixed Priority Scheduling (FPS), and Class-Based Weighted Fair Queueing (CBWFQ). The WRR heuristic uses a round-robin policy to evenly balance the incoming traffic among the working threads or processes. Under WRR, the number of workers per QoS class is fixed to a value proportional to the priority and the total number of QoS classes in the platform. The FPS scheduler assigns a fixed rank to each application and then sorts them in a ready queue in order of their priorities, so lower-priority applications can use CPU shares only after all higher-priority applications have finished their operations.
In CBWFQ, the scheduler creates several classes of ready queues, each of which has its own buffer to host applications based on the QoS class to which the application belongs. Each ready queue receives a minimum reserved CPU share; further, an application can use more CPU shares if there is any share unclaimed by other classes. The reported experiments have been performed in a local cluster consisting of eight server nodes with a total of 32 logical cores. Each machine is equipped with 8 GB of main memory and four 2.40-GHz CPU cores. We use an approach first proposed in [17] to translate the output of the optimization solver, i.e., Eq. (3), to the amount of CPU core utilization in each server node. We created \( m \in \{150, 250, 350, 450\} \) TDF applications, each consisting of three computational units that perform the iterative generic-join algorithm. We assign each application to one of the three possible QoS classes. The QoS parameter for each class is set to \( \nu \in \{0.99, 0.85, 0.75\} \). The generation rate of incoming data for each application is chosen according to a Poisson distribution with an average rate parameter fixed to 5,000 data items per second. We varied the number of entangled relations in each scenario, i.e., \( \varrho = |\Phi| \), from three to five (ordered in Figure 1 from left to right). We also defined a density factor, denoted by \( \kappa \), which indicates the ratio of the total number of records to the total number of distinct values in each relation. In other words, if each relation is considered as a graph \( G = (V, E) \), the density factor can be seen as \( \kappa = \frac{|E|}{|V|} \). We increased the density factor from one to three to create different scenarios (ordered from top to bottom in Figure 1). The larger the value of $\varrho$, the more iterations the generic-join algorithm has to perform.
Likewise, the higher the value of $\kappa$, the larger the amount of data records the algorithm needs to process. **Result Summary.** Figure 1 depicts the average delay of the generic-join algorithm implemented in the TDF platform in multiple scenarios. The algorithm runs over data-flows with different parameters $\kappa$ and $\varrho$ with a total input size of 50 million raw data items. The results are only shown for applications belonging to the highest QoS class. The horizontal axis represents the total number of streaming applications used in each scenario (increasing from $N = 150$ to 450). We observed that the choice of CPU allocation policy can significantly affect the average processing time of the join algorithm, from 21% (for $\kappa = 1$ and $\varrho = 3$) up to 140% (for $\kappa = 3$ and $\varrho = 5$). Notably, the discrepancy in performance improvement is more significant in scenarios with more complex iterative operations and more voluminous data records, i.e., higher $\kappa$ and $\varrho$. In addition, the measurement of cluster-wide utilization across all CPU cores confirms that the proposed control strategy can keep the utilization level of CPU cores between 55% and 90% when the workload demand is high, whereas the average CPU utilization under the CBWFQ policy, which shows the best performance among the other heuristics, reaches at most 73%. Overall, the proposed controller enhances the average utilization of all CPU cores by 28.3% (max 46.9%) in comparison with the best outcome of the other heuristics. Table I shows the $99^{th}$ percentile latency and the average reduction in QoS violation incidents experienced by applications in different QoS classes under heavy workload (i.e., when $\kappa = 3$ and $\varrho = 5$). During this experiment, the available CPU capacity in the entire platform was not enough to fully satisfy the demands requested by all applications.
Therefore, the controller uses a cost/benefit analysis to trade off QoS violation rates and prioritize the performance level of applications belonging to the highest QoS class. As a result, the controller assigns more CPU share to applications in QoS1 and QoS2 to keep their latency close to the target performance metric. The improvement in reducing the number of violation incidents per QoS class is listed in the last column of Table I. **Computational time overheads.** The execution time to find an optimal solution using the proposed dynamic programming method remains below 0.12 milliseconds, using a C implementation, for a scenario with 450 TDF applications across a cluster with eight machines and a sampling period of 1 second. As the running time grows linearly with the number of applications and the total number of nodes, the method can be considered a practical solution for a cluster with hundreds of nodes running thousands of applications. **Sensitivity analysis.** We conducted additional experiments to measure the sensitivity and robustness of the controller with regard to inaccuracies in the prediction and estimation models. We implemented two mechanisms to increase the robustness of the solution in spite of such noise. First, the controller chooses a response speed factor strictly greater than one, i.e., $T_{ref}/\Delta T > 1$ (3 in our experiments), which enabled it to gradually apply controlling decisions through multiple steps. Second, by using a low-pass filter, the controller reduced the uncertain noise associated with the prediction model. Table II shows the sensitivity of the result as inaccuracies are injected into the estimation tool. We conducted the sensitivity analysis as follows. Starting with a fully accurate estimation model (zero error), we deliberately increased the error of the estimation model up to 90%.
We then measured the relative impact of the induced error by comparing the output of the result with the original solution of the controller when the error is zero. We define a sensitivity coefficient metric ($\psi$) that indicates the quality of the result achieved by the proposed controller (with regard to a particular performance metric $x$) when the estimation of the input parameter $y$ has an error of \( \epsilon \): \( \psi_{x, \epsilon} = |z(x) - z(x + \epsilon)|/|z(x)| \). The experimental results confirm that even an error of 90\% has only a slight negative influence on the solution quality (4.1\% for the response time and 3.5\% for the QoS violation rate).

Figure 1: Improvement in the average latency of applications belonging to the highest QoS class when the proposed solution for CPU cap management is employed. The scenarios are differentiated with respect to the density factor and the degree of entanglement among relations. The x and y axes represent the total number of TDF applications and the mean response time for finishing a join operation over input data, respectively.

**Skewness.** Workload skewness can be defined as the uneven distribution of data/work across parallel/distributed workers. Such an imbalance can potentially lead to lingering processing and under-utilization of the computing resources [35]. We measured the size of the intermediate buffers of every computational unit during its execution under various volumes of incoming raw data to observe the workload skewness caused by applying different strategies. We define the imbalance factor as the ratio of the maximal buffer size over the task instances of all TDF applications to the average buffer size of each individual application (i.e., \( \gamma = \frac{\max L_c}{\operatorname{avg} L_c} \)), where \( L_c \) denotes the buffer length of a given computational unit. The load imbalance factors achieved by the different strategies are shown in Figure 2.
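The imbalance factor \( \gamma \) defined above is straightforward to compute from a snapshot of buffer lengths. The sketch below is a hypothetical helper (names are ours) showing the metric; \( \gamma = 1 \) indicates perfect balance, and larger values indicate a skewed load.

```python
def imbalance_factor(buffer_lengths):
    """Load-imbalance factor gamma = max(L_c) / avg(L_c) over the buffer
    lengths of all computational units; gamma == 1 means perfect balance."""
    if not buffer_lengths:
        raise ValueError("no buffers to measure")
    avg = sum(buffer_lengths) / len(buffer_lengths)
    if avg == 0:
        return 1.0  # all buffers empty: treat as perfectly balanced
    return max(buffer_lengths) / avg

print(imbalance_factor([10, 10, 10, 10]))  # -> 1.0 (balanced)
print(imbalance_factor([40, 10, 10, 20]))  # -> 2.0 (max 40 vs avg 20)
```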
The experimental results confirm that workload imbalance depends strongly on the CPU cap strategy in use, and that the imbalance problem is exacerbated at scale as the size of the incoming data grows. Unlike the other heuristics, which strive to distribute the number of tuples among workers as evenly as possible without considering the run-time conditions and accumulated loads in each buffer, our solution reacts to the skew in the intermediate buffers by dynamically lowering/raising the CPU cap assigned to the corresponding computational unit. Experimental results also confirm the effectiveness of our dynamic solution, which does not need global coordination among working threads. Figure 2 depicts the load imbalance factor, as a percentage, achieved by the four algorithms in different scenarios as the total size of the incoming data to be processed by the TDF join applications grows. The proposed controller can mitigate the workload imbalance by up to 49\% for TDF applications in the highest priority class in comparison with the best outcome of the other heuristics, which is achieved by CBWFQ. **V. RELATED WORK** Distributed data-flow processing has been effectively employed in the field of big data mining, where algorithms show an iterative nature. Naiad [1] has been designed as a distributed system for running parallel and iterative operations over both batch and streaming data-flows. In Naiad, each message carries a logical time-stamp that allows the underlying system to figure out the right order and the associated priority of each message. However, thread-level elasticity is not fully supported in the Naiad system. **VI. CONCLUSIONS** In this paper, we proposed a low-overhead feedback-driven resource allocation mechanism that dynamically adjusts the CPU cap share for every co-running TDF application. It comprises a model predictive controller that employs an optimization module to fulfill each application's quality-of-service requirements.
The proposed solution demonstrated an improvement of the average and 99th-percentile latency of iterative join applications in the highest QoS class by 21% and 31.8%, respectively, in comparison with the best outcome of three other heuristics (Weighted Round-Robin (WRR), Fixed Priority Scheduling (FPS), and Class-Based Weighted Fair Queuing (CBWFQ)). It also reduces the QoS violation incidents on average by 98% for applications in the highest QoS class compared to the result of CBWFQ. As future work, we will investigate the theoretical properties of the robustness of the proposed controller to better understand the optimality of the approach under general assumptions about workload fluctuations.

ACKNOWLEDGMENT

We would like to acknowledge the support of the Australian Research Council Linkage-Industry Grant (LP150101213). Also, we would like to extend our thanks to ATMC Pty Ltd for their support of this work as the industry partner for LP150101213.
Abstract

Event-driven programming (EDP) is the prevalent paradigm for graphical user interfaces and web clients, and it is rapidly gaining importance for server-side and network programming. Central components of EDP are event loops, which act as FIFO queues that are used by processes to store and dispatch messages received from other processes. In this paper we demonstrate that shared event loops are vulnerable to side-channel attacks, where a spy process monitors the loop usage pattern of other processes by enqueueing events and measuring the time it takes for them to be dispatched. Specifically, we exhibit attacks against the two central event loops in Google's Chrome web browser: that of the I/O thread of the host process, which multiplexes all network events and user actions, and that of the main thread of the renderer processes, which handles rendering and Javascript tasks. For each of these loops, we show how the usage pattern can be monitored with high resolution and low overhead, and how this can be abused for malicious purposes, such as web page identification, user behavior detection, and covert communication.

1 Introduction

Event-driven programming (EDP) consists of defining responses to events such as user actions, I/O signals, or messages from other programs. EDP is the prevalent programming paradigm for graphical user interfaces and web clients, and it is rapidly gaining importance for server-side and network programming. For instance, the HTML5 standard [2] mandates that user agents be implemented using EDP; similarly, Node.js, memcached, and Nginx also rely on EDP. In EDP, each program has an event loop which consists of a FIFO queue and a control process (or thread) that listens to events. Events that arrive are pushed into the queue and are sequentially dispatched by the control process according to a FIFO policy.
A key feature of EDP is that high-latency (or blocking) operations, such as database or network requests, can be handled asynchronously: they appear in the queue only as events signaling start and completion, whereas the blocking operation itself is handled elsewhere. In this way EDP achieves the responsiveness and fine-grained concurrency required for modern user interfaces and network servers, without burdening programmers with explicit concurrency control.

Figure 1: Shared event loop. A enqueues multiple short tasks and records the time at which each of them is processed. The time difference between two consecutive tasks reveals whether V has posted tasks in-between, and how long they took to execute.

In this paper we show that EDP-based systems are susceptible to side-channel attacks. The key observation is that event loops form a resource that can be shared between mutually distrusting programs. Hence, contention of this resource by one program can be observed by the others through variations in the time the control process takes to dispatch their events. Figure 1 illustrates such a scenario for a loop that is shared between an attacker A and a victim V. Attacks based on observable contention of shared resources have a long history [25] and an active present [8, 27, 37]; however, attacks against shared event loops have so far only been considered from a theoretical point of view [22]. Here, we perform the first attacks against real EDP-based systems. Specifically, we target shared event loops in the two central processes of Google's Chrome browser. To this end, we build infrastructure that enables us to spy on both loops from a malicious HTML page. This is facilitated by the asynchronous programming model used in both Chrome and Javascript. Asynchronous function calls trigger new tasks that are appended to the same queue, in contrast to synchronous calls, which are simply pushed onto the current task's call stack and executed without preemption, blocking the loop.
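The contention effect of Figure 1 can be reproduced with a toy model of a FIFO loop. The sketch below is an assumption-laden illustration (the name `dispatchTimes` and the time units are invented, and all real-world noise is ignored):

```javascript
// Toy FIFO event loop: tasks run to completion in arrival order.
// Given task durations, return the time at which each task starts.
function dispatchTimes(durations) {
  const starts = [];
  let t = 0;
  for (const d of durations) {
    starts.push(t);
    t += d;
  }
  return starts;
}

// Attacker A enqueues near-zero-cost probe tasks; victim V's task
// (5 time units) lands between two of them.
const starts = dispatchTimes([0, 0, 5, 0]); // probe, probe, victim, probe
// The gap between the probes surrounding the victim task (indices 1 and 3)
// reveals how long V's task occupied the shared loop.
const gap = starts[3] - starts[1]; // 5
```

This is exactly the signal the attacks in the following sections extract, at microsecond resolution, from Chrome's real event loops.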
- For the event loop of the renderer we rely on the `postMessage` API, which is a Javascript feature for cross-window communication based on asynchronous callbacks. By posting messages to ourselves we can monitor the event loop with a resolution of 25 µs, with only one task in the loop at each point in time. - For the event loop of the host process we rely on two different mechanisms: network requests to non-routable IP addresses, which enter the loop and abort very quickly, providing a resolution of 500 µs; and SharedWorkers, whose messages pass through the event loop of the host process, providing a resolution of 100 µs. We use the information obtained using these techniques in three different attacks: 1. We show how event delays during the loading phase, corresponding to resource requests, parsing, rendering and Javascript execution, can be used to uniquely identify a web page. Figure 2 visualizes this effect using three representative web pages. While this attack shares the goal with the Memento attack [21], the channels are quite different: First, in contrast to Memento, we find that the relative ordering of events is necessary for successful classification, which motivates the use of dynamic time warping as a distance measure. Second, we show that page identification through the event loop requires only minimal training: we achieve recognition rates of up to 75% and 23% for the event loops of the renderer and host processes, respectively, for 500 main pages from Alexa’s Top sites. These rates are obtained using only one sample of each page for the training phase. 2. We illustrate how user actions in cross-origin pages can be detected based on the delays they introduce in the event loop. In particular, we mount an attack against Google OAuth login forms, in which we measure the time between keystrokes while the user is typing a password. 
The timing measurements we obtain from the event loop are significantly less noisy, or require fewer privileges, than those obtained from other channels [20, 38, 18].

3. We demonstrate that shared event loops can be used to transmit information between cross-origin pages. Specifically, we implement a covert channel with a bandwidth of 200 bit/s through the renderer's main thread event loop, and another, cross-process channel of 5 bit/s.

Our attacks show that event loops can be successfully spied on even with simple means. They work under the assumption that event loops behave as FIFO queues; in reality, however, Chrome's event loop has a more sophisticated structure, relying on multiple queues and a policy-based scheduler. We believe that this structure can be leveraged for much more powerful attacks in the future.

2 Isolation Policies and Sharing of Event Loops in Chrome

In this section we revisit the same origin policy and its variants. We then discuss the relationship of these policies with the Chrome architecture, where we put a special focus on the way in which event loops are shared.

2.1 Same Origin Policy

The Same-Origin Policy (SOP) is a central concept in the web security model: the policy restricts scripts on a web page from accessing data from another page if their origins differ. Two pages have the same origin if protocol, port, and host are equal. The demand for flexible cross-origin communication has triggered the introduction of features such as domain relaxation, the postMessage API, Cross-Origin Resource Sharing (CORS), Channel Messaging, Suborigins, and the Fetch API. This feature creep comes with an increase in browser complexity and attack surface, which has motivated browser vendors to move towards more robust multi-process architectures.

### 2.2 Overview of the Chrome Architecture

The Chrome architecture is segmented into different operating system processes.
The rationale for this segmentation is twofold: to isolate web content from the host [6], and to support the enforcement of origin policies by means of the OS [30]. To achieve this segmentation, Chrome relies on two kinds of processes: a host process (with a main thread and an IO thread) and renderer processes (each with a main thread, an IO child thread, and a compositor thread).

![Figure 3: Overview of Chrome's architecture.](image)

The **host process** runs the top-level browser window. It has access to system resources such as network, file system, UI events, etc., which it manages on behalf of the unprivileged renderer processes. The host process runs several threads; the most relevant ones are:

- the **CrBrowserMain** thread, which handles, e.g., user interaction events, and
- the **IOThread**, which handles, e.g., IPC, the network stack, and the file system.

The **renderer processes** are sandboxed processes responsible for parsing, rendering, and Javascript execution. Communication with the host process is done via an inter-process communication (IPC) system based on message passing. Each renderer runs several threads; the most relevant ones are:

- the **MainThread**, where resource parsing, style calculation, layout, painting, and non-worker Javascript run,
- the **IOChildThread**, which handles IPC communication with the host process, and
- the **CompositorThread**, which improves responsiveness during the rendering phase by allowing the user to scroll and see animations while the main thread is busy, thanks to a snapshot of the page's state.

Each of the threads in the host and renderer processes maintains at least one event loop that is largely a FIFO queue. Inter-thread and inter-process communication are carried out via message passing through these queues. We next discuss scenarios where pages of different origin can share the event loops of host and renderer processes.
In Section 3 we show how this sharing can be exploited for eavesdropping.

### 2.3 Sharing in the Renderer Processes

Chrome supports different policies that govern how web applications are mapped to renderer processes, and that influence whether or not event loops are shared. The default policy is called **process-per-site-instance**. It requires using a dedicated renderer process for each instance of a site. Here, a site is defined as a registered domain plus a scheme. For example, https://docs.google.com and https://mail.google.com:8080 are from the same site – but not from the same origin, as they differ in subdomain and port. A site instance is a collection of pages from the same site that can obtain references to each other (e.g., one page opened the other in a new window using Javascript).

The other supported policies are more permissive. For example, the **process-per-site** policy groups all instances of a site in the same renderer process, trading robustness for a lower memory overhead. The **process-per-tab** policy dedicates one renderer process to each group of script-connected tabs. Finally, the **single-process** policy lets both the host and renderer run within a single OS process (only used for debugging purposes).

Even in the restrictive default process-per-site-instance policy, there are some situations that force Chrome to host documents from different sites in the same renderer process, causing them to share the event loop:

- Iframes are currently hosted in the same process as their parent.
- Renderer-initiated navigations such as link clicks, form submissions, and scripted redirections will reuse the same renderer as the origin page.
- When the number of renderer processes exceeds a certain threshold, Chrome starts to reuse existing renderers instead of creating new ones.
On (64-bit) OSX and Linux, the threshold for reusing renderers is calculated by splitting half of the physical RAM among the renderers, under the assumption that each consumes 60 MB. In our experiments, on a machine with 4 GB of RAM we could spawn 31 new tabs before any renderer was shared, whereas on a machine with 8 GB of RAM we observed a threshold of approximately 70 renderers. There is no apparent grouping policy for the pages that can share a process when this threshold is exceeded, except for tabs in Incognito mode not being mixed up with "normal" tabs. In particular, we do not observe any preference for similar origins, same sites, or secure versus insecure pages. In fact, even filesystem pages (loaded with file://) can co-reside with an arbitrary HTTP site.

2.4 Sharing in the Host Process

The Chrome sandbox restricts access of renderers to privileged actions. In particular, renderers have to communicate with the host process for network requests or user input. The corresponding messages of all renderers pass through the event loop of the host process' I/O thread. We illustrate this communication using two different examples: how user actions flow from the host to the corresponding renderer process, and conversely, how network requests flow from a renderer to the host process.

- **UI flow:** User actions such as mouse movements or clicks enter the browser through the main thread of the host process. The host main thread communicates the user event to the corresponding renderer by message passing between their I/O event loops, and the renderer acknowledges the receipt of this message. Even events with no Javascript listeners occupy the event loop of the renderer's main thread for a measurable interval.
- **Net stack:** Chrome's net stack is a complex cross-platform network abstraction.
Any network request by a renderer is passed to the I/O thread of the host process, which forwards it to a global resource dispatcher that will pass it to a worker to fulfill the request. After the request is done, the response headers are received and sent back to the renderer process, which will respond with an ACK after reading. Finally, the body is received and the corresponding callbacks are triggered.

3 Eavesdropping on Event Loops in Chrome

In this section we describe how to violate the SOP by eavesdropping on the event loops of Chrome's host and renderer processes. For each of these processes, we describe potential threat scenarios and present a simple HTML page executing Javascript that can be used for spying. We then present our monitoring tool to visualize the event loops of the browser.

3.1 The Renderer Process Event Loop

3.1.1 Threat Scenarios

There are several scenarios in which an adversary site $A$ can share the event loop of the renderer's main thread with a victim site $V$. These scenarios are based on Chrome's policy for mapping sites to renderers; see Section 2.3. We give two examples:

- **Malicious advertisement.** In this scenario, $A$ runs as an advertisement iframed in $V$. The SOP protects $V$'s privacy and integrity by logically isolating both execution environments. However, $A$'s iframe is able to execute Javascript on $V$'s event loop, enabling it to gather information about the user behavior in $V$.
- **Keylogger.** In this scenario, $A$ pops up a login form to authenticate its users via $V$'s OAuth. Because the operation does not ask for special privileges and the password is never sent to $A$, the victim could trust it and fill in the form. Meanwhile, $A$'s page monitors keystroke timings (see Section 4.2), which can be used for recovering user passwords.
3.1.2 Monitoring Techniques

To monitor the renderer's event loop it is sufficient to continuously post asynchronous tasks and measure the time interval between subsequent pairs of events. We measure the monitoring resolution in terms of the interval between two subsequent measurement events on an otherwise empty loop. The most common way of posting asynchronous tasks programmatically in Javascript is `setTimeout`. However, the resolution can be more than 1000 ms for inactive tabs, rendering this approach useless for the purpose of spying. To increase the resolution, we instead use the `postMessage` API for sending asynchronous messages to ourselves. The code in Listing 1 shows how this is achieved.

```javascript
function loop() {
  save(performance.now())
  self.postMessage(0, '*')
}
self.onmessage = loop
loop()
```

Listing 1: Javascript code to monitor the main thread's event loop with the postMessage API.

The call to `performance.now()` in line 2 of the function `loop` returns a high-resolution timestamp that is saved as described below. The call to `self.postMessage(0, '*')` in line 3 posts message "0" into the renderer's event loop, where the second argument "*" indicates no restriction on the receiver's origin. Line 5 registers the function `loop` as an event listener, which enables it to receive the messages it has posted. This causes `loop` to recursively post tasks, while keeping the renderer responsive, since other events are still being processed.

In order to minimize the noise introduced by the measurement script itself, the function `save` in line 2 uses a pre-allocated typed array (`Float64Array`) to store all the timing measurements. Contrary to normal Javascript's sparse arrays, typed arrays avoid memory re-allocations and thus noisy garbage collection rounds; see below. With that we achieve an average delay between two consecutive tasks of around 25 μs on our target machine. This resolution is sufficient to identify even short events.
For example, a single mouse movement event (without explicit event listener) consumes around 100 μs.

3.1.3 Interferences

In modern browsers there are several sources of noise that affect measurement precision, besides the obvious effect of the underlying hardware platform and OS. They include:

- Just-in-time compilation (JIT). JIT can trigger code optimization or deoptimization, in the case of Chrome by the CrankShaft and Turbofan compilers, at points in time that are hard to predict. For our measurements we rely on a warm-up phase of about 150 ms to obtain fully optimized code.
- Garbage collection (GC). In the case of V8, GC includes small collections (so-called scavenges) and major collections. Scavenges are periodical and fast (< 1 ms), but major collections may take > 100 ms, distributed into incremental steps. In our data, scavenges are easily identifiable due to their periodicity, while major collections can be spotted due to their characteristic size. On some browsers, such as Microsoft's Internet Explorer, GC rounds can be triggered programmatically, which helps to eliminate noise from the measurements, enabling more precise attacks [11].

While all of these features reduce the effectiveness of our attacks, it is interesting to think of them as potential side channels by themselves. For example, observable GC and JIT events can reveal information about a program's memory and code usage patterns, respectively [29].

3.2 The Host Process Event Loop

3.2.1 Threat Scenarios

The Chrome sandbox ensures that all of the renderer's network and user interaction events pass through the host process' I/O event loop; see Section 2.4. We describe two threat scenarios where this could be exploited.

- Covert channel. Pages of different origins running in different (disconnected) tabs can use the shared event loop to implement a covert channel, violating the browser's isolation mechanisms. This will work even if one (or both) pages run in incognito mode.
This channel can be used for tracking users across sessions, or to exfiltrate information from suspicious web pages without network traffic.

- Fingerprinting. A tab running a rogue page of A can identify which pages are being visited by the user in other tabs by spying on the shared event loop. Detecting the start of a navigation is facilitated by the fact that the I/O thread blocks for a moment when the user types in a URL and presses enter.

3.2.2 Monitoring Techniques

There are many ways to post asynchronous tasks into the event loop of the host process; they differ in terms of the resolution with which they enable monitoring the event loop and the overhead they imply. Below we describe two of the techniques we used.

Network Requests. The first technique is to use network requests to systematically monitor the event loop of the I/O thread of the host process. A valid network request may take seconds to complete, with only the start and end operations visible in the loop, which provides insufficient resolution for monitoring. To increase the resolution, we make use of non-routable IP addresses. The corresponding requests enter the I/O thread's event loop, are identified as invalid within the browser, and trigger the callback without any DNS resolution or socket creation. This mechanism provides a monitoring resolution of 500 μs and has the additional benefit of being independent of network noise.

Listing 2 shows the code of our monitoring procedure. We rely on the Javascript Fetch API for posting the network requests. The Fetch API provides an interface for fetching resources using promises, which are ideal for managing asynchronous computations thanks to their simple syntax for handling callbacks. In line 2 we request and save a high-resolution timestamp. In line 3 we request a non-routable IP address, and set the rejection callback of the promise to `loop` itself, to run recursively when the request fails.
```javascript
function loop() {
  save(performance.now())
  fetch(new Request('http://0/')).catch(loop)
}
loop()
```

Listing 2: Javascript code to monitor the host's I/O thread using network requests.

**Shared Workers.** The second technique relies on web workers, a mechanism for executing Javascript in the background. Web workers that are shared between multiple pages are usually implemented in a dedicated OS process; this means they communicate via IPC and, therefore, can be used to spy on the I/O thread of the host process. This mechanism provides a monitoring resolution of 100 μs.

Listing 3 shows the code of our worker-based monitoring procedure. The first snippet defines the worker's job, which consists of replying to each received message. In the second snippet, we register the worker in line 1. In lines 2-7 we record a timestamp and recursively send messages to the worker, analogous to Listing 1. As a result, we measure the round-trip time from the page to the worker, which reflects the congestion in the I/O event loop. Note that one can further increase the measurement resolution by recording the time at each endpoint and merging the results.

```javascript
onconnect = function reply(e) {
  let port = e.ports[0]
  port.onmessage = function() {
    port.postMessage(0)
  }
}
```

```javascript
const w = new SharedWorker('pong.js')
function loop() {
  save(performance.now())
  w.port.postMessage(0)
}
w.port.onmessage = loop
loop()
```

Listing 3: Javascript code to monitor the host's I/O thread using SharedWorkers. The first snippet is the worker's 'pong.js' file. The second snippet is the Javascript code that monitors the I/O thread by communicating with the worker.

3.2.3 Interferences

There are many different sources of noise and uncertainty in the I/O thread of the host process. The most notable ones include the interleaving with the host's main thread and the messages from other renderers, but also the GPU process and browser plugins.
While these interferences could potentially be exploited as side channels, the noise becomes quickly prohibitive as the loop gets crowded.

3.3 The LoopScan Tool

We implement the eavesdropping techniques described in Sections 3.1 and 3.2 in a tool called LoopScan, which enables us to explore the characteristics of the side channel caused by sharing event loops. LoopScan is based on a simple HTML page that monitors the event loops of the host and renderer processes. It relies on the D3.js framework, and provides interactive visualizations with minimap, zooming, and scrolling capabilities, which facilitates the inspection of traces. For example, Figure 8 is based on a screenshot from LoopScan.

LoopScan's functionality is in principle covered by the powerful Chrome Trace Event Profiling Tool (about:tracing) [3], which provides detailed flame graphs for all processes and threads. However, LoopScan has the advantage of delivering more accurate timing information about event-delay traces than the profiler, since loading a page with the Trace Event Profiling tool severely distorts the measurements. The LoopScan source is publicly available at https://github.com/cgvwzq/loopscan.

4 Attacks

In this section we systematically analyze the side channel caused by sharing event loops in three kinds of attacks: a page identification attack, an attack where we eavesdrop on user actions, and a covert channel attack. For all attacks we spy on the event loops of the renderer and the host processes, as described in Sections 3.1 and 3.2. We performed these attacks over the course of a year, always using the latest stable version of Chrome (ranging from v52-v58). The results we obtain are largely stable across the different versions.

4.1 Page identification

We describe how the event-delay trace obtained from spying on event loops can be used for identifying webpages loaded in other tabs.
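The raw timestamps recorded by the monitoring scripts become an event-delay trace by differencing consecutive samples. A minimal post-processing sketch (the helper name `toDelays` is an assumption, not part of the paper's tooling):

```javascript
// Convert recorded timestamps into an event-delay trace.
function toDelays(timestamps) {
  const delays = [];
  for (let i = 1; i < timestamps.length; i++) {
    delays.push(timestamps[i] - timestamps[i - 1]);
  }
  return delays;
}

// On an otherwise empty loop the delays hover around the monitoring
// resolution (25 us for the renderer); a spike marks a task posted by
// another page. Timestamps below are in microseconds.
const trace = toDelays([0, 25, 50, 180, 205]);
// → [25, 25, 130, 25]; the 130 us gap is a foreign task in the loop.
```

Traces of this form are the input to the page identification classifiers described next.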
We begin by explaining our data selection and harvesting process and the chosen analysis methods; then we describe our experimental setup and the results we obtain.

4.1.1 Sample Selection

We start with the list of Alexa Top 1000 sites, from which we remove duplicates. Here, duplicates are sites that share the same name under different top-level domains (e.g., “google.br” and “google.com”) and that are likely to have similar event-delay traces. From the remaining list, we randomly select 500 sites as our sample set. This reduction facilitates a rigorous exploration of the data and the parameter space.

4.1.2 Data Harvesting

We visit each page in the sample set 30 times for both the renderer and the host process, to record traces of event delays during the loading phase. The event-delay traces for the renderer process consist of 200,000 data items each. On our testing machine, the measurement resolution (i.e. the delay between two subsequent measurement events on an otherwise empty loop) lies at approximately 25 \( \mu s \). That is, each trace captures around 5 seconds (200,000 \( \times 25 \mu s = 5 \) s) of the loading process of a page in the sample set. The event-delay traces for the host process consist of 100,000 data items each. The measurement resolution lies in the range of 80 – 100 \( \mu s \), i.e. each trace captures around 9 s of the loading process of a page.

We automate the harvesting procedure for the renderer process as follows:

1. Open a new tab via `target = window.open(URL, '_blank');`
2. Monitor the event loop until the trace buffer is full
3. Close the tab
4. Send the trace to the server
5. Wait 5 seconds and go to 1 with the next URL

The harvesting procedure for the host process differs only in that we use the `rel="noopener"` attribute in order to spawn a new renderer. We conducted measurements on the following three machines: 1.
Debian 8.6 with kernel 3.16.0-4-amd64, running on an Intel i5 @ 3.30GHz x 4 with 4 GB of RAM, and Chromium v53; 2. Debian 8.7 with kernel 3.16.0-4-amd64, running on an Intel i5-6500 @ 3.20GHz x 4 with 16 GB of RAM, and Chromium v57; and 3. OSX running on a Macbook Pro 5.5 with Intel Core 2 Duo @ 2.53GHz with 4 GB of RAM, and Chrome v54.

We measure the timing on a Chrome instance with two tabs, one for the spy process and the other for the target page. For the renderer process, we gather data on all machines; for the host process, on (2) and (3). Overall, we thus obtain five corpora of 15,000 traces each.

4.1.3 Classification

**Event Delay Histograms.** Our first approach is to cluster the observed event delays around \( k \) centers, and to transform each trace into a histogram that represents the number of events that fall into each of the \( k \) classes. We then use the Euclidean distance as a similarity measure on the \( k \)-dimensional signatures. This approach is inspired by the notion of memprints in [21]. It appears to be suitable for classifying event-delay traces obtained from event loops because, for example, static pages with few external resources are more likely to produce long events at the beginning and stabilize soon, whereas pages with Javascript resources and animations are likely to lead to more irregular patterns and produce a larger number of long delays. Unfortunately, our experimental results were discouraging, with less than 15% recognition rate even on small datasets.

**Dynamic Time Warping.** Our second approach is to maintain temporal information about the observed events. However, the exact moments at which events occur are prone to environmental noise. For example, network delays will influence the duration of network requests and therefore the arrival of events to the event loop. Instead, we focus on the relative ordering of events as a more robust feature for page identification.
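For concreteness, the histogram-based signature described above can be sketched as follows. This is our own illustrative sketch: the fixed bin edges stand in for the \( k \) learned cluster centers, and the names `signature` and `euclidean` are ours, not from our implementation.

```javascript
// Illustrative bin edges (in ms); the actual approach clusters the
// observed delays around k learned centers instead.
const EDGES = [0.05, 0.1, 0.5, 1, 5, 20]  // k = EDGES.length + 1 bins

// Map a trace of event delays to a k-dimensional histogram signature.
function signature(delays) {
  const hist = new Array(EDGES.length + 1).fill(0)
  for (const d of delays) {
    let bin = EDGES.findIndex(e => d < e)
    if (bin === -1) bin = EDGES.length  // delay beyond the last edge
    hist[bin]++
  }
  return hist
}

// Euclidean distance between two k-dimensional signatures.
function euclidean(a, b) {
  return Math.sqrt(a.reduce((s, ai, i) => s + (ai - b[i]) ** 2, 0))
}
```

Such signatures discard all ordering information, which is exactly what the second approach retains.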
This motivates the use of dynamic time warping (DTW) [22] as a similarity measure on event-delay traces. DTW is widely used for classifying time series, i.e. sequences of data points taken at successive and equally spaced points in time. DTW represents a notion of distance that considers as “close” time-dependent data of similar shape but different speed, i.e. DTW is robust to horizontal compressions and stretches. This is useful, for example, when one wishes to assign a low distance score to the time series “abc” and “abbbbc”, insensitive to the prolonged duration of “b”.

Formally, DTW compares two time series: a \textit{query}, \( X = (x_1, \ldots, x_n) \), and a \textit{reference}, \( Y = (y_1, \ldots, y_m) \). For that we use a non-negative distance function \( f(x_i, y_j) \) defined between any pair of elements \( x_i \) and \( y_j \). The goal of DTW is to find a matching of points in \( X \) with points in \( Y \), such that (1) every point is matched, (2) the relative ordering of points in each sequence is preserved (monotonicity), and (3) the cumulative distance (i.e. the sum of the values of \( f \)) over all matching points is minimized. This matching is called a warping path, and the corresponding distance is the time warping distance \( d(X,Y) \).

(Footnote: opening pages via window.open requires disabling Chrome’s popup blocker from "chrome://settings/content".)

Figure 4: The path in the upper right square represents the optimal alignment between points in the time series corresponding to 'google.com' (horizontal axis) with points in the time series of 'youtube.com' (vertical axis).

Figure 4 visualizes a warping path between the time series corresponding to event-delay traces observed while loading different webpages.

4.1.4 Speed-up Techniques

Unfortunately, the time required for computing \( d(X,Y) \) is quadratic in the length of the input sequences and does not scale up to the raw data obtained in our measurements.
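For intuition, a minimal quadratic-time DTW can be sketched as follows. This is a textbook dynamic program, not the optimized implementation we use in our experiments, and it assumes the absolute difference as the local cost \( f(x_i, y_j) = |x_i - y_j| \):

```javascript
// Naive dynamic time warping distance, quadratic in the input lengths,
// with local cost f(x_i, y_j) = |x_i - y_j|.
function dtw(X, Y) {
  const n = X.length, m = Y.length
  // D[i][j] = minimal cumulative cost of aligning X[0..i) with Y[0..j)
  const D = Array.from({ length: n + 1 }, () => new Array(m + 1).fill(Infinity))
  D[0][0] = 0
  for (let i = 1; i <= n; i++) {
    for (let j = 1; j <= m; j++) {
      const cost = Math.abs(X[i - 1] - Y[j - 1])
      // Extend the path by a match, a compression, or an expansion:
      // every point is matched and the ordering is preserved.
      D[i][j] = cost + Math.min(D[i - 1][j - 1], D[i - 1][j], D[i][j - 1])
    }
  }
  return D[n][m]
}
```

On a stretched series such as \( (1,2,3) \) versus \( (1,2,2,2,3) \) the distance is 0, reflecting DTW’s insensitivity to horizontal stretches; the \( (n+1) \times (m+1) \) table also makes the quadratic cost explicit.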
We rely on two kinds of speed-up techniques, one at the level of the data and the other at the level of the algorithm:

At the level of data, we reduce the dimension of our data by applying a basic sampling algorithm: We split the raw trace into groups of measurements corresponding to time intervals of duration \( P \), and replace each of those groups by one representative. This representative can be computed by summing over the group, or by taking its average, maximum or minimum. The \textit{sum} function generally yields the best results among different sampling functions and is the one that we use onwards. Sampling reduces the size of the traces by a factor of \( P/t \), where \( t \) is the average duration of an event delay. Figure 5 shows two plots with the raw data taken from a renderer’s main thread loop, and its corresponding time series obtained after sampling.

At the algorithmic level, we use two sets of techniques for pruning the search for the optimal warping path, namely windowing and step patterns [15].

- \textit{Windowing} is a heuristic that enforces a global constraint on the envelope of the warping path. It speeds up DTW but will not find optimal warping paths that lie outside of the envelope. Two well-established constraint regions are the \textit{Sakoe-Chiba band} and the \textit{Itakura parallelogram}, see Figure 6.
- \textit{Step patterns} are a heuristic that puts a local constraint on the search for a warping path, in terms of restrictions on its slope. In particular, we rely on three well-known step patterns available in R. Intuitively, the \textit{symmetric1} pattern favors progress close to the diagonal, the \textit{symmetric2} pattern allows for arbitrary compressions and expansions, and the \textit{asymmetric} pattern forces each point in the reference to be used only once.

Figure 5: The top figure represents a raw trace of 200,000 time measurements from the renderer’s main thread extracted while loading “google.com”.
The bottom figure displays the same data after being converted into a time series with \( P = 20\text{ ms} \), i.e. using only 250 data points. The difference in the height of the peaks is due to the accumulation of small events in the raw data, which are not perceptible in the top figure.

Figure 6: A global window constraint defines an envelope limiting the search space for optimal warping paths: (a) Itakura parallelogram, and (b) Sakoe-Chiba band.

### 4.1.5 Parameter tuning

The possible configurations of the techniques presented in Section 4.1.4 create a large parameter space, see Table 1 for a summary.

| Parameter | Values | Description |
| --- | --- | --- |
| traceDuration | 1000, 2000, 4000 | Trace duration (ms) |
| P | 5, 10, 20, 50 | Sampling interval (ms) |
| windowType | itakura, sakoechiba | Window constraint |
| windowSize | 1, 5, 10, 30, 50, 100 | Window size |
| stepPattern | symmetric1, symmetric2, asymmetric | Step pattern |

Table 1: List of parameters tuned for optimizing web page identification

We systematically identify the optimal parameter configuration for each event loop on each machine. To avoid overfitting, we divide our dataset of 30 traces (per page, loop, and machine) into 15 traces for tuning and 15 for cross-validation. For each parameter configuration we perform a lightweight version (with 3 rounds) of the evaluation phase described in Section 4.1.6. Figure 7 visualizes an extract of the results we obtain for the renderer process of the Linux (1) machine. The tuning phase yields the following insights:

- The optimal parameters depend on the loop but appear to be stable across machines.
- Measuring the loading phase during 2 seconds is sufficient for recognizing a webpage; the gain in recognition from using longer traces is negligible.
- P and windowSize are the parameters with the biggest impact on the recognition rate. However, they also have the biggest impact on the computational cost (the optimal choice being the most expensive one).
- The combination of stepPattern = symmetric1 and windowType = sakoechiba generally yields the best results.

### 4.1.6 Experimental Results

We evaluate the performance of page identification through the shared event loops of host and renderer processes on each individual machine, as well as through the renderer process across two different machines. To this end, we select the top configuration for each corpus from the tuning phase and carry out a 10-fold cross-validation. In each of the 10 rounds, we partition the validation set into a training set that contains one trace of each page, and a testing set that contains three different (out of the 14 available) traces of each page. For each of the traces in the testing set, we compute the set of k closest matches in the training set according to the time warping distance. We measure performance in terms of the k-match rate, which is the percentage of pages in the testing set for which the true match is within the set of k closest matches. We abbreviate the 1-match rate by recognition rate, i.e., the percentage of pages where the best match is the correct one. The result of the cross-validation is the average k-match rate over all 10 rounds. Table 2 summarizes our experiments.
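The k-match evaluation described above can be sketched as follows, for any distance function on traces. The helper names are ours, and the training and testing sets are simplified to one trace per entry:

```javascript
// For a test trace, find the k closest training traces under `dist`
// and check whether the true page is among them.
function kMatch(test, training, dist, k) {
  // training: array of { page, trace } with one trace per page
  return training
    .map(t => ({ page: t.page, d: dist(test.trace, t.trace) }))
    .sort((a, b) => a.d - b.d)
    .slice(0, k)
    .some(t => t.page === test.page)
}

// k-match rate: percentage of test traces whose true page is in the top k.
function kMatchRate(tests, training, dist, k) {
  const hits = tests.filter(t => kMatch(t, training, dist, k)).length
  return (100 * hits) / tests.length
}
```

In the real evaluation, `dist` is the time warping distance and the rate is averaged over the 10 rounds.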
We highlight the following results:

| k | 1 | 3 | 5 | 10 |
| --- | --- | --- | --- | --- |
| (1) Renderer (sym1.sakoe, P = 5, windowSize = 100) | 76.7 | 86.7 | 88.8 | 91.1 |
| (2) Renderer (sym1.sakoe, P = 5, windowSize = 100) | 58.2 | 68.6 | 71.8 | 78.1 |
| (2) I/O host (sym1.sakoe, P = 20, windowSize = 30) | 16.2 | 23.2 | 27.9 | 36.1 |
| (3) Renderer (sym1.sakoe, P = 5, windowSize = 100) | 61.8 | 74.5 | 78.4 | 83.1 |
| (3) I/O host (sym1.sakoe, P = 20, windowSize = 30) | 23.48 | 32.9 | 38.1 | 46.6 |

Table 2: 10-fold cross-validation results on different machines and different event loops, with the best configuration after tuning. Machines (1) and (2) refer to the Linux desktops, (3) to the OSX laptop, as described in Section 4.1.2.

- We can correctly identify a page by spying on the renderer from (1) in up to 76.7% of the cases, and correctly narrow down to a set of 10 candidates in up to 91.1% of the cases.
- We can correctly identify a page through the host process from (3) in up to 23.48% of the cases, and narrow down to a set of 10 candidates in up to 46.6% of the cases.
- We stress that these recognition rates are obtained using a single trace for training.
- Recognition is easier through the renderer than through the host. This is explained by the difference in noise and measurement resolution, see Section 3.2.3. Furthermore, most operations on the host only block the I/O thread while signaling their start and completion, whereas the renderer is blocked during the entire execution of each Javascript task.
- We observe different recognition rates on different machines. However, the homogeneity in hardware and software of Macbooks facilitates the reuse of training data across machines, which may make remote page identification more feasible.
- We obtain recognition rates below 5% for recognition across machines (1) and (3). A reason for this poor performance is that events on the OSX laptop often take 2x-5x more time than on the Linux desktop machine. This difference is reflected in the height of the peaks (rather than in their position), which is penalized by DTW. Normalizing the measurements could improve cross-machine recognition.

The code and datasets used for tuning and cross-validation are available as an R library at https://github.com/cgvwzq/rlang-loophole.

4.1.7 Threats to Validity

We perform our experiments in a closed-world scenario with only 2 tabs (the spy and the victim) sharing an event loop. In real-world scenarios there can be more pages concurrently running in the browser, which will make detection harder. The worst case for monitoring the host process occurs when a tab performs streaming, since the loop gets completely flooded. The renderer’s loop, however, is in general more robust to noise caused by other tabs in the browser. On the other hand, our attacks do not make any use of the pages’ source code or of details of Chrome’s scheduling system with priority queues, the GC with periodic scavenges, or the frame rendering tasks.
We believe that taking into account this information can significantly improve an adversary’s eavesdropping capabilities and enable attacks even in noisy, open-world scenarios.

4.2 Detecting User Behavior

In this section we show that it is possible to detect user actions performed in a cross-origin tab or iframe, when the renderer process is shared. We first describe an attack recovering the inter-keystroke timing information against Google’s OAuth login forms, which provides higher precision than existing network-based attacks [32].

4.2.1 Inter-keystroke Timing Attack on Google’s OAuth login form

Many web applications use the OAuth protocol for user authentication. OAuth allows users to login using their identity with trusted providers, such as Google, Facebook, Twitter, or Github. On the browser, this process is commonly implemented as follows:

1. A web application $A$ pops up the login form of a trusted provider $T$;
2. User $V$ types their (name and) password and submits the form to $T$;
3. $T$ generates an authorization token.

Because the window of the login form shares the event loop with the opener’s renderer, a malicious $A$ can eavesdrop on the keystroke events issued by the login form.

Figure 8: Delay pattern generated by a keystroke in the Google OAuth login form, measured across origins on Chrome Canary v61 on OSX. The two consecutive delays of approx. 2 ms each correspond to the keydown and keypress event listeners.

Figure 8 depicts the event-delay trace of a keystroke as seen by an eavesdropper on the renderer’s event loop. The trace contains two characteristic consecutive delays of approx. 2 ms each, corresponding to the keydown and keypress event listeners.
We use this observation to identify keystrokes, by scanning the event-delay trace for pairs of consecutive delays that are within a pre-defined range, forgoing any training or offline work. Listing 4 contains the script that performs this operation. We define 0.4 ms as a lower bound, and 3.0 ms as an upper bound for the range. We chose this threshold before gathering the data, by manual inspection of a few keystroke events. Note that this calibration could be done automatically, based on the victim’s interactions with a page controlled by an attacker.

```javascript
const L = 0.4, U = 3.0, keys = []
for (let i = 1; i < trace.length - 1; i++) {
  let d1 = trace[i] - trace[i - 1],
      d2 = trace[i + 1] - trace[i]
  if (L < d1 && d1 < U && L < d2 && d2 < U) {
    keys.push(trace[i])
  }
}
```

Listing 4: Javascript code to detect keystrokes in a trace of timestamps gathered by the code in Listing 1. We classify a timestamp as a keystroke if the differences to the previous and subsequent timestamps ($d_1$ and $d_2$) are both in a predefined range.

### 4.2.2 Experimental Evaluation

To evaluate the effectiveness of this attack, we have implemented a malicious application $A$ that extracts the inter-keystroke timing information from a user $V$ logging in via Google’s OAuth. The focus of our evaluation is to determine the accuracy with which keystroke timings can be measured through the event loop. A full keystroke recovery attack is out of scope of this paper; for this refer to [32].

Figure 9: Experimental setup for evaluating the effectiveness of automatic, cross-renderer keystroke detection.

We simulate an inter-keystroke timing attack in 4 steps, which are described below and illustrated in Figure 9:

1. A Selenium script acting as $V$ navigates to $A$, clicks on the login button (which pops up Google’s OAuth login form), types a password, and submits the form.
2. Meanwhile, the attacker $A$ monitors the main thread’s event loop using the attack described in Section 4.2.1.
3.
$V$ and $A$ send to the server the timestamps of the real and the detected keystrokes, respectively.
4. We compute the accuracy of the detected keystrokes, where we take the timestamps of the real keystrokes as ground truth. Matching the timestamps requires taking into account the delay ($6 - 12$ ms on our machine) between Selenium triggering an event and Chrome receiving it.

We use as inter-keystroke timings random delays uniformly drawn from $100 - 300$ ms. This choice is inspired by [20], who report an average inter-keystroke delay of $208$ ms. Using random delays is sufficient for evaluating the accuracy of eavesdropping on keystrokes, but it obviously does not reveal any information about the password besides its length.

### 4.2.3 Experimental Results

We perform experiments with 10,000 passwords extracted from the RockYou dataset, where we obtain the following results:

- In $91.5\%$ of the cases, our attack correctly identifies the length of a password. In $2.2\%$ of the cases, the attack misses one or more characters, and in $6.3\%$ of the cases it reports spurious characters.
- For the passwords whose length was correctly identified, the average time difference between a true keystroke and a detected keystroke event is $6.3$ ms, which we attribute mostly to the influence of Selenium. This influence cancels out when we compute the average difference between a true inter-keystroke delay and a detected inter-keystroke delay, which amounts to $1.4$ ms. The noise of these measurements is low: we observe a standard deviation of $6.1$ ms, whereas the authors of [20] report $48.1$ ms for their network-based measurements.

Overall, our results demonstrate that shared event loops in Chrome enable much more precise recovery of keystroke timings than network-based attacks.
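The comparison in step 4 can be sketched as follows. This is our own illustrative sketch (the helper names are hypothetical): it compares true and detected timestamp lists via their inter-keystroke delays, so that any constant detection latency, such as the Selenium-induced offset, cancels out.

```javascript
// Inter-keystroke delays of a sorted list of timestamps (in ms).
function interKeyDelays(timestamps) {
  return timestamps.slice(1).map((t, i) => t - timestamps[i])
}

// Mean absolute error between true and detected inter-keystroke delays.
// Assumes both traces contain the same number of keystrokes; a constant
// offset between the two traces does not contribute to the error.
function delayError(trueTs, detectedTs) {
  const a = interKeyDelays(trueTs), b = interKeyDelays(detectedTs)
  if (a.length !== b.length) throw new Error('length mismatch')
  const err = a.map((d, i) => Math.abs(d - b[i]))
  return err.reduce((s, e) => s + e, 0) / err.length
}
```

For instance, a detected trace shifted by a constant 50 ms relative to the ground truth has a delay error of 0.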
Moreover, this scenario makes it possible to identify when keystroke events enter the loop (from the popping-up of the form to its submission), which is considered a major obstacle for inter-keystroke timing attacks on network traffic [20]. Keystroke timing attacks based on monitoring `procfs` [38] or CPU caches [18] can extract more fine-grained information about keystrokes, such as containment in a specific subset of keys. However, they require filesystem access or are more susceptible to noise, due to the resource being shared among all processes in the system. In contrast, our attack enables targeted eavesdropping without specific privileges.

---

4. We configured Selenium to atomically inject characters that would require multiple keys to be pressed.

**Detecting User Events beyond Keystrokes.** A continuous mouse movement results in a sequence of events, each of which carries information about the coordinates of the cursor’s trajectory. These events are issued with an inter-event delay of 8 ms, and the (empty) event listener operation blocks the loop for approx. 0.1 ms. The particular frequency and duration of these events makes mouse movements (or similar actions, like scrolling) easy to spot with LoopScan, as seen in Figure 10. Likewise, mouse click events, corresponding to “up” or “down”, can be identified using LoopScan. Their shape depends on the specific event listener of the spied web page and the HTML element being clicked. We expect that events with specific listeners are more easily detectable than events without registered event listeners, that is, user actions that do not trigger Javascript execution. However, we can use the context in which the event occurs to reduce the search space. For instance, most mouse clicks only appear between two sequences of mouse movement events. We are currently investigating techniques that enable the automatic identification of such patterns in event-delay streams.
A promising starting point for this are existing on-line variants of dynamic time warping [31].

**Detecting User Events in the Host Process.** Our discussion so far has centered on detecting user events in the event loop of the renderer process. However, all user events originate in the main thread of the host process and are sent towards a specific renderer through the event loop of the host’s I/O thread. Hence, any user action can in principle be detected by spying on the host. Unfortunately, our current methods are not precise enough for this task, since the host’s I/O thread is noisier than the renderer’s main thread, and the effect of a user action on the host process is limited to a short signaling message, whereas the renderer’s main thread is affected by the execution of the corresponding Javascript event listener.

4.3 Covert Channel

In this section we show how shared event loops in Chrome can be abused for implementing covert channels, i.e. channels for illicit communication across origins. We first consider the case of cross-origin pages sharing the event loop of a renderer’s main thread before we turn to the case of cross-origin pages sharing the event loop of the host process’s I/O thread.

4.3.1 Renderer Process

We implement a communication channel to transmit messages from a sender page $S$ to a cross-origin receiver page $R$ running in the same renderer process. For this, we use a simple, unidirectional transmission scheme without error correction. Specifically, we encode each bit using a time interval of fixed duration $t_b$. The optimal configuration of $t_b$ depends on the system. In our experiments we tried different values, with $t_b = 5$ ms giving good results on different platforms: Chromium 52.0 on Debian 64-bit and Chrome 53 on OSX. In each of those intervals we do the following:

- the sender $S$ idles for transmitting a 0; it executes a blocking task of duration $\tilde{t} < t_b$ for transmitting a 1.
- the receiver $R$ monitors the event loop of the renderer’s main thread using the techniques described in Section 3.1; it decodes a 0 if the length of the observed tasks is below a threshold (related to $\tilde{t}$), and a 1 otherwise.

Transmission starts with $S$ sending a 1, which is used by the agents to synchronize their clocks and start counting time intervals. Transmission ends with $S$ sending a null byte. With this basic scheme we achieve rates of 200 bit/s. These numbers can likely be significantly improved by using more sophisticated coding schemes with error correction mechanisms; here, we are only interested in the proof-of-concept.

We note that there are a number of alternative covert channels for transmitting information between pages running in the same renderer [11], e.g., using `window.name`, `location.hash`, `history.length`, the scrollbar’s position or `window.frames.length`. What distinguishes the event-loop based channel is that it does not require the sender and receiver to be connected, i.e. they do not need to hold references to each other in order to communicate.

4.3.2 Host Process

We also implement a communication channel to transmit messages between two cooperative renderer processes sharing the host process. Transmission is unidirectional from sender $S$ to receiver $R$. Figure 11 visualizes how this channel can be used, even if one of the parties browses in Incognito mode.

Figure 11: Covert channel through the I/O event loop of Chrome’s host process. Tabs in different renderer processes (one of them navigating in Incognito mode) communicate.

As before, we encode each bit using a time interval of fixed duration $t_b$. During each interval we do the following:

- the sender $S$ idles for transmitting a 0; it posts $N$ fetch requests into the I/O thread’s queue for sending a 1.
- the receiver $R$ monitors the event loop of the I/O thread of the host process using the techniques described in Section 3.2.
It decodes a 0 if the number of observed events during time interval $t_b$ is below a threshold, and a 1 otherwise.

The optimal values of $N$ and $t_b$ highly depend on the machine. In our experiments we achieve good results, working on different systems, with $t_b = 200$ ms and $N = 350$, which gives us a 5 bit/s transmission rate. This rate is significantly lower than for communication using the renderer event loop, which is explained by the difference in noise and monitoring resolution of the two channels, as discussed in Section 3.2.3.

The threat scenario of this covert channel is more relevant than the previous one for the renderer loop. For example, it could be used for exfiltrating information from an attacked domain (on a tab executing malicious Javascript). Using Workers (which are background threads that run independently of the user interface) we can transfer information across origins, without affecting the user experience and without generating network traffic.

5 Discussion

We have shown how sharing event loops leads to timing side channels and presented different attacks on Chrome. We communicated our findings to the Chromium security team, who decided not to take action for the time being. Nevertheless, our results point to fundamental security issues in the event-driven architecture of browsers that eventually need to be addressed in a principled manner. Below, we discuss how other platforms are affected and present possible countermeasures.

5.1 Beyond Chrome

We focus on Chrome in our analysis because it is the most widely used browser, and because it was the first one to implement a multi-process architecture. However, there are good reasons to expect similar side channels in other browsers, as they all follow the same event-driven paradigm and rely on similar architectures.
For instance, recent Firefox versions with multi-process support also rely on a privileged browser process and multiple content processes that, unlike renderers in Chrome, act as a pool of threads for each different origin (each with its own message queue). Despite this difference, tests with LoopScan on Firefox version 55 show that congestion on both event loops is observable across origins and tabs. Specifically, we applied the monitoring technique for the renderers described in Section 3.1.2 on a microbenchmark with a set of 30 pages with 15 traces each. We achieved a recognition rate of 49%, which is below the recognition rate achieved on Chrome for a set of 500 pages. A fair comparison between both architectures will require a better understanding of Firefox’s policy for mapping sites to threads and events to loops.

5.2 Countermeasures

The attacks presented in this paper rely on two capabilities of the adversary: (1) the ability to post tasks into the loop’s queue with high frequency, and (2) the ability to accurately measure the corresponding time differences.

**Rate Limiting.** An obvious approach to counter (1) is to impose a limit on the rate at which tasks can be posted into an event loop. Unfortunately, rate limiting implies penalties on performance, which is especially problematic for asynchronous code. At the level of the renderer, one possibility is to rely on an accumulate-and-serve policy [22]. With this policy, the event loop accumulates all the incoming jobs in a buffer for a period $T$, and then processes and serves all the accumulated jobs from party $A$, followed by all the jobs from $V$. This has the advantage of limiting the amount of information leaked while retaining high amortized throughput. At the level of the host process, where resource fetching is one of the main performance concerns, setting any bound on the processing rate is not acceptable.
Here, it seems more reasonable to monitor the IPC activity of all renderers and penalize or flag those who exhibit a bad or anomalous behavior, e.g., along the lines of [39].

**Reduce Clock Resolution.** An obvious approach to counter (2) is to limit the resolution of available clocks. This has already been applied by browser vendors for mitigating other kinds of timing channels, but these efforts are unlikely to succeed, as shown in [23]. Modern browsers have a considerable number of methods to measure time without any explicit clock. For instance, some recent exploits [16] use high-resolution timers built on top of SharedArrayBuffers. The current resolution of `performance.now` is limited to 5 µs, which makes microarchitectural timing attacks difficult, but does not preclude the detection of Javascript events.

**Full Isolation.** As discussed in Section 2.2, Chrome’s multi-process architecture tries to use a different renderer for different origins, except for some corner cases. The “Site Isolation Project” is an ongoing effort to ensure a complete process-per-site-instance policy, that is: providing cross-process navigations, cross-process Javascript interactions and out-of-process iframes, all without inducing too much overhead. One open question is how to handle the system’s process limit, namely which sites should have isolation preference, or which heuristic for process reuse should be used. A recent proposal, “IsolateMe” [4], puts developers in charge of requesting to be isolated from other web content (even if it does not provide a firm guarantee).

**CPU Throttling.** Chrome v55 introduces an API that allows limiting how much CPU a background page may use, and throttling tasks when they exceed this limit. This affects background tabs trying to spy on the renderer’s main thread, but still allows spying on (and from) any iframe and popup, as well as on the I/O thread of the host process through shared Workers.
Moreover, background tabs with audio activity are not affected, as they are always marked as foreground. Since Chrome v57, pages (or tabs) are only subjected to throttling after 10 seconds in the background, which is too long to prevent the attacks in this paper. 6 Related Work Timing attacks on web browsers date back to Felten and Schneider [13], who use the browser cache to obtain information about a user’s browsing history. More recently, so-called cross-site timing attacks [10] [35] have exploited the fact that the browser attaches cookies to all requests, even when they are performed across origins. The presence or absence of these cookies can be determined by timing measurements, which reveals information about the user’s state on arbitrary sites. A special case are cross-site search attacks [14], which circumvent the same-origin policy to extract sensitive information, by measuring the time it takes for the browser to receive responses to search queries. Other classes of browser-based timing attacks exploit timing differences in rendering operations [24] [33] [5], or simply use the browser as an entry point for Javascript that exploits timing channels of underlying hardware, for example caches [26] [16], DRAM buffers [17], or CPU contention [9]. Of those approaches, [9] is related to our work in that it identifies web pages across browser tabs, based on timing of Javascript and a classifier using dynamic time warping. However, because the attack relies on CPU contention as a channel, it requires putting heavy load on all cores for monitoring. In contrast, our attack exploits the browser’s event loop as a channel, which can be monitored by enqueuing one event at a time. This makes our attack stealthy and more independent of the execution platform. To the best of our knowledge, we are the first to mount side-channel attacks that exploit the event-driven architecture of web browsers.
Our work is inspired by a proof-of-concept attack [36] that steals a secret from a cross-origin web application by using the single-threadedness of Javascript. We identify Chrome’s event-driven architecture as the root cause of this attack, and we show how this observation generalizes, in three different attacks against two different event loops in Chrome. Finally, a central difference between classical site fingerprinting [28] [19] [34] [12] approaches and our page identification attack is the adversary model: First, our adversary only requires its page to be opened in the victim’s browser. Second, instead of traffic patterns in the victim’s network, our adversary observes only time delays in the event queues of the victim’s browser. We believe that our preliminary results, with a recognition rate of up to 76% using one single sample for training in a closed world with 500 pages, can be significantly improved by developing domain-specific classification techniques. 7 Conclusions In this paper we demonstrate that shared event loops in Chrome are vulnerable to side-channel attacks, where a spy process monitors the loop usage pattern of other processes by enqueuing tasks and measuring the time it takes for them to be dispatched. We systematically study how this channel can be used for different purposes, such as web page identification, user behavior detection, and covert communication. Acknowledgments We thank Thorsten Holz, Andreas Rossberg, Carmela Troncoso, and the anonymous reviewers for their helpful comments. We thank Javier Prieto for his help with the data analysis. This work was supported by Ramón y Cajal grant RYC-2014-16766, Spanish projects TIN2012-39391-C04-01 StrongSoft and TIN2015-70713-R DEDETIS, and Madrid regional project S2013/ICE-2731 N-GREENS.
ABSTRACT Software defined networks provide new opportunities for automating the process of network debugging. Many tools have been developed to verify the correctness of network configurations on the control plane. However, due to software bugs and hardware faults of switches, the correctness of the control plane may not readily translate into that of the data plane. To bridge this gap, we present VeriDP, which can monitor "whether actual forwarding behaviors are complying with network configurations". Given that policies are well-configured, operators can leverage VeriDP to monitor the correctness of the network data plane. In a nutshell, VeriDP lets switches tag packets that they forward, and report tags together with packet headers to the verification server before the packets leave the network. The verification server pre-computes all header-to-tag mappings based on the configuration, and checks whether the reported tags agree with the mappings. We prototype VeriDP with both software and hardware OpenFlow switches, and use emulation to show that VeriDP can detect common data plane faults, including black holes and access violations, with a minimal impact on the data plane. 1. INTRODUCTION In traditional networks, when a fault (e.g., a routing black hole) occurs, it is first noticed by some end hosts that may become unreachable. Then, customers complain and issue tickets to the network operators, who use simple tools like ping and traceroute to localize the fault and resolve it. This process lacks automation, and inevitably incurs a long service downtime. Since networks know every single detail of a packet’s lifetime, why not let the network itself raise alerts to operators, instead of relying on end hosts or customers? There are many potential benefits in letting networks take an active role in network monitoring and debugging. First, by automatically raising alerts, operators can resolve faults more efficiently, thereby reducing the network downtime.
Second, some faults (e.g., access violations) that may not be explicitly noticed by any end host can be captured by the network. Finally, networks can provide more useful information for the operators to pinpoint the fault location. One reason that networks remain passive in monitoring and debugging may be their distributed nature: no single switch can reason about the global policies in traditional networks. For example, consider a packet that is dropped: the switch does not know whether the drop is due to a fault or to an access control policy. As another example, if a packet is received twice by a switch, it is a forwarding loop in most cases, while it can also happen under a normal policy in flexible middlebox traversal scenarios [8]. We observe that networks have the potential to take a more active role in monitoring in the context of SDN. First, the SDN controller knows the global network policy, i.e., how the data plane should behave, and there are many tools to guarantee the correctness of the network policy, either off-line [19, 15, 26] or on-the-fly [16, 14, 24]. Second, switches can record and report packet forwarding behaviors to the controller through standard southbound interfaces (e.g., OpenFlow [20]). By comparing the packet forwarding behaviors to the global network policy, the controller is in a good position to detect faults of the data plane. Previous efforts on automatic network debugging mostly focus on checking the correctness of network configurations [19, 15, 16, 14, 24, 26]. However, even if the controller and configurations are correct, the data plane may still experience faults due to switch software bugs [17], hardware failures [25], or malicious attacks [22]. Existing data plane verification tools either solely check reachability and thus miss path information [25], or use probe packets that can incur too much data plane traffic [9, 10].
Noting the above limitations, we propose VeriDP, a new tool that can monitor the policy compliance of the SDN data plane. In contrast to path tracers that solely rely on switches to imprint packet paths, VeriDP combines tagging with the network policies/configurations on the control plane. This combination delivers the following benefits. (1) With the network policy at hand, VeriDP can distinguish packet drops due to access violations from black holes, and strategic multi-traversals [8] from infinite loops. (2) It is not necessary to optimize the path encoding method so as to fit the path information into the limited header space as in [27, 21, 23], since the controller already knows the correct path, and the only task is to judge whether the path taken by a packet is the same as it. The basic idea of VeriDP is quite simple. The controller pre-computes a path table which records all mappings from packet headers to forwarding paths. When a packet enters the network, the entry switch decides whether to mark it according to some sampling strategy. If a packet is marked, each switch en route tags it with its forwarding information. Before a marked packet leaves the network, the exit switch reports its header and tag to the controller. The controller verifies whether the information encoded in the tag is the same as the path that the packet should take according to the path table. Indeed, packet tagging is not a new idea for tracing packet trajectories [27, 21, 23]. VeriDP differs from them in that it does not try to encode path information into packet headers for the receivers to decode. Rather, VeriDP uses control-plane policies to “infer” paths, and uses packet tags to “test” the correctness of real paths. The advantage is that VeriDP does not need complicated encoding methods or a large number of flow entries to compress path information into the limited header space.
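The “infer, then test” idea can be sketched in a few lines of Python (CRC32 here merely stands in for whatever hash the switches actually use; port numbers and switch IDs are illustrative):

```python
import zlib

def hop_hash(inport, switch, outport):
    # Per-hop hash over (inport, switchID, outport), truncated to 16 bits.
    return zlib.crc32(f"{inport}|{switch}|{outport}".encode()) & 0xFFFF

def path_tag(path):
    # XOR-fold the per-hop hashes, as the switches en route would.
    tag = 0
    for hop in path:
        tag ^= hop_hash(*hop)
    return tag

# Controller side: the expected path is *inferred* from the configuration.
expected = path_tag([(1, "S1", 3), (1, "S2", 2), (1, "S3", 2)])

# Data plane side: the tag accumulated by an actual packet is *tested*
# against the expected one; a match means the packet took the right path.
reported = path_tag([(1, "S1", 3), (1, "S2", 2), (1, "S3", 2)])
assert reported == expected

# A fault that reroutes the packet (e.g., S1 forwarding it directly to S3)
# would almost surely yield a different 16-bit tag and fail the test.
rerouted = path_tag([(1, "S1", 4), (1, "S3", 2)])
```

Because the controller already holds the expected tag, no path information ever needs to be decoded from the packet itself.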
Our contribution is two-fold: - We propose VeriDP, a new tool to monitor the policy compliance of the SDN data plane, i.e., “whether packet forwarding behaviors agree with the policy configured by the controller”. - We implement VeriDP on software- and hardware-based data planes, and demonstrate that it can detect common faults like black holes, access violations, and loops, while incurring minimal overhead on the data plane. In the rest of this paper, we first give our notion of policy compliance (§2), and then present the design of VeriDP (§3). We continue to test the functionality of VeriDP and evaluate its performance (§4). After discussing some related work (§5), we conclude the paper (§6). 2. INTRODUCING POLICY COMPLIANCE 2.1 Observations Before introducing our notion of policy compliance, we first elaborate on the fact that verifying the policy compliance of the data plane necessitates checking packet paths with real traffic. Path Check Is Important. Pairwise reachability is a key invariant for a network. However, checking reachability alone is not enough to reveal data plane faults. Consider an example of middlebox traversal. As shown in Figure 1, rules at switch S1 indicate that traffic from the client H1 to the server H2 must go through the middlebox. To test all these rules based on reachability, we can send two probe packets H1 → H2 and H2 → H1. They will follow the paths H1(H2) → S1 → M → S1 → H2(H1), respectively, and thus trigger all rules in this network. Now, if the high-priority rules R1 and/or R2 fail, the probe packets will take the path H1(H2) → S1 → H2(H1) instead. However, the probe packets will still be received as normal, thus missing the faults. This example shows that to monitor the policy compliance of the data plane, we need to check the paths of packets, instead of only checking pairwise reachability. Real Packets Are Necessary. Verification using probe packets can only verify that the forwarding paths of probe packets agree with the rules.
It does not necessarily mean that the forwarding paths of real traffic do. For example, consider an ACL rule that only permits HTTP traffic from IP address 10.0.0.1: match: src_ip = 10.0.0.1, dst_port = 80, action="ALLOW" A probe packet with source address 10.0.0.1 and destination port 80 can trigger this rule. However, even if the packet is successfully received, it does not mean the rule is correctly configured at the switch. For example, suppose the above rule is shadowed by a higher-priority, ill-inserted rule: match: src_ip = 10.0.0.1, dst_port = *, action="ALLOW" The probe packet can still be received. However, non-HTTP traffic, e.g., SSH, from 10.0.0.1 will also be allowed, violating the controller’s policy. This is because probe packets cannot exhaust all possibilities in the header space in order to detect such ill-inserted rules. Thus, to monitor the policy compliance of the data plane, we still need to inspect real packets. 2.2 Policy Compliance Model Notations. A port p is defined as a pair (SwitchID, PortID), where PortID ∈ {1, 2, . . . , n} ∪ {⊥} is the local port ID, and ⊥ represents the dropping port. A header h is defined as a point in the header space H = {0, 1}^L (for L-bit headers). A header set H is defined as a subset of the header space. A flow f is defined as a pair (h, p), where h is the header of the flow, and p is the port where the flow enters the network. A rule r is defined as a tuple (p_1, H, p_2), meaning that packets received from port p_1 with header h ∈ H should be forwarded to port p_2. For a dropping rule, p_2 = (SwitchID, ⊥). A link can be seen as a special kind of rule (p_1, H, p_2), meaning that packets forwarded to port p_1 of one switch will be received at port p_2 of another switch. Packet Path. When a packet pkt of flow f is received at port p^in of S1, S1 looks up its flow table. When the first rule r = (p^in, H, p^out) satisfying h ∈ H is found, S1 forwards pkt to port p^out, and the link rules apply so that pkt is received at an input port of the next switch S2.
This process continues until pkt reaches an output port p^out of a switch S_n, such that either p^out is connected to an end host, or it is a dropping port. The path of flow f is defined as the sequence of traversed ports, i.e., Path(R, f) = (p_1^in, p_1^out, p_2^in, . . . , p_n^out). Let R be the set of all rules in the network (including the link rules), and R′ be the counterpart that is actually enforced by the switches. The policy compliance of the data plane is defined as follows. Definition 1. The data plane is said to be policy compliant iff Path(R′, f) = Path(R, f) for every flow f in the network, where R and R′ are the sets of rules configured by the controller and enforced by switches, respectively. VeriDP aims to verify the policy compliance of the SDN data plane according to the above definition. Note that there are cases where R′ ≠ R but Path(R′, f) = Path(R, f) for all f. We do not consider this a fault since the forwarding behaviors remain the same. Specifically, Definition 1 allows us to detect common faults on the data plane, including black holes, access violations, and loops: Black Holes. In this case, there exists a flow f such that Path(R′, f) = (p_1^in, p_1^out, p_2^in, . . . , p_n^in, ⊥), meaning that the flow is dropped by the switch that receives f at port p_n^in. Suppose f is destined to a host connected to port p^out; then we should have Path(R, f) = (p_1^in, p_1^out, p_2^in, . . . , p_n^in, p^out) ≠ Path(R′, f). Access Violation. In this case, there exists a flow f such that Path(R′, f) = (p_1^in, p_1^out, p_2^in, . . . , p_m^in, p^out), where p^out is connected to an end host that f is forbidden to reach. Then, we should have Path(R, f) = (p_1^in, p_1^out, p_2^in, . . . , p_m^in, ⊥), i.e., the switch with p^out should drop f. Obviously, Path(R′, f) ≠ Path(R, f). Loops. In this case, there exists a flow f such that the length of Path(R′, f) exceeds the maximum TTL, say MaxTTL.
On the other hand, the length of Path(R, f) should be less than MaxTTL, and thus Path(R′, f) ≠ Path(R, f). 3. DESIGN As shown in Figure 2, VeriDP consists of two major components: the VeriDP pipeline on the data path, and the VeriDP server on the control plane. The pipeline is responsible for sampling, tagging, and reporting packets to the VeriDP server. The server intercepts the bidirectional OpenFlow messages exchanged between the controller and switches, in order to construct the path table, which records all header-to-tag mappings. With the path table, the server verifies reported packets sent by switches. The dashed rectangle represents the domain that VeriDP monitors, i.e., VeriDP is expected to detect faults caused by the components inside the domain. The monitored domain includes: (1) the OpenFlow agent that terminates the OpenFlow channel, and (2) the OpenFlow pipeline that manages the hardware flow table and forwards packets through table lookups. 3.1 VeriDP Pipeline The VeriDP pipeline is responsible for generating tags for packets at entry switches, updating tags for packets at core switches, and reporting packet headers and tags to the controller at exit switches. The VeriDP pipeline is implemented in a switch’s fast path, separated from the OpenFlow pipeline. The reason is to prevent faults caused by OpenFlow flow tables from propagating into the tagging module. Since a typical switch can contain a cascade of flow tables, each of which may hold thousands of flow entries, flow entries used for tagging may be overridden by other rules, replaced when a flow table is full, or even incorrectly modified/deleted by applications. The VeriDP pipeline processing is shown in Algorithm 1. The entry switch initializes the packet tag to zero, and the ttl to the maximum path length (Lines 1-3). Each switch updates the tag as: \[ \text{tag} = \text{tag} \oplus \text{hash}(\text{inport}\|\text{switchID}\|\text{outport}) \] (1) and decrements the ttl value by one (Lines 4-5).
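A minimal Python sketch of this per-packet logic of Algorithm 1, with CRC32 standing in for the hash, the report step modeled as appending to a list, and all constants illustrative:

```python
import zlib

MAX_PATH_LENGTH = 16     # assumed bound on path length
DROP = "drop"            # stands in for the dropping port ⊥

class Packet:
    def __init__(self, header):
        self.header = header
        self.tag = 0
        self.ttl = 0
        self.inport = None

reports = []  # tag reports destined for the verification server

def tag_pipeline(switch, x, y, pkt, edge_in=False, edge_out=False):
    """VeriDP pipeline for a packet forwarded from local input port x
    to local output port y of `switch` (cf. Algorithm 1)."""
    if edge_in:                              # entry switch: initialize
        pkt.tag, pkt.ttl = 0, MAX_PATH_LENGTH
        pkt.inport = (switch, x)
    # Every switch en route updates the tag and decrements the ttl.
    pkt.tag ^= zlib.crc32(f"{x}|{switch}|{y}".encode()) & 0xFFFF
    pkt.ttl -= 1
    if edge_out or y == DROP or pkt.ttl == 0:
        # Exit switch (or a drop/loop): report to the server.
        reports.append((pkt.inport, (switch, y), pkt.header, pkt.tag))

# A packet traversing S1 (in 1, out 3) and then S2 (in 1, out 2, edge):
pkt = Packet("tcp-5-tuple")
tag_pipeline("S1", 1, 3, pkt, edge_in=True)
tag_pipeline("S2", 1, 2, pkt, edge_out=True)
```

Note that a report is also emitted when the ttl hits zero or the packet is dropped, which is what gives the server visibility into loops and black holes.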
When the packet is output to an edge port connected with an end host, output to the dropping port ⊥, or its ttl hits zero, the switch sends a tag report to the server (Lines 6-7). Here, a tag report is a 4-tuple \( (\text{inport}, \text{outport}, \text{header}, \text{tag}) \), where inport/outport are the entry/exit ports of the packet; header is a portion of the packet header (e.g., the TCP 5-tuple); tag is the tag of the packet. One thing to note is that switches should send tag reports for dropped and looped packets. This is necessary to ensure the verification server’s visibility into black holes and loops. 3.2 VeriDP Server The VeriDP server is responsible for parsing and verifying tag reports sent by switches. Central to the VeriDP server is the path table, which maps a pair \( (\text{inport}, \text{outport}) \) to a list of paths that enter the network at inport and exit at outport. Each path is a pair \( (\text{headers}, \text{tag}) \), where headers is the set of headers allowed for the path, and tag is the tag representing the path. For a concrete example, consider the toy network in Figure 3. Rule 3 redirects all SSH traffic to S2, and Rule 4 forwards all other packets towards 10.0.2.24 to S3. Rule 5 directs all traffic from port 1 to the middlebox. Rule 8 at switch S3 drops all traffic from H2. The other rules are plain forwarding rules ensuring connectivity. Table 1 shows part of the path table for this topology. 3.2.1 Representing the Header Set A problem for constructing the path table is how to represent header sets. A straightforward way is to use wildcard expressions, just as in Header Space Analysis [15] and ATPG [25]. However, wildcard expressions are suitable for representing prefixes, while very inefficient for representing arbitrary header sets. For example, the header set for \( \text{dst}_\text{port} \neq 22 \) in the second row of Table 1 is a union of 16 wildcard expressions.
In addition, wildcard expressions have poor support for set operations like union, intersection, and complement. For a typical network of tens of switches, each of which has thousands of flow rules, a huge number of wildcard expressions is needed to represent all the possible packet sets. According to [13], characterizing the Stanford backbone network (16 switches) needs 652 million wildcard expressions. Inspired by the previous work [24], we decide to use Binary Decision Diagrams (BDDs) [7] to represent header sets. BDD is an efficient data structure for Boolean expressions, and has better support for set operations. With BDDs, we can expect to significantly reduce the size of the path table.

Algorithm 1: Tag(S, x, y, p)
Input: S: the switch ID; x/y: the local input/output port IDs of packet p, which is received from the OpenFlow pipeline.
1  if (S, x) is an edge port then
2      p.tag ← 0;                         // initialize the tag
3      p.ttl ← MAX_PATH_LENGTH;           // initialize the ttl
4  p.tag ← p.tag ⊕ hash(x‖S‖y);           // update the tag
5  p.ttl ← p.ttl − 1;                     // decrement the ttl
6  if (S, y) is an edge port or y = ⊥ or p.ttl = 0 then
7      Report(inport, (S, y), p.header, p.tag);   // send report

Figure 3: A simple example for path table construction. The network consists of three switches and a total of 10 rules.

Table 1: Part of the path table for Figure 3. [·] represents the hash function.

<table>
<thead>
<tr>
<th>inport</th>
<th>outport</th>
<th>headers</th>
<th>tag</th>
</tr>
</thead>
<tbody>
<tr>
<td>(S1, 1)</td>
<td>(S3, 2)</td>
<td>src_ip = 10.0.1.1, dst_ip = 10.0.2.1, dst_port = 22</td>
<td>[1][S1][3] ⊕ [1][S2][3] ⊕ [3][S2][2] ⊕ [1][S3][2]</td>
</tr>
<tr>
<td>(S1, 1)</td>
<td>(S3, 2)</td>
<td>src_ip = 10.0.1.1, dst_ip = 10.0.2.1, dst_port ≠ 22</td>
<td>[1][S1][4] ⊕ [3][S3][2]</td>
</tr>
</tbody>
</table>

Algorithm 2: Traverse(inport, (S, x), H, t)
Input: inport: the port where the headers entered the network; (S, x): the currently visited port; H: the header set; t: the tag
1   H ← H ∧ P^A_x                        // ACL predicate of input port x
2   Ĥ ← H                                // headers of dropped packets
3   foreach output port y of switch S do
4       H_y ← H ∧ P^F_y                  // FWD predicate of port y
5       if H_y ≠ ∅ then
6           Ĥ ← Ĥ ∧ ¬P^F_y               // remove forwarded headers
7           H_y ← H_y ∧ P^B_y            // ACL predicate of output port y
8           if H_y ≠ ∅ then
9               t_y ← t ⊕ hash(x‖S‖y)    // update the tag
10              if (S, y) is an edge port then
11                  Insert(inport, (S, y), H_y, t_y);
12              else
13                  Traverse(inport, Link((S, y)), H_y, t_y);
14  if Ĥ ≠ ∅ then
15      t ← t ⊕ hash(x‖S‖⊥);
16      Insert(inport, (S, ⊥), Ĥ, t);

Algorithm 3: Verify(inport, outport, header, tag)
Output: True (pass), or False (fail).
1   foreach p ∈ PathTable(inport, outport) do
2       if header ≺ p.headers then
3           if tag = p.tag then
4               return True;             // the path is correct
5           else
6               return False;            // the packet took a wrong path
7   return False;                        // the packet should not reach here
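The lookup-and-compare step of Algorithm 3 can be sketched with Python frozensets standing in for BDD header sets (the table entries and the 16-bit tag values below are hypothetical placeholders):

```python
# Path table: (inport, outport) -> list of (header_set, tag) entries.
# A frozenset of symbolic headers stands in for a BDD predicate; the
# tag values 0xBEEF/0xCAFE are made up for illustration.
path_table = {
    (("S1", 1), ("S3", 2)): [
        (frozenset({"ssh"}), 0xBEEF),          # dst_port = 22, via middlebox
        (frozenset({"http", "dns"}), 0xCAFE),  # dst_port != 22, direct
    ],
}

def verify(inport, outport, header, tag):
    """Return True iff the reported (header, tag) matches a precomputed
    path between inport and outport (cf. Algorithm 3)."""
    for headers, expected_tag in path_table.get((inport, outport), []):
        if header in headers:           # "header ≺ p.headers"
            return tag == expected_tag  # wrong tag means wrong path
    return False                        # the packet should not be here

assert verify(("S1", 1), ("S3", 2), "ssh", 0xBEEF)      # correct path
assert not verify(("S1", 1), ("S3", 2), "ssh", 0xCAFE)  # rerouted packet
assert not verify(("S1", 1), ("S3", 2), "smtp", 0xBEEF) # no matching path
```

A real implementation would replace the set-membership test with a BDD intersection check, but the control flow is the same.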
3.2.2 Constructing the Path Table We show how to construct the path table from a configuration similar to the Stanford backbone network configuration [3]. For simplicity, we assume the configuration files have already been transformed into a set of predicates using the method in [24]. For each input port x, there is an ACL predicate \( P^A_x \), meaning that packets that satisfy \( P^A_x \) are allowed to enter at port x. Similarly, for each output port y, there is an ACL predicate \( P^B_y \), meaning that packets that satisfy \( P^B_y \) are allowed to exit at port y. Finally, each output port y also has a FWD (forwarding) predicate \( P^F_y \), which guards which packets will be forwarded to port y. Algorithm 2 summarizes the process of constructing the path table from the above predicates. For each edge port connected with end hosts, we inject a header set \( H \) initialized to all headers (i.e., a BDD of True), and a tag t initialized to zero. When the header set \( H \) is received at a port \((S, x)\), the algorithm intersects \( H \) with the ACL predicate of port \( x \) (Line 1), and then iteratively intersects the resultant header set with the forwarding predicates of all output ports (Lines 3-4). For each port \( y \) for which the intersection is non-empty, the header set is intersected further with the ACL predicate of \( y \) (Lines 5-7). If it is still non-empty, the algorithm updates the tag, and either inserts an entry into the path table (if \((S, y)\) is an edge port) or recursively calls itself with the new header set and tag (Lines 8-13). If there are still headers that are not forwarded to any port (recorded by \( \hat{H} \)), they will be dropped, and the algorithm updates the tag and inserts an entry for the dropping port (Lines 14-16). 3.2.3 Verifying the Tags Algorithm 3 specifies the simple process of tag verification.
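Under the same set-for-BDD simplification, the traversal of Algorithm 2 can be sketched on a toy two-switch topology (all predicates, links, and port numbers below are illustrative, and output ACLs are omitted for brevity):

```python
import zlib

def hop(x, s, y):
    # Placeholder per-hop hash: lower 16 bits of CRC32.
    return zlib.crc32(f"{x}|{s}|{y}".encode()) & 0xFFFF

# Toy topology: an edge input at (S1,1); S1 port 3 links to (S2,1);
# (S2,2) connects to an end host. Predicates are sets of symbolic
# headers standing in for BDDs.
ACL_IN = {("S1", 1): {"ssh", "http"}}            # P^A_x (default: allow all)
FWD = {("S1", 3): {"ssh"}, ("S2", 2): {"ssh"}}   # P^F_y
LINK = {("S1", 3): ("S2", 1)}
EDGE_OUT = {("S2", 2)}

path_table = {}

def insert(inport, outport, headers, tag):
    path_table.setdefault((inport, outport), []).append((frozenset(headers), tag))

def traverse(inport, port, H, t):
    s, x = port
    H = H & ACL_IN.get(port, H)          # intersect with the input ACL
    remainder = set(H)                   # headers forwarded to no port
    for (sw, y), fwd in FWD.items():
        if sw != s:
            continue
        Hy = H & fwd                     # intersect with the FWD predicate
        if Hy:
            remainder -= Hy
            ty = t ^ hop(x, sw, y)       # update the tag for this hop
            if (sw, y) in EDGE_OUT:
                insert(inport, (sw, y), Hy, ty)
            else:
                traverse(inport, LINK[(sw, y)], Hy, ty)
    if remainder:                        # these headers are dropped here
        insert(inport, (s, "drop"), remainder, t ^ hop(x, s, "drop"))

traverse(("S1", 1), ("S1", 1), {"ssh", "http"}, 0)
```

Here "ssh" headers reach the edge port (S2, 2) with the XOR of both hop hashes as their tag, while "http" headers match no forwarding predicate at S1 and end up in a drop entry.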
On receiving a tag report \((inport, outport, header, tag)\), the server looks up the path table with index \((inport, outport)\), and for each path \( p \), it tries to match \( header \) with the header set of path \( p \) (Lines 1-2). If matched, \( tag \) is compared with the tag of path \( p \). The verification succeeds if these tags are equal (meaning that the packet followed the right path), and fails otherwise (Lines 3-6). If no matching path is found (meaning that the packet should not have reached here), the verification also fails (Line 7). Let us turn back to Figure 3, and assume \( H_1 \) sends a packet to port 22 of \( H_3 \). The packet should take the path \( S_1 \rightarrow S_2 \rightarrow S_3 \), and the tag should be \([1][S_1][3] ⊕ [1][S_2][3] ⊕ [3][S_2][2] ⊕ [1][S_3][2] \). With \(((S_1, 1), (S_3, 2))\) as the index, the server would find two paths: one for \( dst\_port = 22 \) and the other for \( dst\_port ≠ 22 \). The header of the packet would match the header set of the first path. If the tag of the packet is the same as that of the path, the verification succeeds. Now consider that rule \( R3 \) fails. Then, the packet will take the path \( S_1 → S_3 \), and the tag would be \([1][S_1][4] ⊕ [3][S_3][2] \), disagreeing with that of the path. 3.3 Sampling Tagging and verifying every packet in the network can incur a large overhead. This overhead can be made significantly smaller, since packets of the same flow will very likely experience the same forwarding behaviors. In this paper, we use a simple method which samples packets based on flows at entry switches. Each flow \( f \) is associated with a parameter \( T_f > 0 \), termed the sampling interval. The entry switch \( S \) of \( f \) maintains the last sampling instant \( t' \). For each packet received by \( S \) at time \( t \), if \( t − t' > T_f \), \( S \) marks the packet and updates \( t' ← t \). 4. IMPLEMENTATION AND EVALUATION 4.1 Implementation Packet Format.
VeriDP needs each data packet to carry three additional fields: marker, tag, and inport. Here, marker is just 1 bit indicating whether the packet is sampled for verification or not; tag is the XOR of the lower 16 bits of the hash outputs (currently we use CRC32); inport is the input port of the packet: 10 bits for the switch ID, and 6 bits for the local port ID. Thus, VeriDP currently supports 1024 switches, each of which can have up to 63 ports (one port ID reserved for the dropping port). We put the 1-bit marker into the IP TOS field, and use two VLAN tags\(^1\) to carry tag and inport. Finally, tag reports are sent to the verification server using UDP packets, each carrying four fields, i.e., inport, outport, header, tag. **VeriDP Server.** The server is responsible for constructing the path table based on the network configuration, and searching the path table for tag verification. The path table construction is based on code from [23], which iterates over all possible paths in the network to detect bugs (e.g., black holes, loops) in the configuration files. We modify the code to compute the tag for each path using Eq. (1) when constructing the path table. In addition, we add a virtual dropping port to each switch, and compute the paths that end at this dropping port (i.e., the header sets corresponding to headers that should be dropped by this switch). Before looking up the path table, we first construct a BDD predicate from the header field in the tag report. To determine whether header \( \prec p.\text{headers} \) (Line 2 in Algorithm 3), we check whether the intersection of their BDD representations is not False. **VeriDP Pipeline.** The VeriDP pipeline is responsible for sampling and marking packets, updating tags for marked packets, and sending tag reports to the VeriDP server. We implement the VeriDP pipeline with both the CPqD OpenFlow 1.3 software switch [2] and OnetSwitch [12], a hardware SDN switch we previously built.
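The bit layout of the inport field described above can be illustrated with small packing helpers (the helper names are hypothetical; the actual switch packs these bits into the VLAN headers in hardware):

```python
import zlib

SWITCH_BITS, PORT_BITS = 10, 6   # 1024 switches, 63 usable ports + drop

def pack_inport(switch_id, port_id):
    """Pack (SwitchID, PortID) into the 16-bit inport field."""
    assert 0 <= switch_id < (1 << SWITCH_BITS)
    assert 0 <= port_id < (1 << PORT_BITS)
    return (switch_id << PORT_BITS) | port_id

def unpack_inport(inport):
    """Recover (SwitchID, PortID) from the 16-bit field."""
    return inport >> PORT_BITS, inport & ((1 << PORT_BITS) - 1)

def tag_bits(inport, switch, outport):
    # The tag field: lower 16 bits of the CRC32 hash output.
    return zlib.crc32(f"{inport}|{switch}|{outport}".encode()) & 0xFFFF

inport = pack_inport(513, 7)
assert unpack_inport(inport) == (513, 7)
```

Both the 16-bit inport and the 16-bit tag fit the carrier fields described above, and the round-trip pack/unpack shows no information is lost.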
For the software switch, the VeriDP pipeline runs after all actions have been executed on a packet, and before the packet is sent out. For the hardware switch, the VeriDP and OpenFlow pipelines are both implemented using FPGA resources. Since sampling requires switches to maintain the last sampling instant for each flow, we have not yet implemented the sampling component on the hardware switch due to time constraints.

### 4.2 Correctness

We use Mininet [4] to emulate a \( k = 4 \) fat tree topology, and use pingall to establish routes between each pair of end hosts. Both the verification server and Mininet run on the same PC, with an Intel i3 3.4 GHz CPU and 8 GB of memory.

**Black Holes.** We initiate a UDP flow from one host \( H1 \) to another host \( H2 \), at a rate of 100 packets/sec. We set up the verification server and set the sampling interval to 0.1 second. At 15.8 seconds, we manually remove the forwarding rule for \( H2 \) from the flow table of a switch on the path, in order to simulate a black hole. The effect is shown in Figure 4(a).

**Access Violations.** Suppose \( S \) is the access switch of a host \( H2 \). We manually add an ACL rule to let \( S \) block all packets from another host \( H1 \). Then, we set up the verification server and initiate a UDP flow from \( H1 \) towards \( H2 \). The sampling interval and packet rate are still set to 0.1 second and 100 packets/sec, respectively. At 17.2 seconds, we manually remove the ACL rule from the flow table of \( S \) to simulate an access violation. The effect is shown in Figure 4(b).

### 4.3 Performance

**Verification Throughput.** We saturate the verification server with tag reports, and measure how many the server can process per second. For the \( k = 4 \) fat tree, we observe a throughput of around \( 4 \times 10^6 \) verifications/sec. We also use the topology of the Stanford backbone network, which consists of 16 routers and 10 switches.
We observe a lower throughput of \( 0.7 \times 10^6 \) verifications/sec. Since the verification is still single-threaded without optimization, we expect a higher throughput with multi-threading in the future.

\(^1\)Double VLAN tags are supported by 802.1ad [1]; each tag consists of a 12-bit VLAN ID, which can be used to carry our data.

### Table 2: Processing delay of the VeriDP pipeline and native OpenFlow pipeline on the hardware switch.

<table>
<thead>
<tr>
<th>Packet Size (bytes)</th>
<th>128</th>
<th>256</th>
<th>512</th>
<th>1024</th>
<th>1500</th>
</tr>
</thead>
<tbody>
<tr>
<td>VeriDP (\( \mu s \))</td>
<td>0.19</td>
<td>0.20</td>
<td>0.20</td>
<td>0.20</td>
<td>0.19</td>
</tr>
<tr>
<td>Native (\( \mu s \))</td>
<td>5.62</td>
<td>8.63</td>
<td>14.65</td>
<td>26.69</td>
<td>37.88</td>
</tr>
<tr>
<td>Overhead</td>
<td>3.41%</td>
<td>2.31%</td>
<td>1.32%</td>
<td>0.73%</td>
<td>0.50%</td>
</tr>
</tbody>
</table>

For the above verification, we generate the configuration with a simple pingall. Therefore, only shortest paths are computed for each pair of hosts, i.e., there is only one entry for each inport-outport pair in the path table, and Algorithm 3 only needs to check one entry. In real networks, there may be multiple paths between each pair of hosts, and packets with different headers can traverse different paths. Thus, we continue to measure how many linear searches Algorithm 3 performs in real networks.

**Real Network Policies.** We construct the path table with the configuration files of the Stanford backbone network [3] and Internet2 [5]. The Stanford network consists of 757,170 forwarding rules and 1,584 ACL rules; Internet2 has 126,017 forwarding rules and no ACL rules. The time to construct the path table is 3830 ms for the Stanford network, and 1327 ms for Internet2. We count the number of paths per inport-outport pair. The distribution is reported in Figure 5.
We can see that the number of paths for each entry is relatively small, meaning that Algorithm 3 only needs a few searches to match the header (if the header can be matched at all).

### 4.4 Overhead

Our implementation of VeriDP on the hardware switch can process packets at line speed (1 Gbps). Using simulation, we find that it takes 24 clock cycles to tag a packet. As the FPGA has a frequency of 125 MHz, the additional delay is \( 24 \times \frac{1}{125} = 0.192\, \mu s \) per hop. Then, we send packets to one port of the switch, receive them from another one, and record the elapsed times. Let \( T_1 \) be the elapsed time for a packet with the OpenFlow pipeline only, and \( T_2 \) be that with the modified switch with the OpenFlow+VeriDP pipeline. Then, the processing delay of the VeriDP pipeline is \( \Delta T = T_2 - T_1 \). Table 2 reports the delay of the VeriDP pipeline (\( \Delta T \)), the native OpenFlow pipeline (\( T_1 \)), and the relative overhead \( \Delta T / T_1 \), for packet sizes from 128 bytes to 1500 bytes. We can see that the delay of the VeriDP pipeline is around 0.20 \( \mu s \), agreeing with the simulation result. Besides, the overhead drops as the packet size increases, and is strictly less than 5%.

### 5. RELATED WORK

Many verification tools have recently been proposed for SDN [11]. We broadly classify them into two groups: control plane verification and data plane verification.

**Control Plane Verification.** Some tools aim to check the correctness of network configuration files. Anteater [19] models key network invariants (reachability, loop-freedom, black-hole-freedom, etc.) as SAT problems, and uses general solvers to check them. Header Space Analysis [15, 14] represents packet headers as points in an \( n \)-bit space, and switches as transform functions that operate on the space. By analyzing the composite transform functions of switches, Header Space Analysis can check whether the key invariants are satisfied.
**VeriFlow** [16] can incrementally check whether a new rule will violate the network invariants in real time. **NoD** [18] allows operators to check the correctness of network configuration at a higher level of abstraction (termed beliefs). The above tools are orthogonal to VeriDP, which checks the compliance of the data plane with network policies. They complement VeriDP by ensuring that the network policies are correct, a premise for VeriDP to detect bugs.

**Data Plane Verification.** ATPG [25] generates a minimum number of probe packets to trigger all rules in the network. However, it only checks the reception of probe packets, without verifying their trajectories, which are vital to configuration correctness. SDN Traceroute [6] enables the SDN controller to trace the trajectory of a flow, also based on probe packets. A limitation of both is that real packets may experience different forwarding behaviors from probe packets, making the verification results less convincing. Packet trajectory tracers like PathletTracer [27], PathQuery [21], and CherryPick [23] let each switch imprint path information into packet headers, so that packet trajectories can be decoded by the receivers. However, packet trajectories by themselves are not very useful unless we know whether they are correct. In contrast, VeriDP not only traces packet trajectories, but also enables the controller to reason about whether the trajectories are compliant with high-level policies.

### 6. CONCLUSION AND FUTURE WORK

This paper presented VeriDP, a new tool to monitor the policy compliance of the SDN data plane. VeriDP checks whether packet forwarding behaviors agree with the network configuration files, based on packet tagging. We implemented VeriDP on both software and hardware switches to demonstrate its feasibility, and used emulation to show that it can detect common data plane faults like black holes and access violations.
Our future work includes designing a fault localization method to pinpoint root causes when policy incompliance is detected.

**Acknowledgement.** The authors would like to thank all the anonymous reviewers for their comments. This work is supported by NSFC (No. 61402357), the China Postdoctoral Science Foundation (2015M570835), the Fundamental Research Funds for the Central Universities, and the open project of Science and Technology on

### 7. REFERENCES
{"Source-Url": "http://nskeylab.xjtu.edu.cn/people/huc/Pub/ANCS2016.pdf", "len_cl100k_base": 8412, "olmocr-version": "0.1.53", "pdf-total-pages": 6, "total-fallback-pages": 0, "total-input-tokens": 22765, "total-output-tokens": 10177, "length": "2e13", "weborganizer": {"__label__adult": 0.00039577484130859375, "__label__art_design": 0.0004124641418457031, "__label__crime_law": 0.0004642009735107422, "__label__education_jobs": 0.0006198883056640625, "__label__entertainment": 0.00018537044525146484, "__label__fashion_beauty": 0.00018978118896484375, "__label__finance_business": 0.0003046989440917969, "__label__food_dining": 0.0004036426544189453, "__label__games": 0.0010118484497070312, "__label__hardware": 0.0047760009765625, "__label__health": 0.0006384849548339844, "__label__history": 0.0004086494445800781, "__label__home_hobbies": 0.00013446807861328125, "__label__industrial": 0.0007038116455078125, "__label__literature": 0.000301361083984375, "__label__politics": 0.00032782554626464844, "__label__religion": 0.00048422813415527344, "__label__science_tech": 0.381591796875, "__label__social_life": 0.00014102458953857422, "__label__software": 0.047576904296875, "__label__software_dev": 0.5576171875, "__label__sports_fitness": 0.0004019737243652344, "__label__transportation": 0.0007805824279785156, "__label__travel": 0.0002663135528564453}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 36773, 0.04746]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 36773, 0.54701]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 36773, 0.86137]], "google_gemma-3-12b-it_contains_pii": [[0, 5189, false], [5189, 11717, null], [11717, 17169, null], [17169, 23399, null], [23399, 30777, null], [30777, 36773, null]], "google_gemma-3-12b-it_is_public_document": [[0, 5189, true], [5189, 11717, null], [11717, 17169, null], [17169, 23399, null], [23399, 30777, 
null], [30777, 36773, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 36773, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 36773, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 36773, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 36773, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 36773, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 36773, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 36773, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 36773, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 36773, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 36773, null]], "pdf_page_numbers": [[0, 5189, 1], [5189, 11717, 2], [11717, 17169, 3], [17169, 23399, 4], [23399, 30777, 5], [30777, 36773, 6]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 36773, 0.05357]]}
olmocr_science_pdfs
2024-12-08
2024-12-08
8f3778c4e57b2ce53162fd0cea15bec231e01bd3
[REMOVED]
{"Source-Url": "http://www.researchgate.net/profile/Judit_Planas/publication/226014504_Extending_OpenMP_to_Survive_the_Heterogeneous_Multi-Core_Era/links/09e41508e49af675c4000000.pdf", "len_cl100k_base": 9839, "olmocr-version": "0.1.50", "pdf-total-pages": 20, "total-fallback-pages": 0, "total-input-tokens": 42918, "total-output-tokens": 11627, "length": "2e13", "weborganizer": {"__label__adult": 0.0005960464477539062, "__label__art_design": 0.0007801055908203125, "__label__crime_law": 0.0005369186401367188, "__label__education_jobs": 0.0007867813110351562, "__label__entertainment": 0.00017118453979492188, "__label__fashion_beauty": 0.0002841949462890625, "__label__finance_business": 0.0003552436828613281, "__label__food_dining": 0.000518798828125, "__label__games": 0.001255035400390625, "__label__hardware": 0.01265716552734375, "__label__health": 0.0009965896606445312, "__label__history": 0.0006008148193359375, "__label__home_hobbies": 0.0002211332321166992, "__label__industrial": 0.0014638900756835938, "__label__literature": 0.0002815723419189453, "__label__politics": 0.0004470348358154297, "__label__religion": 0.001010894775390625, "__label__science_tech": 0.454833984375, "__label__social_life": 9.608268737792967e-05, "__label__software": 0.0079345703125, "__label__software_dev": 0.51123046875, "__label__sports_fitness": 0.0006060600280761719, "__label__transportation": 0.0016803741455078125, "__label__travel": 0.00034165382385253906}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 49502, 0.04743]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 49502, 0.44909]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 49502, 0.88932]], "google_gemma-3-12b-it_contains_pii": [[0, 848, false], [848, 2356, null], [2356, 6133, null], [6133, 9113, null], [9113, 12146, null], [12146, 15068, null], [15068, 15405, null], [15405, 17067, null], 
[17067, 19011, null], [19011, 21127, null], [21127, 23427, null], [23427, 26331, null], [26331, 29624, null], [29624, 32820, null], [32820, 36376, null], [36376, 38873, null], [38873, 40054, null], [40054, 43449, null], [43449, 46883, null], [46883, 49502, null]], "google_gemma-3-12b-it_is_public_document": [[0, 848, true], [848, 2356, null], [2356, 6133, null], [6133, 9113, null], [9113, 12146, null], [12146, 15068, null], [15068, 15405, null], [15405, 17067, null], [17067, 19011, null], [19011, 21127, null], [21127, 23427, null], [23427, 26331, null], [26331, 29624, null], [29624, 32820, null], [32820, 36376, null], [36376, 38873, null], [38873, 40054, null], [40054, 43449, null], [43449, 46883, null], [46883, 49502, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 49502, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 49502, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 49502, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 49502, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 49502, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 49502, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 49502, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 49502, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 49502, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 49502, null]], "pdf_page_numbers": [[0, 848, 1], [848, 2356, 2], [2356, 6133, 3], [6133, 9113, 4], [9113, 12146, 5], [12146, 15068, 6], [15068, 15405, 7], [15405, 17067, 8], [17067, 19011, 9], [19011, 21127, 10], [21127, 23427, 11], [23427, 26331, 12], [26331, 29624, 13], [29624, 32820, 14], [32820, 36376, 15], [36376, 38873, 16], [38873, 40054, 17], [40054, 43449, 18], [43449, 
46883, 19], [46883, 49502, 20]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 49502, 0.0]]}
olmocr_science_pdfs
2024-11-29
2024-11-29
ac9b017e777a53d90646e6a37a141fd511705773
CS 200 Lecture 05 The Web (HTML, CGIs, & CSS) Abbreviations aka Also Known As CSS Cascading Style Sheets CWS Course Web Site Reading The Non-Designer’s Design Book, by Robin Williams: • Chapters 1 – 6 • Pages 117 – 120 on Web Sites Optional Background Reading Learning Web Design (2/e, library reserve) Extracts from HTML & XHTML (5/e) — The Definitive Guide, Chapter 8 on Cascading Style Sheets also available online from the University at http://safari.ora.com > MY BOOKSHELF – Book 27 Western Civilization’s tutorial on CSS properties (for reference) Sitepoint’s CSS Introduction and Documentation (for reference) http://reference.sitepoint.com/css The “Cascading Style Sheet Properties Quick Reference” (for reference) Book 27 CSS Pocket Reference (3/e), by Eric Meyer, O’Reilly & Associates (book store) Please read and high-light, before lab: - Assignment 5: Due Monday February 10th at 11:59 pm - This week’s slides There are hyper-text commented source files for many of the web pages used in this lecture - Handouts / Commented HTML on the CWS https://www.student.cs.uwaterloo.ca/~cs200/#handouts Major topics today - read and recall pearl - the client-server paradigm - putting HTML in context - relative vs absolute URLs - tables as a layout tool - forms - styles in HTML (especially cascading style sheets – CSS) Today’s lecture assumes an elementary understanding of HTML tags & attributes (eg. 
from CS 100) - if you lack it, see Learning Web Design, or https://www.codecademy.com/learn/paths/web-development New applications for this week - BBEdit a text editor - EditiX a cross-platform XML editor - StyleMaster a cross-platform CSS editor - You can use any text editor you’d like, though some are nicer than others for various reasons This week’s lecture builds on the preceding weeks’ material: - tables - styles - graphics The Read ‘n Recall Pearl Dialogs and Menus - Menus - Dialogs - Toolbars & Palettes Documentation - The user manual - Online help - Online tutorials Assumptions You have an understanding of: • Tables • Styles • Indirection Things to Think About • How does the manipulation of data objects differ from other applications? • Is there more than one way to manipulate a data object? Client-Server File Sharing EG the AppleShare file Server on a local Mac Server • your lab Mac is the “client”: slower, cheaper, smaller disks • student.cs is the “server machine”: faster, more expensive, bigger disks • AppleShare is the “server application,” running on the server machine • “share points” are folders on the server that are made available over the network • “network disks” • “mounting” a share point (use the Finder’s Go > Connect to Server...
menu item) creates a “network disk” on your client machine • an icon appears on the desktop, just as for a local disk • use a network disk just like a local disk, although it’s a bit slower • “network folders” • these are subfolders (aka subdirectories) of a network disk • unlike the other terms on this slide, “network folder” is a CS200-invented term As distinct from “peer-to-peer” file sharing The Web (like file-sharing) has a client-server model Many of the machines on the Internet are “web servers” - any machine (“client”) on the web can ask them for data - actually, they’re asking a particular application running on that machine for data (which is identified by a “port”) The client uses a “browser” to request & display web pages - eg. Firefox, Safari, Chrome, Explorer, ... - browsers decide how to render the HTML, based on - HTML tags found in the document - what kind of display is available - user preferences - browsers are often (but not always) consistent in how they do this For security - a web server can only return files in the “server subtree” - sometimes that’s rooted in the folder holding the server app - usually this “web root folder” can be set when the web server is started The default Mac OS X web server application (Apache) is at /usr/sbin/httpd; the default web document root folder is at /Library/WebServer/Documents/. Data Returned By Web Servers The files returned can be - web pages - pictures - other stuff... 
A “web page” is a TEXT file containing - text to be displayed (text “elements”) - “tags” - eg <html> and <p> that control presentation of the text—they’re really styles - “links” containing “URLs” (eg <a href="• • •"> and <img src="• • •"> - to other web pages - to graphics, for display on the page - to sounds, to be played when the page is viewed - etc—add post-install “plugins” to handle new file types (/Library/Internet Plug-Ins/ or ~/Library/Internet Plug-Ins/ on Macs) What you see here are “property tags” but they could equally well be (named) style tags Strip out all the formatting codes to get a “text” or “ascii” file A Toy Web Page The HTML for this web page <HTML><HEAD><TITLE>Jen's Fake Home Page</TITLE></HEAD><BODY bgcolor="#FFFFFF"><IMG SRC="star.gif"><IMG SRC="JenBanner.gif"><CENTER><H2>Welcome to my Web page</H2></CENTER><IMG SRC="Exclamation.gif" ALIGN=left HSPACE=6><P><STRONG>Warning!</STRONG> This is not my <EM>real</EM> home page. It's just a little something I made up for the occasion. But just in case you're interested, I'll tell you a bit about me.</P><HR><H2>Places I've Lived</H2><UL><LI>Akron, OH</LI><LI>Hudson, OH</LI><LI>South Bend, IN</LI><LI>Boston, MA</LI></UL><P style="font-size:80%">(Adapted from "Designing for the Web - Getting Started in a New Medium" by Jennifer Niederst and Edie Freedman.)</P></BODY></HTML> Yuck! Recall that browsers ignore - multiple blanks - carriage returns - blank lines Use these to make your HTML more readable! 
- You’ll lose marks in CS 200 if you don’t - You’ll make your life difficult if you don’t - It will be very difficult for anyone else to read and use your code if you don’t The HTML for our Toy Web Page, Readably Formatted <HTML> <HEAD> <TITLE>Jen's Fake Home Page</TITLE> </HEAD> <BODY bgcolor="#FFFFFF"> <IMG SRC="star.gif"><IMG SRC="JenBanner.gif"><IMG SRC="star.gif"> <CENTER><H2>Welcome to my Web page</H2></CENTER> <IMG SRC="Exclamation.gif" ALIGN=left HSPACE=6> <P><STRONG>Warning!</STRONG> This is not my <EM>real</EM> home page. It's just a little something I made up for the occasion. But just in case you're interested, I'll tell you a bit about me.</P> <HR> <H2>Places I've Lived</H2> <UL> <LI>Akron, OH</LI> <LI>Hudson, OH</LI> <LI>South Bend, IN</LI> <LI>Boston, MA</LI> </UL> <P style="font-size:80%"> (Adapted from "Designing for the Web - Getting Started in a New Medium" by Jennifer Niederst and Edie Freedman.) </P> </BODY> </HTML> Discussion Points HTML is stored in “[ASCII] text files” HTML tags as (named) styles • whose definitions are supplied by the browser • same web page, different browser, genetically similar (but not identical) appearance... Always use closing tags (eg </P>, </LI>) • otherwise the browser must guess their location • be prepared for XHTML, XML & CSS <HEAD> ... </HEAD> • <HEAD> does not mean "header" • <HEAD>... </HEAD> contain information about the page • eg. <meta name=description value="a paragraph"> • eg. <meta name=author value="Bugs Bunny"> • eg. <title>... </title> ... shows up in most browsers’ title bar browsers use ... to label bookmarks index engines give words in ... extra weight <BODY>... </BODY> contain info displayed in the page Browsers also ignore tags they don’t recognize • eg. 
tags you misspell • this is actually a feature • so newly invented tags don’t screw up old browsers • so IE-specific tags don’t screw up Chrome, & vice-versa • etc • but it makes debugging HTML harder • therefore… when a tag doesn’t seem to have any effect • suspect misspelling • Validation – see the assignment for details • that’s what the <!DOCTYPE ...> magic incantation is for (see next slide) https://www.w3schools.com/tags/tag_doctype.asp Upper case vs lower case • who cares? • HTML is case-insensitive: <TITLE>...</title> works fine • XHTML requires that tags be lower case • XML is case-sensitive • suggestion: use lower case A Simple HTML Table <?xml version="1.0" encoding="utf-8"?> <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd"> <html> <head> <title>Mark Report</title> </head> <body> <h2>Top CS200 Marks</h2> <table border="1" align="center"> <tr align="center"> <th>ID Number</th> <th>Final Grade</th> </tr> <tr align="center"> <td>94010203</td> <td>81%</td> </tr> <tr align="center"> <td>98102030</td> <td>75%</td> </tr> <tr align="center"> <td>96000123</td> <td>67%</td> </tr> </table> </body> </html> For a list of valid doctypes, see http://www.w3.org/QA/2002/04/valid-dtd-list.html **HTML Table Tags** <table> <thead> <tr> <th>Tag</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td><code>&lt;table&gt;</code></td> <td>surround the entire table</td> </tr> <tr> <td><code>&lt;tr&gt;</code></td> <td>surround a table row</td> </tr> <tr> <td><code>&lt;td&gt;</code></td> <td>surround a table (cell) definition</td> </tr> </tbody> </table> By default, a table and its cells are as wide as they need to be. For details, see: - HTML The Definitive Guide (library reserve) - PageSpinner Help Tables as a (deprecated) all-purpose layout tool in HTML - (Hmm. Are tables a useful layout tool in word processors?) 
**HTML Tables can be nested** and HTML cells can be “merged” - horizontally (colspan="n") - and/or - vertically (rowspan="n") - no L-shaped regions, however Mark Twain Samuel Langhorne Clemens (November 30, 1835 - April 21, 1910), better known by his pen name Mark Twain, was an American humorist, satirist, novelist, writer, and lecturer. Although Twain was confounded by financial and business affairs, his humor and wit were keen, and he enjoyed immense public popularity. At his peak, he was probably the most popular American celebrity of his time. In 1907, crowds at the Jamestown Exposition thronged just to get a glimpse of him. He had dozens of famous friends, including William Dean Howells, Booker T. Washington, Nikola Tesla, Helen Keller, and Henry Huttleston Rogers. Fellow American author William Faulkner is credited with writing that Twain was “the first truly American writer, and all of us since are his heirs.” Twain died in 1910 and is buried in Elmira, New York. Pen Names Clemens maintained that his primary pen name, “Mark Twain,” came from the years working on Mississippi riverboats, where two fathoms (12 feet, approximately 3.7 meters) or “safe water” was measured on the sounding line. The riverboatman’s cry was “mark twain” or, more fully, “by the mark twain” (“twain” is an archaic term for two). “By the mark twain” meant “according to the mark on the line, [the depth is] two fathoms.” Clemens provides a footnote to Chapter 8 (“Perplexing Lessons”) of Life on the Mississippi where he explains “mark twain” as “two fathoms” and “Mark three is three fathoms.” The name may also have come from his wilder days in the West, where he would buy two drinks and tell the bartender to “mark twain” on his tab. From the online encyclopedia Wikipedia, at http://en.wikipedia.org/wiki/Mark_twain. 
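A merged-cell layout like the one described above can be written with colspan and rowspan. The following fragment is illustrative only (the cell contents are placeholders, not the markup of the Mark Twain page):

```html
<table border="1">
  <tr>
    <!-- one cell spanning both columns -->
    <th colspan="2">Mark Twain</th>
  </tr>
  <tr>
    <!-- this cell spans two rows, filling the first column below -->
    <td rowspan="2">Portrait</td>
    <td>Born: November 30, 1835</td>
  </tr>
  <tr>
    <!-- only one td: the first column is occupied by the rowspan above -->
    <td>Died: April 21, 1910</td>
  </tr>
</table>
```

Note that the third row contains a single `<td>` because the `rowspan` cell already occupies its first column; this is also why L-shaped regions are impossible, since a spanned cell always covers a rectangle.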
---

CS 200 Winter 2020

Previewing your Webpage
• Save your work in the text editor
• Open the same file up in a web browser
• Every time you save the text file, refresh the webpage
• For example, a minimal page open in Text Wrangler:

```html
<!DOCTYPE html>
<html>
<head></head>
<body>
<p>Hello world!</p>
</body>
</html>
```

Chrome renders it as: Hello world!

• Some text editors or in-browser text editors allow you to preview your page within the app
• It’s recommended to also preview it in common web browsers to make sure it works where others will see it

Chrome’s Inspect / Safari’s Show Inspector
Most browsers allow you to view the page source, which will help you with debugging your own webpages.

[Screenshot: Safari’s Advanced preferences pane, with “Show Develop menu in menu bar” checked.]
[Screenshot: a sample web page viewed in the inspector; its body text is not reproduced here.]
URLs — Uniform Resource Locators - http is the “protocol” - jcbServer is the server’s “local name” - cs.uwaterloo.ca is the “domain” in which the server is located - jcbServer.cs.uwaterloo.ca is the server’s “host domain name” - 80 is the “port” on which jcbServer’s web server application is listening - /cs200/search/search.html is the “absolute path” from the server’s web root folder to the file Another Example URL Another example: ```html
<a href="fragments/bio/biography.html">John C Beatty</a>
``` fragments/bio/biography.html - is a “relative path” to the target file - starting in the folder containing the page holding the link - (note the LACK of an initial “/” and host domain name) ```html
<img src=...>
``` works the same way Relative vs. Absolute Paths (1) The web root folder - a web server can only return files in the “server subtree” - sometimes that’s rooted in the folder holding the server app - sometimes this “web root folder” can be set elsewhere A link in CS100.html to CS200.html could be written as - `<a href="/2nd%20Year/cs200/cs200.html">CS200</a>` - Note the initial slash - This is an “absolute path” - The host domain name is implicit - ie the same as the referencing web page Or as - `<a href="../2nd%20Year/cs200/cs200.html">CS200</a>` - Note the initial “../” - This is a “relative path” - “../” means “go up one level to the parent folder” - “../../” means “go up two levels,” etc - note: for security reasons, the web server application will prevent the path from rising above the web root folder! - Note that cs200/cs200.html is also a relative path Or using an explicit host domain name AND an absolute path - `<a href="http://jcbServer.uwaterloo.ca/2nd%20Year/cs200/cs200.html">CS200</a>` It makes no sense to combine a host domain name and a relative path —what folder would the path be relative to? When to use relative vs.
absolute paths Absolute paths - always start at the web root folder - are necessary between machines - if a host domain name is present, the path is necessarily absolute Relative paths - start at the document containing the link - make it MUCH easier to move web pages around as a group eg. if all the cs200 pages use relative URLs amongst themselves then I can move the cs200 subtree somewhere else (like to another server) without breaking the links between the cs200 pages - but if a file in the group links to a file on the same machine not in the group it must use an absolute file path (or the group can’t be moved w/o breaking that link) IMPORTANT - you MUST write "%20" instead of a space in URLs - and look out for trailing blanks So within a web site... - use a relative path when the two files are more likely to be moved together (eg. a page & an image in it) - use an absolute path when the two files are more likely to be moved separately Forms An HTML form is a web page with “[interface] widgets” for supplying data: - text edit boxes - check boxes - radio buttons - pop-up menus Forms and CGIs Web Servers can’t know in advance: - what data will be sent to them from forms - what should be done with it So there’s a convention (the “Common Gateway Interface”): - for identifying the particular application - to which form data should be sent for processing Actually, the CGI scheme is more general than this: - a web server can run any application and return its output When the submit button is pressed - the data is sent to a web server - the web server forwards the data to a “cgi” (a separate program) - the cgi processes the data & returns a web page to the server - the server passes that response on to the browser Plug-ins The CS 200 Request Marks Form (simplified) <HTML> <HEAD> <TITLE>Request Your Marks in CS 200</TITLE> </HEAD> <BODY> <P>To retrieve your marks to date in CS 200, please enter your last name (case doesn't matter) and student ID in the boxes shown 
below.</P> <P>Then click on the <STRONG>Fetch Marks</STRONG> button - ordinarily it shouldn't take more than thirty seconds or so for your marks to come back. (Please be patient - I'm only a Mac IIfx!)</P> <P>If you find a discrepancy, please notify the course tutor as soon as possible.</P> <FORM ACTION="http://.../ReportMarks.cgi" METHOD="GET"> <P> <STRONG>Your Last Name: </STRONG> <INPUT TYPE="text" NAME="surname" SIZE="33"> </P> <P> <STRONG>Your ID Number: </STRONG> <INPUT TYPE="text" NAME="idnumber" SIZE="33"> </P> <P> <INPUT TYPE="submit" VALUE="Fetch Marks"> </P> <P> <INPUT TYPE="hidden" NAME="course" VALUE="cs200"> </P> </FORM> </BODY> </HTML> Note the use of a “hidden parameter” that the user never sees so that forms for different courses can use the same cgi. What Gets Sent to the Server (GET) ``` GET ../ReportMarks.cgi?course=cs200&surname=Daly&idnumber=00000000 HTTP/1.0 Connection: Keep-Alive User-Agent: Mozilla/4.06 (Macintosh; U; PPC, Nav) Host: jcbServer.cs.uwaterloo.ca Accept: image/gif, image/x-xbitmap, image/jpeg, image/pjpeg, image/png, */* Accept-Encoding: gzip Accept-Language: en Accept-Charset: iso-8859-1,*,utf-8 ``` The rules for this stuff are part of the “http protocol.” What comes back ![Mark Report](image) ../ReportMarks.cgi locates the program (a “cgi”) to which the server forwards the form’s data Notice - that the URL from which a web page came always appears in the location bar - that the forms data is encoded in the URL - how that URL appears in what’s sent to the server - why the path to the cgi had better not contain a question mark! 
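The query string in that request can be taken apart with the browser's own URL machinery. A minimal sketch (this is not the course's ReportMarks.cgi, just an illustration using the standard JavaScript `URL` API on the example request above):

```javascript
// The example GET request from the notes, written out as a full URL.
// Everything after the "?" is the encoded form data.
const request = new URL(
  "http://jcbServer.cs.uwaterloo.ca/ReportMarks.cgi" +
  "?course=cs200&surname=Daly&idnumber=00000000"
);

// The path locates the cgi; searchParams holds the name=value pairs.
console.log(request.pathname);                    // "/ReportMarks.cgi"
console.log(request.searchParams.get("course"));  // "cs200"
console.log(request.searchParams.get("surname")); // "Daly"
```

Note that the first "?" is what separates the path from the form data — which is exactly why the path to the cgi had better not contain one.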
POST Actions

The request marks form could have said
```html
<Form ACTION="http://../ReportMarks.cgi" METHOD="POST">
```
In which case the forms data would be transferred a bit differently
```
POST ..../ReportMarks.cgi HTTP/1.0
Connection: Keep-Alive
User-Agent: Mozilla/4.06 (Macintosh; U; PPC, Nav)
Host: 192.168.1.100
Accept: image/gif, image/x-xbitmap, image/jpeg, image/pjpeg, image/png, */*
Accept-Encoding: gzip
Accept-Language: en
Accept-Charset: iso-8859-1,*,utf-8
Content-type: application/x-www-form-urlencoded
CONTENT-LENGTH: 43

course=cs200&surname=Daly&idnumber=00000000
```
The details of the difference between POST and GET are not important to us. But you do need to know that:
- POST is necessary for large quantities of data — say > 256 characters
- POST'ed form data does not appear in the URL and therefore cannot be bookmarked

To retrieve your marks to date in CS 200, please enter your last name (case doesn't matter) and student id in the boxes shown below. Then click on the **Search** button - ordinarily it shouldn't take more than thirty seconds or so for your marks to come back. (Please be patient - I'm only a Mac IIfx!) If you find a discrepancy, please notify the course tutor as soon as possible.
<FORM ACTION="http:../ReportMarks.cgi" METHOD="POST"> <INPUT TYPE="hidden" NAME="course" VALUE="cs200"> <TABLE> <TR> <TD><STRONG>Your Last Name:</STRONG> <TD><INPUT TYPE="text" NAME="surname" SIZE="33"> </TD> </TR> <TR> <TD><STRONG>Your ID Number:</STRONG> <TD><INPUT TYPE="text" NAME="idnumber" SIZE="33"></TD> </TR> <TR> <TD COLSPAN="2"> <STRONG>Which:</STRONG> <INPUT TYPE="checkbox" NAME="which" VALUE="assign"> Assignments <INPUT TYPE="checkbox" NAME="which" VALUE="midterm"> Midterm <INPUT TYPE="checkbox" NAME="which" VALUE="final"> Final <INPUT TYPE="checkbox" NAME="which" VALUE="course"> Course Mark </TD> </TR> <TR> <TD ALIGN=left><INPUT TYPE="submit" VALUE="Fetch Marks"></TD> </TR> </TABLE> </FORM> </CENTER> </BODY> </HTML> Styles in HTML Most tags are named styles: `<strong>`, `<em>`, `<p>`, `<ol>`, ... - these are "logical tags" - browsers decide how to render them Although a few are not: `<b>`, `<i>` - these are "physical tags" - browsers have no choice Generally speaking - the browser, not the web author, controls layout - browsers sometimes behave differently But appearance is important! Controlling HTML Layout The response - abusing tables and frames - new tags and attributes - proprietary tags and attributes - postscript and pdf (Adobe’s “Portable Document Format”) via “plugins” Wouldn’t it be nice ... - if HTML had USER-DEFINED [named] styles? - like those we find in word processors? - that authors could use to control layout? Cascading Style Sheets Style definitions may appear at the beginning of a web page. There are five style definitions in this example: • the first specifies the default font to be used • the second and third attach default “properties” (ie attributes) to <SPAN> and <P> • the last two define property bundles that can be applied to <P>, by using classes These are called “selectors” This paragraph will be centered. This paragraph will be centered and bold. This paragraph will be centered and italic. This word will be green. Or ... 
style definitions may be EXTERNAL to a web page
- so that multiple web pages can use them
- much more important for HTML than for word processing

An example style sheet file named simple.css
An example html file that uses simple.css

Notice that all four paragraphs are centered and in Myriad Pro
- The contents of `<P class=A>...</P>` and `<P class=B>...</P>` “inherit” the properties of `<P>`, so they don’t have to explicitly set them
- Inner elements inherit properties from outer elements containing them (when that makes sense) — eg the `<P>...</P>` inherit Myriad Pro from the `<BODY>...</BODY>` in which they appear

A Larger Example – Task B from the Winter 98 Lab Exam

Sample Lab Exam - Task B

This page is best displayed with Internet Explorer 4 or 5; Netscape 4 gets some of the leading wrong...

To use computers effectively it is important to select an appropriate tool for each task; more often than not this involves using several tools to solve a problem, passing data from one tool to the next as you work. This task is an example of such.

The problem. As a teacher, you often want to look at a bar chart (like the example on the right) showing the distribution of marks for a course -- how many people received marks between 70 and 75, how many between 75 and 80, etc.

The solution. You have a FileMaker table containing the final grades for each student, but databases aren’t good at making graphs, and you often want to include such graphs in end-of-term reports for your department chair. So you need to transfer data between FileMaker, Excel, and Microsoft Word as you work. Because you and your colleagues do this often, you want to work out a convenient and efficient way of doing so.

Password-protected demonstration solutions may be found in the Demo Solutions subfolder of the 200exams disk on jcbServer. Copy them to your personal subfolder of 200exams before running them.

Demo Course Grades. When the Do It!
button on the Choose layout is clicked, grades for the course currently selected by the Which Course popup are written into a file called Data in the same folder.

Demo Make Histograms. Click the Get Data button in any of the worksheets to open the Data file, copy its contents into the Grades worksheet, and then close it.

Note the period that begins each style name
• these are called classes
• such styles can be used in any tag (if they make sense...)

<DIV> is (effectively) a replacement for <P>
• with no default properties
• (some properties of built-in tags like <P> can’t be over-ridden) [still?]
• <div> and <p> are examples of “block level tags” or elements (ie. they cause a line break)

<SPAN> is an “inline level tag” (aka an “inline element”)
• with no built-in properties
• does not cause a line break
• <strong>, <em>, <img> and <a> are other examples

“STYLE” is used to set CSS attributes for a particular tag only
• eg. <div style="color:green">blah, blah, blah...</div>
• eg. <p style="color:green">good stuff</p>
• eg. <li style="color:green">clear desk</li>
• eg. <span style="color:green">Good work!</span>

Block level tags generate automatic line breaks before & after. Inline tags do not; one can follow another on the same line.

--- LabExam.css

Another Example, from “Eric Meyer on CSS”

much room to think

Sometimes I find myself wondering about the stuff we eat. Take mushrooms, for example. They’re kind of rubbery, like squid, which many of us won’t eat because it’s "nasty." They’re shaped like little umbrellas, but you probably wouldn’t want one in your piña colada. Most varieties are grown with liberal quantities of what I like to term "managerial output." And they’re fungal, which our mothers always warned would kill us. Heck, some mushrooms actually will kill you. So what is it about mushrooms that so many of us like? I suppose it’s the taste which appeals to some of us. I understand that grasshoppers can be very tasty, too, so who am I to judge? People will eat all manner of weird stuff. But fungus? I don’t know.
Maybe the morel is that those of us who eat mushrooms aren't really "us," if you see what I mean. It could be that the mushroom-eaters of the world are really some bizarre fungus-based aliens who are secretly planning to take over the world! Just like on The X-Files! If that's the case, though, I wonder when I'll get my instructions from the mothership.

a fun guy

Good evening, my name is Rootsy, and I'll be your host for this evening. It's been said that I should just stop with this whole Web thing, mostly by Grandma, but she still thinks computers are run by evil pixies so I try to keep the source in mind. I accept that some of the humor on this page may be in spore taste, but we're all adults here and it made me laugh to write this stuff down so maybe you'll like it as well.

http://jcbserver.uwaterloo.ca/cs200/ericMeyer/ericMeyer.html

```css
body {
  background: rgb(153,102,51);
  color: black;
  font-family: 'Myriad Pro', sans-serif;
}
div {
  background: rgb(251,233,198);
  color: rgb(122,74,26);
  margin: 0 2em;
}
p {
  margin: 0;
  padding: 0.25em 1em 0.25em 1em;
  text-indent: 1.25em;
  line-height: 120%;
}
h1, h2 {
  margin: 0;
  padding: 0.25em 0.5em 0.25em 0.5em;
}
div#p1 { margin: 0 2em 0 10em; }
div#p2 { margin: 0 10em 0 2em; }
div#menu {
  float: right;
  width: 5em;
  padding: 0;
  margin: 0 -1.5em 0.25em 0.5em;
  border: 3px solid rgb(50,50,175);
  background: white;
}
div#menu a {
  display: block;
  text-align: center;
  padding: 0.2em 0.5em 0.2em 0.5em;
}
div#footer {
  margin: 0 11em 0 2em;
  padding: 0.2em 0 0.5em 0;
  text-align: center;
  font-style: italic;
  color: rgb(128,128,128);
}
```

Other Selectors

What we've seen
- `p { • • • }` sets attributes for all `<p>` tags
- `p.name` sets attributes for all `<p class="name" • • • >` tags
- `p#name` sets attributes for the only `<p id="name" • • • >` tag

There are a variety of useful selectors we haven’t discussed, including
- `h1, h2, h3 { • • • }` grouped selectors (same attributes for multiple tags)
- `p[title] { • • • }` attributes for all tags `<p
title= • • • >` (i.e., `<p>` tags having a title attribute)
- `p[title="important"] { • • • }` attributes for all tags `<p title="important" • • • >`
- `h1 > strong { • • • }` attributes for `<strong>` appearing “immediately” within an `<h1>` eg `<h1>This is <strong>very</strong> important.</h1>`
- `h1 + p { • • • }` attributes for any `<p>` that immediately follows an `<h1>` eg `<h1>Section A</h1><p>For this para.</p><p>But not this one.</p>`
- `tr > td:first-child` attributes for `<td>` when it is the first child of a `<tr>` eg `<tr> <td>matches this</td> <td>but not this</td> </tr>`

and various combinations of these. Effectively, they do “pattern matching.” See Chapter 2 of “CSS The Definitive Guide” if you’re curious. (You’re not expected to memorize these for CS 200 — this is useful “read and recall [someday] info”)

CS 200 Winter 2020

Notice that attributes can come from more than one style definition
- eg from `div.wrap {...}` and `div#p1 {...}`, applied to `<div class="wrap" id="p1">`
- when that happens, they’re merged
- if there’s a conflict, “the more specific wins” eg p1 over wrap because only one tag can have id=p1

Classes vs. IDs

Classes
• can be used any number of times
• defined like: .red { color: red; }
• the . indicates that it's a class
• this can now be applied to other styles
• eg. <p class="red"> will have all the attributes from <p> and all the attributes from .red
• If you want a class that can ONLY be applied to the <p> tag (and not to any other tags), define it like: p.red { color: red; }
• You can still use <p class="red"> with the same results as before, but <div class="red"> has no meaning

IDs
• should only be used once (for use with JavaScript – you don't need to know why)
• defined like: #red { color: red; }
• this can now be applied to other styles
• eg. <p id="red"> will have all the attributes from <p> and all the attributes from #red

So what should you use?
• there are reasons to use both classes and IDs Discussion The “class” attribute can be used by many tags, which share its meaning The “id” attribute is supposed to uniquely identify a tag (ie only be used once) Both specify a style to use on their content Style sheets can come from • the web page author • the user, who can often specify a style sheet in the browser’s preferences • the browser (ie. built-in) — And this is their order of priority (from high to low) when a conflict arises for a particular attribute Browsers are finicky about CSS syntax • if your CSS seems to have no effect, check for syntax errors / use a CSS validator The detailed rules for resolving conflicts are (considerably) more complicated; • see Section 8.1.9 of “HTML & XHTML - The Definitive Guide,” 5/e, for a fuller explanation • or Chapter 3, “Structure and the Cascade,” in Cascading Style Sheets - The Definitive Guide for details — but you shouldn’t need to Note: there’s a lot more to CSS than we’ve had time to talk about For More (Optional) Information on CSS Western Civilization’s • “Complete CSS Guide” at • “Properties Introduction” (the Complete CSS Guide’s section on CSS attributes) at HTML & XHTML — The Definitive Guide, 5/e, by Musciano & Kennedy, Chapter 9, “Cascading Style Sheets” • on reserve in the library (an earlier edition, without “XHTML” in the title • or at http://safari.ora.com > MY BOOKSHELF – Book 26 from a University computer Chapter 8 “Cascading Style Sheets” & the appendix “Cascading Style Sheet Properties Quick Reference” • (the 6th edition was published in October of 2006) Cascading Style Sheets — The Definitive Guide, by Eric Meyer • or the 2/e at http://www.safari.ora.com > MY BOOKSHELF – Book 9 from a University computer Typetester: a neat web page (w/Javascript+CSS) for comparing various fonts side-by-side in your browser • typetester.maratz.com XyleScope: a neat tool for dissecting the CSS in pages you encounter on the web ($20, Mac only) • 
www.culturedCode.com/xyle/ The “Lorem ipsum... generator” at http://www.lipsum.com/ Common Sources of Confusion in the Lab You can use Firefox, Chrome or Safari’s File > Open File... menu item • HOWEVER File > Open File... is not as fussy about paths as most web servers – it won’t complain about spaces in URLs – it won’t complain about trailing blanks in file names • So you MUST also ask student.cs.uwaterloo.ca web server to access your web pages to be certain that the URLs in them work www.student.cs.uwaterloo.ca/YourUsername/root.html Browsers “cache” (most) pages you have browsed on your local disk • When you’ve changed the contents of a page and saved it to your network disk from your text editor, option-click the Reload button or to ensure your browser REALLY gets the new version And a word of advice ... use • closing tags (eg </TD>, </P>) • indentation • blank lines to structure your HTML — it makes debugging much easier Finally... Remember that `<head>` does NOT mean `<header>` - `<head> ... </head>` - enclose information ABOUT the web page - that is not displayed IN the page - `<style> ... </style>` illustrates this better than `<title>...</title>` Most browsers have a “View > Source...” menu item that will show you the HTML source for the page currently being displayed Warning - CSS is still not perfectly implemented by contemporary browsers, although the situation is much better now than it was a few years ago - So use the latest release of whatever browser you like when experimenting with Cascading Style Sheets - Also, StyleMaster has lots of info about browser quirks & bugs <?xml version="1.0" encoding="UTF-8"?> <?xml-stylesheet type="text/css" href="ToyTable.css"?> <doc> <xmlTable> <row> <cell>X</cell> <cell>–</cell> <cell>O</cell> </row> <row> <cell>–</cell> <cell>–</cell> <cell>O</cell> </row> <row> <cell>O</cell> <cell>–</cell> <cell>X</cell> </row> </xmlTable> <example> This text is centered WITHIN the space occupied by the <example> ... 
</example> block. </example> </doc> ToyTable.css doc { font-family: "Comic Sans MS", sans-serif; font-size: 20pt; } xmlTable { display: table; color: black; background-color: yellow; margin-top: 1em; margin-left: auto; margin-right: auto; } row { display: table-row; } cell { display: table-cell; padding-right: 0.2em; padding-left: 0.2em; padding-top: 0.2em; padding-bottom: 0.2em; width: 1.5em; text-align: center; } example { display: block; padding: 0.5em; background-color: cyan; margin-left: 0%; margin-right: 50%; text-align: center; } Use LR margins of auto to center a block-level element within its parent. Use text-align: center to center text within an element. <?xml version="1.0" encoding="UTF-8"?> <?xml-stylesheet type="text/css" href="ToyTable.css"?> <doc> <xmlTable> <row> <cell>X</cell> <cell>–</cell> <cell>O</cell> </row> <row> <cell>–</cell> <cell>–</cell> <cell>O</cell> </row> <row> <cell>O</cell> <cell>–</cell> <cell>X</cell> </row> </xmlTable> <example> This text is centered WITHIN the space occupied by the <example> ... </example> block. </example> </doc> XML—w/o the <?xml-stylesheet ... ?> <?xml version="1.0" encoding="UTF-8"?> <doc> <xmlTable> <row> <cell>X</cell> <cell>--</cell> <cell>O</cell> </row> <row> <cell>--</cell> <cell>--</cell> <cell>O</cell> </row> <row> <cell>O</cell> <cell>--</cell> <cell>X</cell> </row> </xmlTable> <example> This text is centered WITHIN the space occupied by the <example> ... </example> block. 
</example> </doc> CSS + JavaScript To Change The Default Font Example: changing the font locally - local: http://127.0.0.1/cs200/switchingFontsWithJavascript/JensHomePage.html The Javascript for this Example <HTML> <HEAD> <TITLE>Jen's Fake Home Page</TITLE> </HEAD> <BODY id="all" bgcolor="#FFFFFF" style="font-family: Arial, sans-serif;"> • • • <h2>Choose A Font</h2> <p> <form> <input type="radio" name="chooseFont" checked onclick="elt = document.getElementById('all'); elt.style.fontFamily='Arial, sans-serif';"> Arial <input type="radio" name="chooseFont" onclick="elt = document.getElementById('all'); elt.style.fontFamily='Palatino, serif';"> Palatino <input type="radio" name="chooseFont" onclick="elt = document.getElementById('all'); elt.style.fontFamily='Comic Sans MS, cursive';"> Comic Sans MS </form> </p> </BODY> </HTML> Notice that the CSS attribute "font-family" becomes "fontFamily" in Javascript because "-" is illegal in a variable name. Think about Excel scripting as you read this JavaScript. CSS + JavaScript To Switch Style Sheets Example: switching style sheets dynamically - local: http://127.0.0.1/cs200/switchingStyleSheets/JensHomePage.html See JensHomePage.html and the javascript source file global.js in .../cs200/switchingStyeSheets/ for details if you’re curious. (Note that the color of the “Warning!” paragraph is chosen randomly.) CSS + JavaScript Adding a little JavaScript to CSS (of the sort covered in CS 100) • dynamically changing CSS attributes • collapsing menus • absolute positioning of layers (remember Photoshop & Illustrator?) 
• and much, much more

Example:
• local: http://127.0.0.1/dynamicMenus.html
• remote: http://jcbServer.cs.uwaterloo.ca/cs200/ericMeyer/dynamicMenus.html

The CSS for this Example
```html
<html>
<head>
<title>An Illustration of Dynamic Fonts &amp; Menus</title>
<style type="text/css">
<!--
body { font-family: sans-serif; }
#menuOne { display: none; }
#menuTwo { display: none; }
#title {
  font-size: 15pt;
  font-weight: bold;
  margin-bottom: 15pt;
  text-align: center;
}
#content {
  position: absolute;
  left: 2.5in;
  top: 0.5in;
  background-color: #FFFF00;
  padding: 25px;
}
-->
</style>
</head>
</html>
```
The Content for this Example
<body>
<div id="title">An Illustration of Dynamic Fonts & Menus</div>
<p><a href="#" onclick="changeFontSize(+3);">Larger Title</a></p>
<p><a href="#" onclick="changeFontSize(-3);">Smaller Title</a></p>
<h3><a href="#" onclick="toggleMenu('menuOne');">UofW (+/-)</a></h3>
<ul id="menuOne">
<li><a href="#" onclick="setContentTo('http://www.cs.uwaterloo.ca/');">CS</a></li>
<li><a href="#" onclick="setContentTo('http://jcbServer.cs.uwaterloo.ca/cs200/');">CS 200</a></li>
<li><a href="#" onclick="setContentTo('http://jcbServer.cs.uwaterloo.ca/cs230/');">CS 230</a></li>
<li><a href="#" onclick="setContentTo('http://jcbServer.cs.uwaterloo.ca/cs436/');">CS 436</a></li>
<li><a href="#" onclick="setContentTo('http://jcbServer.cs.uwaterloo.ca/');">jcbServer</a></li>
<li><a href="#" onclick="setContentTo('http://www.math.uwaterloo.ca/');">Math</a></li>
<li><a href="#" onclick="setContentTo('http://oscar.cs.uwaterloo.ca/');">Oscar</a></li>
<li><a href="#" onclick="setContentTo('http://www.uwaterloo.ca/');">UofW</a></li>
</ul>
<h3><a href="#" onclick="toggleMenu('menuTwo');">Other sites (+/-)</a></h3>
<ul id="menuTwo">
<li><a href="#" onclick="setContentTo('http://daringfireball.net/');">Daring Fireball</a></li>
<li><a href="#" onclick="setContentTo('http://www.macgeekery.com/');">Mac Geekery</a></li>
<li><a href="#"
onclick="setContentTo('http://arstechnica.com/index.ars');">Ars Technica</a></li>
<li><a href="#" onclick="setContentTo('http://www.slashdot.com/');">Slashdot</a></li>
<li><a href="#" onclick="setContentTo('http://www.apple.com/');">Apple</a></li>
<li><a href="#" onclick="setContentTo('http://www.microsoft.com/');">Microsoft</a></li>
<li><a href="#" onclick="setContentTo('http://www.mozilla.org/');">Mozilla</a></li>
</ul>
<iframe id="content" src="" width="70%" height="100%">
Oops ... this browser doesn't implement the IFRAME tag.
</iframe>
</body>
</html>

Adapted from Section 6.13 of “The CSS Cookbook,” by Christopher Schmitt, O’Reilly & Associates. See also the documentation for CSS attributes in the “CSS Pocket Reference.”

<script language="JavaScript">
var currentFontSize = 15;

function changeFontSize( delta ) {
  var titleElement = document.getElementById("title").style;
  currentFontSize = currentFontSize + delta;
  titleElement.fontSize = currentFontSize + "pt";
}

function toggleMenu( menuClicked ) {
  processOneMenu( "menuOne", menuClicked );
  processOneMenu( "menuTwo", menuClicked );
}

function processOneMenu( menuToCheck, menuClicked ) {
  var menuClickedStyles = document.getElementById(menuClicked).style;
  var menuToCheckStyles = document.getElementById(menuToCheck).style;
  if( menuClicked == menuToCheck ) {
    if( menuClickedStyles.display == "block" ) {
      menuClickedStyles.display = "none";
    } else {
      menuClickedStyles.display = "block";
    }
  } else {
    menuToCheckStyles.display = "none";
  }
}

function setContentTo( theURL ) {
  var theDiv = document.getElementById("content");
  theDiv.src = theURL;
}
</script>

An example of “exotic selectors” in use

<html>
<head>
<style>
<!--
td#title { color: black; }
tr > td:first-child { color: blue; text-align: center; }
tr > td + td + td { color: red; }
tr > td + td + td + td { color: black; }
-->
</style>
</head>
<body>
<table align='center' border=0 cellpadding=0 cellspacing=0 width=707>
<tr>
<td colspan=5 align=center id="title">
<span
style="font-size: 20pt; font-weight: bold;">
ASCII Character Codes
</span>
<br>
(http://jcbServer.cs.uwaterloo.ca/cs125/asciiCodes2.html)
</td>
</tr>
<tr>
<td colspan=5 align=center id="title">
<span style="font-size: 14pt; font-weight: normal;">
(http://jcbServer.cs.uwaterloo.ca/cs125/asciiCodes2.html)
</span>
</td>
</tr>
<tr>
<td>Decimal Value</td>
<td><a name=tableBits>Bits</a></td>
<td>Char</td>
<td>Abbr</td>
<td>Meaning</td>
</tr>
<tr> <td>32</td> <td>0010 0000</td> </tr>
<tr> <td>33</td> <td>0010 0001</td> </tr>
<tr> <td>34</td> <td>0010 0010</td> </tr>
<tr> <td>35</td> <td>0010 0011</td> </tr>
<tr> <td>36</td> <td>0010 0100</td> </tr>
<tr> <td>37</td> <td>0010 0101</td> </tr>
</table>
</body>
</html>
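Finally, a quick check of the "%20 instead of a space" rule from the URL discussion earlier, using JavaScript's built-in encoding functions (a sketch; the folder name is just the "2nd Year" example from the notes):

```javascript
// Spaces are illegal in URLs, so a path segment like "2nd Year" must be
// written with %20. encodeURIComponent applies this percent-encoding;
// decodeURIComponent reverses it.
const folder = "2nd Year";
const encoded = encodeURIComponent(folder);

console.log(encoded);                             // "2nd%20Year"
console.log("/" + encoded + "/cs200/cs200.html"); // "/2nd%20Year/cs200/cs200.html"
console.log(decodeURIComponent(encoded));         // "2nd Year"
```

This is why a bare space (or an invisible trailing blank) in a hand-typed URL breaks it: the server never sees the name you meant.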
User Manual for KRUSADER

Ken's Rather Useless Symbolic Assembly Development Environment for the Replica 1
or is that Reasonably Useful? You decide!

Ken Wessen
ken.wessen@gmail.com

Version 1.3 – December 24, 2007

1 Introduction

KRUSADER is a program written to allow assembly language development on the Replica 1 – an Apple 1 clone designed by Vince Briel[1], and described in the book *Apple 1 Replica Creation: Back to the Garage* by Tom Owad[2]. KRUSADER includes a simple shell and editor, a single-pass symbolic assembler, a disassembler, and a simple interactive debugger, and fits in just under 4K (so it is small enough to fit in the 8K of Replica 1 ROM along with the monitor and Apple BASIC). Although designed for the Replica 1/Apple 1, there is very little system dependent code, and since full source code is provided, KRUSADER can easily be adapted to any other 6502 based system. However, its limitations may mean it is not an appropriate tool in many cases (for example, it has no concept of a file-system and so would not be particularly suitable for use on an Apple II).

KRUSADER handles a fairly standard and expressive syntax for its assembly source code. For users who are unfamiliar with the 6502 instruction set, I recommend the introduction by Andrew John Jacobs at http://www.obelisk.demon.co.uk/6502/.

On a Replica 1, KRUSADER can assemble over 200 lines of code per second, and given its 32K of RAM, the defaults provide space for around 20K of tokenised source code, 7K of generated code, and up to 256 global symbols.

The KRUSADER distribution comprises two source files, one that can assemble and disassemble 6502 code only, and the other that is able to handle the expanded 65C02 instruction set (see section 6). Both versions include a mini-monitor for interactive debugging (see section 7).
In addition, two binaries of each version are supplied – one to be loaded in high RAM at addresses $7100–$7FFF, and the other that belongs in ROM from $F000–$FEFF. Since source is provided, alternative binaries are easy to produce. I use the 6502 simulator by Michal Kowalski[3] to assemble the object code, and test it on the Pom1 simulator[4] and my Replica 1. Although the latest version of KRUSADER supports the 65C02, it contains only 6502 code itself, and this manual will cover the 6502 features first, saving a discussion of the 65C02 enhancements until section 6. Addresses quoted in this manual will be for the high RAM version of the code, with the ROM version values following in brackets, but these values are easily offset for any particular starting address.

[1] http://www.brielcomputers.com
[2] http://www.applefritter.com/replica
[3] http://home.pacbell.net/michal_k/

2 Sample Session

The best way to give a quick overview of the system and its operation is to work through a couple of simple examples. First load the program, and once loaded run it by typing 7100R (F000R). At this point you will be presented with a brief message showing the version of the assembler you are running, followed by the shell prompt ?. Type N to enter a new program, and enter the code shown below.

The column layout is important, with the source organised in fields. After the current line number is printed by the editor, the next 6 characters are the label field, then after a space there is the 3 character mnemonic field, then after a space a 14 character arguments field, and finally a comments field of maximum 10 characters. Hitting tab or space will automatically progress you to the next field, and to finish entering code, hit the escape key (hit return first though, or you will lose the current line).
If you make an error typing a line, hitting backspace will return you to the start of the line (there is no true backspace on the Apple 1, and I have chosen not to implement the underscore-as-backspace hack used in the Apple 1 monitor and Apple 1 BASIC since it confuses the syntactically important column layout). If you only notice an error after hitting return and need to change a line, type E nnn, where nnn is the line number in question (it is not necessary to enter any leading zeroes). If you missed a line out altogether, type I nnn to insert at line nnn.

```
? N
000        LDA #'A'
001 LOOP   JSR $FFEF
002        CLC
003        ADC #$1
004        CMP #'Z'+1
005        BNE LOOP
006        RTS
007<esc>
```

When you have finished entering the source, type L to list the code, and then A to assemble it. You should see the assembler output 0300-030C, indicating the target memory locations used by the assembled code. Any errors detected in the code will be displayed at this point, and can be fixed using either the I and E commands described above, or the X nnn mmm command for deleting a range of lines (the second argument mmm is optional). Once the code has been successfully assembled, run it by typing R $300. The program will run and output the string

ABCDEFGHIJKLMNOPQRSTUVWXYZ

and then return to the shell (because of the final RTS). In order to illustrate some more advanced features of the assembler, a second, more complicated example is the following.

[4] The original is at http://www.chez.com/apple1/Apple1project/, and a version that fixes various bugs is at http://school.anhb.uwa.edu.au/personalpages/kwessen/apple1. This version also adds the capability to emulate a 65C02-based Replica 1.
```
000 ECHO   .= $FFEF
001 START  .= 'A'
002 END    .= 'Z'
003 STEP   .= $30
004
005 SETUP  .M $280
006        LDA #$1
007        STA STEP
008        LDA #START
009        RTS
00A
00B FWD    .M $300
00C .LOOP  JSR ECHO
00D        CLC
00E        ADC STEP
00F        CMP #END
010        BMI .LOOP
011        RTS
012
013 BACK   .M $320
014 .LOOP  JSR ECHO
015        SEC
016        SBC STEP
017        CMP #START
018        BPL .LOOP
019        RTS
01A
01B MAIN   .M $340
01C        JSR SETUP
01D        JSR FWD
01E        JSR BACK
01F        RTS
020<esc>
```

Again, type L to list the code. Typing A will assemble the code, this time made up of 4 modules, each with its own starting address. The assembler will output the following upon successful assembly:

```
? A
0300-02FF
SETUP  005 0280-0286
FWD    00B 0300-030A
BACK   013 0320-032A
MAIN   01B 0340-0349
?
```

This output shows the first source line number and the memory locations used by each module in the source code (the first line can be ignored because no code is generated prior to the declaration of the SETUP module). Hitting M will show the memory taken up by the source code, in this case 2000-20E5 (2300-23E5), and the value of global symbols can be queried by using the V command – e.g. typing V MAIN will get the response 0340. This is important, because it is the entry point for this program, and running it by typing either R $340 or R MAIN will result in the output:

ABCDEFGHIJKLMNOPQRSTUVWXYZYXWVUTSRQPONMLKJIHGFEDCBA

Some other relevant features of this second example are the use of blank lines for layout, and symbols to represent both constants (e.g. START .= 'A') and memory locations (e.g. STEP .= $30). Also note the use of the local label .LOOP in both the FWD and BACK modules. Local labels are indicated by an initial '.', and have module level scope only. The .= and .M commands are two directives recognised by the assembler for defining symbols and modules respectively. It is not necessary to give a memory location argument to the .M directive, and indeed only in very special circumstances would you wish to do so.
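As a further illustrative sketch (the module names FIRST and NEXT are invented for this example), a .M directive given without an address simply continues on from wherever the previous module finished:

```
000 FIRST  .M $400
001        LDA #$1
002        RTS
003 NEXT   .M
004        LDA #$2
005        RTS
```

Given the instruction sizes involved, FIRST occupies $0400-$0402, so NEXT would then be placed from $0403 onwards.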
3 Shell Commands The previous section introduced most of the available shell commands, in the context of a sample interactive session. Shell commands are entered in response to the ? prompt, and are all single key commands, with between zero and two arguments. If the shell cannot process an entered command, ERR: SYN is output to indicate a syntax error. Five of the thirteen shell commands are for source editing, and the others are for assembling, running, disassembling, querying symbols and source memory, using the monitor, and recovery. Table 1 gives a summary of all commands and their syntax. 4 Source Code As described in section 2, source code in KRUSADER requires strict adherence to a specific column-based format. The editor both assists and enforces source code entry to match this format by auto-forwarding on a space and ignoring invalid keypresses. In addition, any non-blank line must have either a valid mnemonic or directive, or start with the comment character (;). The sections below describe the source format and the legal entries for each field in detail. 4.1 Labels Labels are up to 6 alphanumeric characters in length, and may be either global or local in scope. Local labels are defined for the current module only, whereas global labels are accessible anywhere in the program. Up to 256 global labels may be defined, and up to 32 local labels in any one module. Local labels are any labels that begin with a . (i.e. a period). Labels may be used prior to being defined, i.e. as forward references, and up to 85 forward references are allowed. However, forward references are more limited than normal labels since they are always treated as words (i.e. 2 bytes in size), and any expression involving them must also result in a 2 byte value. In particular this means forward references cannot be used with the < and > operators (see section 4.5). 
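As an illustrative sketch of these scoping rules (the labels DEMO, DONE and .SKIP are invented for this example), the following module uses both a forward reference to a global label and a module-scoped local label:

```
000 DEMO   .M $300
001        JMP DONE       FWD REF
002 .SKIP  RTS            LOCAL
003 DONE   JSR $FFEF
004        BNE .SKIP
005        RTS
```

At line 001, DONE is not yet defined, so it is held as a forward reference and treated as a word; writing LDA #<DONE at that point would therefore be rejected, as explained above.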
<table>
<thead>
<tr> <th>Command</th> <th>Arguments</th> <th>Action</th> </tr>
</thead>
<tbody>
<tr> <td>N</td> <td></td> <td>Start a new program. This command will clear any existing source, and prompt for source entry from line 000.</td> </tr>
<tr> <td>I</td> <td>&lt;nnn&gt;</td> <td>Insert code from line nnn, or at the end if no argument. If k lines are inserted, existing lines from nnn are shifted down by k.</td> </tr>
<tr> <td>L</td> <td>&lt;nnn&gt;</td> <td>List the source starting from line nnn, or the beginning if no argument. Press any key to stop the listing.</td> </tr>
<tr> <td>X</td> <td>nnn &lt;mmm&gt;</td> <td>Delete from lines nnn to mmm inclusive. If just one argument, delete line nnn only. Care must be taken since this will change the line number associated with all subsequent source lines. Delete cannot be undone.</td> </tr>
<tr> <td>E</td> <td>nnn</td> <td>Edit line nnn, and insert after. This is equivalent to typing X nnn followed by I nnn, so the existing line is deleted immediately, and as for the X command, it cannot be recovered.</td> </tr>
<tr> <td>M</td> <td></td> <td>Show the memory range used to store the current source code.</td> </tr>
<tr> <td>A</td> <td></td> <td>Assemble the current source code.</td> </tr>
<tr> <td>V</td> <td>$nnnn or LABEL</td> <td>Show the value of the given label or expression.</td> </tr>
<tr> <td>R</td> <td>$nnnn or LABEL</td> <td>Run from address $nnnn. If the program ends with an RTS, control is returned to the shell. Otherwise, re-enter the shell at address $711C($F01C).</td> </tr>
<tr> <td>D</td> <td>&lt;$nnnn or LABEL&gt;</td> <td>Disassemble from address $nnnn, or continue from the last address if no argument. Press any key to stop the disassembly.</td> </tr>
<tr> <td>!</td> <td></td> <td>Send the next line typed as a command to the Apple 1 Monitor.</td> </tr>
<tr> <td>$</td> <td>&lt;c&gt;</td> <td>Drop into the Apple 1 Monitor.
You can re-enter the shell at address $711C($F01C).</td> </tr>
<tr> <td>P</td> <td></td> <td>Panic! This command attempts to recover lost source (usually due to zero page data corruption). If the first line of source starts with a label, then give the first letter of that label as an argument to this command. For more detail, see the section on source tokenisation (8.2).</td> </tr>
</tbody>
</table>

Table 1: KRUSADER shell commands. Note that any shell commands that use labels are dependent on the assembler's global symbol table being intact, specifically the pointer information in zero page locations $E9, $EA and $EB, and the table data itself (see figure 1). Optional arguments are indicated by <...>.

Once any particular module has been assembled, all local labels are cleared and an error is reported if any forward references involving local labels remain unresolved. However, any forward references to global labels that remain unresolved are simply held, and will only cause an error if they are still unresolved once assembly of the entire program has been completed. **KRUSADER** will report an error if a global label is redefined, or a local label is redefined within a module.

### 4.2 Mnemonics

**KRUSADER** recognises the standard 3 character mnemonics for all legal 6502 instructions. These are shown in table 2. Undocumented opcodes are not supported, and the 65C02 support is discussed in section 6. The editor will not accept any line with an invalid entry in the mnemonic field.

Note that when the 6502 executes a **BRK** instruction, the return address pushed onto the stack is PC+2, and so **KRUSADER** assembles the **BRK** opcode as two $00 bytes so that an **RTI** will return to the next instruction. However, the disassembler will show these as consecutive **BRK** instructions.
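As a small sketch of this behaviour (assuming the default target location of $0300 used in section 2), a single BRK in the source generates two bytes, and the disassembler then shows them as two BRK instructions:

```
? N
000        BRK
001        LDA #$1
002<esc>
? A
0300-0303
? D $300
0300  00     BRK
0301  00     BRK
0302  A9 01  LDA #$01
...
```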
<table>
<thead>
<tr> <th>Operation</th> <th>Mnemonics</th> </tr>
</thead>
<tbody>
<tr> <td>Load/Store</td> <td>LDA, LDX, LDY, STA, STX, STY</td> </tr>
<tr> <td>Transfer</td> <td>TAX, TXA, TAY, TYA, TSX, TXS</td> </tr>
<tr> <td>Stack</td> <td>PHA, PLA, PHP, PLP</td> </tr>
<tr> <td>Logical</td> <td>AND, EOR, ORA, BIT</td> </tr>
<tr> <td>Arithmetic</td> <td>ADC, SBC, CMP, CPX, CPY</td> </tr>
<tr> <td>Increment/Decrement</td> <td>INC, INX, INY, DEC, DEX, DEY</td> </tr>
<tr> <td>Shift</td> <td>ASL, LSR, ROL, ROR</td> </tr>
<tr> <td>Jump/Call</td> <td>JMP, JSR, RTS</td> </tr>
<tr> <td>Branch</td> <td>BCC, BCS, BEQ, BNE, BMI, BPL, BVC, BVS</td> </tr>
<tr> <td>Status Flag</td> <td>CLC, CLD, CLI, CLV, SEC, SED, SEI</td> </tr>
<tr> <td>Other</td> <td>BRK, NOP, RTI</td> </tr>
</tbody>
</table>

Table 2: Recognised mnemonics.

### 4.3 Arguments

Table 3 shows the argument format for each of the 6502's addressing modes. In addition, $nnnn can always be replaced by a label or expression, and similarly $nn, so long as the result is a single byte. (Expressions are introduced in section 4.5 below.) A single byte may also be represented using 'c' for a given printable character when immediate mode is being used.

Whenever a word sized argument actually has a high byte of zero and the corresponding byte size addressing mode is legal, the byte size mode will be used. In addition, some mnemonics support the absolute,Y addressing mode but not the zero page,Y mode. In these cases, a byte sized argument will be increased to word size in order to make the instruction legal. Constants are always hexadecimal.
<table>
<thead>
<tr> <th>Addressing Mode</th> <th>Format</th> <th>Addressing Mode</th> <th>Format</th> </tr>
</thead>
<tbody>
<tr> <td>Implicit</td> <td></td> <td>Absolute</td> <td>$nnnn</td> </tr>
<tr> <td>Accumulator</td> <td></td> <td>Absolute,X</td> <td>$nnnn,X</td> </tr>
<tr> <td>Immediate</td> <td>#$nn or #'c'</td> <td>Absolute,Y</td> <td>$nnnn,Y</td> </tr>
<tr> <td>Zero Page</td> <td>$nn</td> <td>Indirect</td> <td>($nnnn)</td> </tr>
<tr> <td>Zero Page,X</td> <td>$nn,X</td> <td>Indexed Indirect</td> <td>($nn,X)</td> </tr>
<tr> <td>Zero Page,Y</td> <td>$nn,Y</td> <td>Indirect Indexed</td> <td>($nn),Y</td> </tr>
<tr> <td>Relative</td> <td>*+/-nn</td> <td></td> <td></td> </tr>
</tbody>
</table>

Table 3: Source code syntax for the 13 addressing modes of the 6502.

### 4.4 Comments

There are two ways to include comments in KRUSADER source. Full line comments may be entered by typing a ';' character as the first character in the line, followed by a space[5]. Then all line space from the mnemonic field onwards can be used for comment text. Alternatively, the remainder of any line after the end of the argument field is also reserved for comments, and in this case, no special character is required to precede their entry. Comment entry is the only place where spaces are treated literally, and examples of both kinds of comments are shown in the code snippet below:

```assembly
003 ; HERE IS A LONG COMMENT
004        CMP #'Z'+1     HERE ARE
005        BNE LOOP       SHORT ONES
```

### 4.5 Expressions

KRUSADER allows the use of 4 operators in a mnemonic's argument: +, -, < and > for plus, minus, low byte and high byte respectively. The plus and minus operators take a constant signed byte argument only, and unlike other places where constants are employed, the argument requires no preceding $. The high and low byte operators are used to extract the relevant single byte from a word sized constant or label, and have lower precedence than + and -, and so are applied last of all when evaluating the expression.
For example, if we define the symbols BYTE .= $12 and WORD .= $3456, then the following expressions are evaluated as listed below:

- BYTE+1 = $13
- BYTE+80 = $FF92
- <BYTE+80 = $92
- WORD+1 = $3457
- WORD-1 = $3455
- WORD+FF = $3455
- >WORD = $34
- <WORD = $56
- >WORD+10 = $34
- <WORD+10 = $66

As described in section 4.1, forward references can be used in expressions involving the + and - operators, but not in expressions involving the < and > operators.

[5] Strictly speaking the space is not required, but if it is absent, the source formatting will be upset.

4.6 Directives

In addition to the 6502 mnemonics described in section 4.2, KRUSADER supports a number of directives for managing symbolic constants, program structure and data. Directives are entered in the mnemonic field, and always consist of a period followed by a single letter. Each of the available directives is described in table 4 below.

<table>
<thead>
<tr> <th>Directive</th> <th>Action</th> </tr>
</thead>
<tbody>
<tr> <td>LABEL .= $nnnn</td> <td>Define a named constant. The label must be global in scope, and redefinitions are ignored without error. Expressions or a quoted character are allowed.</td> </tr>
<tr> <td>LABEL .M &lt;$nnnn&gt;</td> <td>Define a new module, optionally at the specified address, or else just continuing on from the previous module. The label must be global in scope, and redefinitions are ignored without error. Expressions are not allowed.</td> </tr>
<tr> <td>&lt;LABEL&gt; .B $nn</td> <td>Store the byte-sized value $nn at the current PC. Expressions or a quoted character are allowed, but not forward references.</td> </tr>
<tr> <td>&lt;LABEL&gt; .W $nnnn</td> <td>Store the word-sized value $nnnn at the current PC in little-endian byte order.
Expressions are allowed, but not forward references.</td> </tr>
<tr> <td>&lt;LABEL&gt; .S 'cc...c'</td> <td>Store the string literal at the current PC. The string must be 13 characters or less, and may not contain spaces.</td> </tr>
</tbody>
</table>

Table 4: Directives supported by KRUSADER. Optional fields are indicated by <...>.

5 Errors

Error reporting in KRUSADER is necessarily limited by its size constraints, but nevertheless it attempts to capture as many errors and ambiguities as possible, and report them in a meaningful way. Errors can arise in response to a shell command or as a result of an assembly. When appropriate, the offending line or symbol will be displayed.

<table>
<thead>
<tr> <th>Error</th> <th>Meaning</th> </tr>
</thead>
<tbody>
<tr> <td>ERR: SYN</td> <td>Syntax error in either a shell command or a source code line.</td> </tr>
<tr> <td>ERR: MNE</td> <td>An illegal mnemonic code was encountered.</td> </tr>
<tr> <td>ERR: ADD</td> <td>An illegal addressing mode was encountered.</td> </tr>
<tr> <td>ERR: SYM</td> <td>A needed symbol was not found.</td> </tr>
<tr> <td>ERR: OVF</td> <td>Too many symbols have led to a symbol table overflow.</td> </tr>
</tbody>
</table>

Table 5: KRUSADER error messages.

There are many reasons why an "ERR: ADD" may occur, especially if the offending line involves symbols. For this reason it can be helpful to query the symbols involved using the V command (see table 1). If the symbol is indeed the cause of the addressing mode error, the V command will report a more useful error, specifically "ERR: SYM" if the symbol is undefined, or "ERR: OVF" if the symbol tables are full.

6 65C02 Support

With version 1.2 of KRUSADER, optional support for the 65C02 processor has been included.
The 65C02 is an enhanced version of the basic 6502 chip, offering some extra operations and addressing modes, and fixing a few problems with the original design. Although these enhancements are all useful, essentially the changes are a case of "too little, too late", and frequently programmers choose to stick to pure 6502 code for portability reasons. In addition, 65C02s from various manufacturers have slightly different command sets, thus adding to the confusion. However, since the 65C02 is the CPU in nearly all Replica 1 computers, it makes good sense for KRUSADER to support this chip, and this section describes this support. Nevertheless, no 65C02 specific operations are used in the KRUSADER code itself.

6.1 Additional Mnemonics

The ten 65C02 instructions supported by KRUSADER are listed in table 6. Each of these is valid on all versions of the 65C02, and also on the 65C816. No other instructions are supported – specifically, the single bit instructions BBR, BBS, RMB and SMB found on the Rockwell and WDC versions of the 65C02, and the STP and WAI instructions found on the WDC 65C02 only, are not recognised.

<table>
<thead>
<tr> <th>Operation</th> <th>Mnemonics</th> </tr>
</thead>
<tbody>
<tr> <td>Load/Store</td> <td>STZ</td> </tr>
<tr> <td>Stack</td> <td>PHX, PLX, PHY, PLY</td> </tr>
<tr> <td>Logical</td> <td>TSB, TRB</td> </tr>
<tr> <td>Increment/Decrement</td> <td>INA, DEA</td> </tr>
<tr> <td>Branch</td> <td>BRA</td> </tr>
</tbody>
</table>

Table 6: Additional mnemonics supported by the 65C02 version of KRUSADER.

6.2 Additional Addressing Modes

The 65C02 introduced two new addressing modes – zero page indirect and absolute indexed indirect.
The KRUSADER source code syntax for these modes is shown below:

- Zero Page Indirect – ($nn)
- Absolute Indexed Indirect – ($nnnn,X)

7 The Mini-Monitor

The 8K of Replica 1 ROM, with Integer BASIC, the Monitor, and KRUSADER, leaves just enough space for a simple interactive debugger, and this section describes the debugger included with the ROM version of KRUSADER. By making the IRQ/BRK vector at $FFFE,F point to the DEBUG routine at address $FE19 ($FE0A for the 65C02 version), execution of a BRK passes control to the mini-monitor, and the registers and next instruction are displayed as follows:

```
A-41 X-FF Y-07 S-FD P-23 ZC
0306  20 EF FF  JSR $FFEF
-
```

The P register is shown both as a value, and as a string from "NVBDIZC" indicating which flags are set. (In the above example, the zero flag (Z) and the carry flag (C) are set.) The - prompt indicates that the mini-monitor is waiting for a command. Valid commands are shown in table 7, and are used to change the register values, set the next instruction location, or drop into the Apple 1 monitor.
<table>
<thead>
<tr> <th>Command</th> <th>Action</th> </tr>
</thead>
<tbody>
<tr> <td>Ann</td> <td>Put the value $nn into the A register.</td> </tr>
<tr> <td>Xnn</td> <td>Put the value $nn into the X register.</td> </tr>
<tr> <td>Ynn</td> <td>Put the value $nn into the Y register.</td> </tr>
<tr> <td>Snn</td> <td>Put the value $nn into the S register.</td> </tr>
<tr> <td>Pnn</td> <td>Put the value $nn into the P register.</td> </tr>
<tr> <td>Lnn</td> <td>Set the low byte of the PC for the next instruction to be executed to the value $nn.</td> </tr>
<tr> <td>Hnn</td> <td>Set the high byte of the PC for the next instruction to be executed to the value $nn.</td> </tr>
<tr> <td>R</td> <td>Resume execution at the currently displayed instruction.</td> </tr>
<tr> <td>$</td> <td>Enter the Apple 1 monitor.</td> </tr>
<tr> <td>!</td> <td>Enter an Apple 1 monitor command.</td> </tr>
<tr> <td>T</td> <td>Trace execution step by step (6502 version only).</td> </tr>
</tbody>
</table>

Table 7: Mini-monitor commands.

To return to the mini-monitor from the Apple 1 monitor, type FE23R (or FE0AR if you are running the 65C02 version). Another useful address to remember is the disassembly routine at $FAD0 ($FAF8 for the 65C02 version), but bear in mind that using this routine involves changing the stored PC value at locations $F5 and $F6, and this must be restored, using either the monitor or the mini-monitor, if you wish to resume the program being monitored.

7.1 Sample Mini-Monitor Session

This section gives a short example of a mini-monitor session. In order to work through this example, first start the assembler, and enter the following short program that includes a BRK instruction. Assemble it and examine the resulting code.
```
? N
000        LDA #'A'
001        JSR $FFEF
002        SEC
003        BRK
004        JSR $FFEF
005        RTS
006<esc>
? A
0300-030B
? D $300
0300  A9 41     LDA #$41
0302  20 EF FF  JSR $FFEF
0305  38        SEC
0306  00        BRK
0307  00        BRK
0308  20 EF FF  JSR $FFEF
030B  60        RTS
```

When this program is run, it should print A, set the carry flag (C), and then drop into the mini-monitor.

```
? R $300
A
0308  20 EF FF  JSR $FFEF
-
```

Use the A command to change the value in register A from 41 to 42, confirm the change by checking the new register display, and then resume the program with the R command. The program will then print a B, corresponding to the new value in register A, and return to the assembler shell as normal.
```
-A42
0308  20 EF FF  JSR $FFEF
-
```

Now let's enter a new program, this time involving a subroutine call.

```
? N
000        LDA #'A'
001        JSR SUB
002        JSR $FFEF
003        RTS
004
005 SUB    .M
006        BRK
007        LDA #'Z'
008        RTS
009<esc>
```

Assemble it, examine the resulting code, and run. It will break into the mini-monitor on entry to the subroutine SUB.

```
? A
0300-0308
SUB    005 0309-030D
? D $300
0300  A9 41     LDA #$41
0302  20 09 03  JSR $0309
0305  20 EF FF  JSR $FFEF
0308  60        RTS
0309  00        BRK
030A  00        BRK
030B  A9 5A     LDA #$5A
030D  60        RTS
...
? R $300
A-41 X-FF Y-07 S-FB P-24 B
030B  A9 5A     LDA #$5A
-
```

When in the mini-monitor, unroll the stack by adjusting the value in S, and set the PC to $0305.

```
-SFD
A-41 X-FF Y-07 S-FD P-24 B
030B  A9 5A     LDA #$5A
-L05
A-41 X-FF Y-07 S-FD P-20 B
0305  20 EF FF  JSR $FFEF
-
```

Now resume, and A will be printed rather than Z since lines 007 and 008 were never executed. Since we also adjusted the stack pointer, the RTS on line 003 will return control to the assembler.

```
-R
A
?
```

7.2 Tracing Assembled Code

The 6502 version of KRUSADER left just enough space free in the Replica 1 ROM for implementing a single step trace function in the mini-monitor. In the absence of size constraints, this function can be added to the 65C02 version as well, but it would need a little bit of extra code to properly handle the BRA instruction and the absolute indirect addressing mode. To see how the trace function operates, enter the following short program:

```
? N
000        BRK
001        PHP
002        SEC
003        LDX #$0
004        DEX
005        PLP
006        RTS
007<esc>
```

Assemble and run, and it will drop into the monitor right away.

```
? A
0300-0308
```
```
? R $300
A-0D X-01 Y-07 S-FD P-30 B
0302  08  PHP
-
```

Now we can use the T command to step through the code. After one step, the current value of the status register (P = 30) will be pushed onto the stack, and the stack pointer will decrease by 1.

```
-T
A-0D X-01 Y-07 S-FC P-30 B
0303  38  SEC
-
```

As we step through, we can watch the changes in the status flags in response to the execution of each command. Follow the steps below to see the carry, zero and negative flags being set.

```
-T
A-0D X-01 Y-07 S-FC P-31 BC
0304  A2 00  LDX #$00
-T
A-0D X-00 Y-07 S-FC P-33 BZC
0306  CA  DEX
-T
A-0D X-FF Y-07 S-FC P-B1 NBC
0307  28  PLP
-
```

After one last step, the status register will be restored to its earlier value, so the flags will return to off, and the stack pointer increased by 1. Then, as usual, the R command will continue execution of the program, and so the RTS command will return control to the assembler shell.

```
-T
A-0D X-FF Y-07 S-FD P-30 B
0308  60  RTS
-R
?
```

### 7.3 Using the Apple 1 monitor

The mini-monitor includes no commands for examining or altering memory. Rather, this functionality is provided via the standard Apple 1 monitor by using the ! command to submit instructions to the monitor as shown in the following example.

```
A-D2 X-00 Y-01 S-FD P-33 BZC
0002  00  BRK
-! 1000.101F
1000: 00 00 00 00 00 00 00 00
1008: 00 00 00 00 00 00 00 00
1010: 00 00 00 00 00 00 00 00
1018: 00 00 00 00 00 00 00 00
A-D2 X-00 Y-01 S-FD P-33 BZC
0002  00  BRK
-! 1000: 1 2 3 4
1000: 00
A-D2 X-00 Y-01 S-FD P-33 BZC
0002  00  BRK
-! 1000.1007
1000: 01 02 03 04 00 00 00 00
A-D2 X-00 Y-01 S-FD P-33 BZC
0002  00  BRK
-
```

8 Low Level Information

This section presents some low-level information about how KRUSADER works, and is not required for normal use of the assembler. However, there are many situations where this information is quite important for managing source and machine code, and correcting errors.
The most important memory locations are shown in table 8, and discussed in the following sections.

<table>
<thead>
<tr> <th>Address</th> <th>Function</th> <th>Zero Page Dependencies</th> </tr>
</thead>
<tbody>
<tr> <td>$7100($F000)</td> <td>Assembler entry</td> <td>$F8 – High byte of assembled code storage area</td> </tr>
<tr> <td>$711C($F01C)</td> <td>Assembler re-entry (shell)</td> <td>$F9 – High byte of local/global table boundary</td> </tr>
<tr> <td></td> <td></td> <td>$FE,$FF – Address of source code storage area</td> </tr>
<tr> <td></td> <td></td> <td>$E9,$EA – Global symbol table address</td> </tr>
<tr> <td></td> <td></td> <td>$EB – Number of global symbols</td> </tr>
<tr> <td>$7304($F204)</td> <td>Move memory (non-overlapping)</td> <td>$50,$51 – Source location</td> </tr>
<tr> <td></td> <td></td> <td>$52,$53 – Destination</td> </tr>
<tr> <td></td> <td></td> <td>$54,$55 – Bytes to move</td> </tr>
<tr> <td>$FE14*</td> <td>Mini-monitor entry</td> <td>$F0–$F4 – Register storage (S,P,Y,X and A)</td> </tr>
<tr> <td>$FE1E*</td> <td>Mini-monitor re-entry (ROM version only)</td> <td>$F5,$F6 – Address of next instruction</td> </tr>
<tr> <td></td> <td></td> <td>$0F–$11 – Input buffer</td> </tr>
<tr> <td></td> <td></td> <td>$E0–$E8 – Code buffer for trace function</td> </tr>
<tr> <td>$7BCA($FACA)*</td> <td>Disassembler</td> <td>$F5,$F6 – Address to disassemble from</td> </tr>
</tbody>
</table>

Table 8: Important function entry points and related memory locations. *The mini-monitor addresses are $FE03 and $FE16 in the 65C02 version, and the disassembler address is $FAEA.

8.1 Memory Map

Proper operation of the assembler requires a number of things to reside in the machine's memory: the assembler code itself, the program source code, the assembled code, and various symbol tables. The default arrangement for both the high RAM and the ROM versions of KRUSADER is shown in figure 1.
The local symbol and forward reference tables take up a fixed 1K of space, with 256 bytes taken up by the locals and the remainder for the forward references. The global symbol table grows downward to a maximum of 2K (corresponding to 256 symbols). The two most important entry points have both been mentioned already: $7100($F000) for initial program entry, and $711C($F01C) for returning to the shell.

KRUSADER also uses a number of zero page locations, but mostly as an input buffer and during assembly. The only locations that absolutely must be maintained are $F8, $F9, $FE and $FF. These hold the high byte of the default assembled code storage location, the high byte of the local/global symbol table boundary, and the low and high bytes of the source code storage location respectively (see figure 1 for the appropriate values). Additionally, locations $E9, $EA and $EB are needed if the shell is to have access to the global symbol table for various commands (see table 1), and locations $0F, $10, $11, $E0 to $E8, and $F0 to $F6 are used by the mini-monitor. If you wish to use all the features of KRUSADER while developing assembly language programs, it is wise to avoid using these locations in your programs. Replica 1 users need to beware of Apple 1 BASIC.

Figure 1: Memory map for both the high RAM and the ROM versions of KRUSADER. Note that the global symbol table starts at the local/global table boundary and grows downwards, whereas the program source starts at the low address and grows upwards.

As mentioned above, the important values are the high byte of the target memory, the high byte of the local/global symbol table boundary, and the start of the source code storage. For the RAM version, the values are $03, $6D, $00 and $1D, and for the ROM version they are $03, $7C, $00 and $20. After initialisation, these values are stored in zero page locations $F8, $F9, $FE and $FF.
Apple 1 BASIC may overwrite several of these values, so they need to be restored before returning to the KRUSADER shell⁶.

### 8.1.1 Changing Default Memory Locations

For the RAM based version of KRUSADER, the default values can be altered by changing the values at memory locations $7101, $7105 and $7109 in the assembler code. For the ROM based version, the default values can only be altered after the program has been run, and the alternative values must be entered directly into the zero page locations mentioned above, before resuming KRUSADER at location $F01C⁷.

---

⁶The P command is useful for restoring the default values to these zero page memory locations.

⁷The P command will overwrite any values entered in this way.

8.2 Source Tokenisation Scheme

Any entered source is stored in a tokenised form in order to save space. The tokenisation employed is quite simple because there was even less space available in the code to implement it! Three special codes are employed: $01 as a field terminator, $00 as a line terminator, and $02 00 to indicate a blank line. Labels, arguments and comments are stored as entered, with the field terminator marking their end, and mnemonics and directives are encoded as a single byte. Program end is indicated by a line starting with $00. This simple scheme reduces the source code size by a factor of 2 to 3.

Also provided in the KRUSADER distribution is C source code for a program that can convert more general source code formats to the required tokenised form, so that they may be uploaded to the assembler. However, this simple program does not translate different formats for directives or addressing mode specification, so if any such changes are required, they must be done manually. Once you have the converted source data, simply launch KRUSADER as normal, enter the monitor, load the tokenised data into the source storage memory, and resume.
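As a rough illustration, the tokenisation scheme can be sketched in Python as follows. Only the three special codes ($01, $00 and $02 00) come from the description above; the single-byte mnemonic values and the exact field layout are invented assumptions, since the real encodings are internal to KRUSADER.

```python
# Sketch of a KRUSADER-style source tokeniser.
# MNEMONIC_CODES is an invented placeholder table; the assembler's
# actual single-byte encodings differ (and differ again on the 65C02).
MNEMONIC_CODES = {"LDA": 0x80, "JSR": 0x81, "RTS": 0x82}

FIELD_END = 0x01           # $01 terminates labels, arguments and comments
LINE_END = 0x00            # $00 terminates a source line
BLANK_LINE = (0x02, 0x00)  # $02 $00 marks a blank line

def tokenise_line(label, mnemonic, argument):
    """Tokenise one source line: label and argument are stored as
    entered, each ended by $01; the mnemonic becomes a single byte."""
    out = bytearray()
    out.extend(label.encode("ascii"))
    out.append(FIELD_END)
    out.append(MNEMONIC_CODES[mnemonic])
    if argument:
        out.extend(argument.encode("ascii"))
        out.append(FIELD_END)
    out.append(LINE_END)
    return bytes(out)

def tokenise(lines):
    """Tokenise a whole program; None stands for a blank source line.
    A line starting with $00 marks the end of the program."""
    out = bytearray()
    for line in lines:
        if line is None:
            out.extend(BLANK_LINE)
        else:
            out.extend(tokenise_line(*line))
    out.append(LINE_END)
    return bytes(out)
```

Since labels and arguments are kept verbatim and only mnemonics collapse to one byte, the 2–3× size reduction quoted above comes mostly from dropping whitespace and the multi-character mnemonics.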
The source will then be available to KRUSADER as if you had typed it in as normal. However, syntax and formatting errors in the source may not be well handled, since the error handling is designed around the restrictions placed on source input in the usual manner.

Unfortunately the binary source tokenisation is incompatible between the 6502 and 65C02 versions of the assembler, due to differences in the mnemonic indices and directive encoding. The ‘-C’ command line option may be used with the tokeniser program to specify the 65C02 version binary format.

One useful thing to keep in mind is that the N command simply clears the source program from memory by putting a $00 as the very first byte of the source storage location. So if N is typed accidentally, the source can be easily restored. To do this, simply enter the monitor using $, change this initial byte to either $01 if there was no label, or to the first byte of the label, and then resume KRUSADER.

8.3 Moving Memory

If you want to copy a block of memory to another location (for example, if you are backing up source or machine code via the expansion interface), you can make use of one of the memory moving subroutines inside the assembler as follows. First work out non-overlapping source and destination addresses – call these srcL, srcH, destL and destH – and then the size of memory to move – sizeL and sizeH. (You can do this using the M command for source, or by watching the output of the A command for assembled code.) Then drop into the monitor using the $ command, and enter:

```
50: srcL srcH destL destH sizeL sizeH
7304R (F204R)
```

to do the move.

8.4 Testing

KRUSADER comes with a number of sample programs, both in binary and hex dump formats, and these are useful for verifying its correct operation.
The most important sample in this regard is TestSuite.asm, which contains a number of modules for testing the assembler. (For the 65C02 version, there is the pair of source files TestSuiteC.asm, for the pure 6502 tests, and TestSuite65C02.asm, for tests of the 65C02 extensions.) The first test, of course, is that the source assembles without error; then **R MAIN** will cause the program to verify its own assembled code and report any errors.

This test suite covers all the 6502 instructions, using all their addressing modes and various formats for arguments where relevant, as well as each of the available directives. In addition, both forward referencing and the various kinds of expressions supported by **KRUSADER** are tested, both alone and in combination with various addressing modes. A successful run of this suite of tests is a very good indication that **KRUSADER** is functioning properly.

Because of the size of the test suite code, it cannot be run with the RAM version of **KRUSADER** without changing the start address for program source storage, as described in section 8.1.1 above, in order to allow enough room for the source code, the assembled code, and the global symbol table. Specifically, prior to running **KRUSADER**, the value at address $7101 needs to be changed to $14; or, if **KRUSADER** is already running, this value of $14 can be loaded directly into the zero page address $FF.

9 Release History

• **KRUSADER 1.0** (May 2006)
  – First version released.
  – Target CPU is 6502 only.

• **KRUSADER 1.1** (July 2006)
  – Fully tested version – several bug fixes over version 1.0.
  – Comes with code for thorough automated testing of itself.
  – Various enhancements over version 1.0, most notably:
    * Forward references can cross module boundaries.
    * Symbol redefinition is reported as an error.
    * Comment-only lines.
    * Approximately 20% faster assembly.
  – This version was added to the Replica 1 SE ROM.
• **KRUSADER 1.2** (August 2006)
  – Fixes an obscure bug in version 1.1 that prevented assembly of JMP or JSR commands targeting page zero.
  – Improved syntax checking.
  – More compact implementation saves more than 200 bytes over version 1.1.
  – Extra space used to implement a mini-monitor for interactive debugging.
  – Also available is **KRUSADER 65C02**, a version that supports 65C02 commands in both the assembler and disassembler.
  – Still fits inside the 3840 free bytes of the Replica 1 ROM!

• **KRUSADER 1.3** (December 2007)
  – Made source and disassembly listings continuous until a key is pressed.
  – Added ‘!’ command to the shell and mini-monitor.
  – Bundled with the *Krusader Toolkit*\(^8\) – an integrated editor, terminal and emulator for the Replica 1.

\(^8\)http://school.anhb.uwa.edu.au/personalpages/kwessen/apple1/ktk.zip
CODE GENERATION BASED ON CONTROLLED NATURAL LANGUAGE INPUT

Howard Dittmer and Xiaoping Jia

Jarvis College of Computing and Digital Media, DePaul University, Chicago, IL

ABSTRACT

Over time the level of abstraction embodied in programming languages has continued to grow. However, most programming languages still require programmers to conform to rigid constructs. These constructs have been implemented in the name of efficiency for the computer. The continual increase in computing power allows us to consider techniques not so limited. To this end, we have created CABERNET, a Controlled Natural Language (CNL) based approach to program creation. CABERNET allows programmers to use an outline-based syntax. Using heuristics and inference to analyze and determine the programmer's intent, the tool chain can create mobile applications. Using templates, a CABERNET application can be processed to run on multiple run-time environments. Since processing a CABERNET program file results in a native application, performance is maintained. In this paper, we compared sample applications created in Swift, SwiftUI, and CABERNET. The CABERNET implementations were consistently shorter than those produced in the other two languages. In addition, users surveyed consistently found CABERNET easier to understand.

KEYWORDS

Controlled Natural Language, Literate Programming, Programming Language, Computer-aided Software.

1. INTRODUCTION

Computer programming provides tools for improving the productivity of human users. The tools that are embodied in computer programs have improved the productivity of all types of users. However, one area that can still benefit from computer-based automation is program development. While many tools automate specific tasks performed by a programmer, there is a lack of consistent automation directed at the actual process of creating the instructions that embody the program. Computer-aided Software Engineering (CASE) tools have existed since the late 1960s.
These approaches include everything from requirements capture to the evaluation of code for potential errors. Unfortunately, most of these tools have been applied piecemeal to the problem of program development. By starting with code generation based on inference and a controlled natural language, we see an opportunity to address the programmer's core function: actual code generation.

Much has been made of using Artificial Intelligence (AI) to replace human efforts. In the field of program development, there is an expectation that AI could generate computer programs based on the input of requirements. Some recent work has involved the use of machine learning to generate code, an effort that in effect seeks to replace the human programmer. Alternately, we see our efforts not as an attempt to replace the developer, but as an opportunity to increase the developer's productivity. Our efforts follow the path of Intelligence Augmentation proponents such as Doug Engelbart and Terry Winograd [1–3]. To that end, this approach combines inference with developer interaction to create robust solutions to program needs while maximizing developer productivity.

While significant research has preceded ours across the range of computer-assisted program development, there has been little progress on actual code generation. The challenge is to create a methodology that is flexible, intuitive, and natural for the developer. The tool should allow for synonyms, acronyms, abbreviations, and shorthand. It should allow flexibility in the structure of the information provided to it. Most importantly, it must deal with ambiguous and unrecognized content cleanly. Finally, the process must produce unambiguous and consistent results. Our approach meets all these requirements.

Our goals are two-fold. We seek to provide novice developers with a tool and approach that allows them to be productive without the learning curve of existing programming approaches.
At the same time, we seek to provide experienced developers with a methodology that improves their productivity. To this end, we have created a programming methodology named CABERNET. As part of our research, we have developed a tool to generate mobile applications based on this methodology. This paper will describe the methodology and example applications built with it. We believe that our approach provides a development platform that can produce deterministic results while allowing flexibility in the input, with code that is easy to understand and accessible to novices. This combination provides the opportunity for significant improvements in programmer productivity and quality.

2. BACKGROUND

The programming community continually looks for ways to improve the efficacy of those involved in program development. We define improvement in programmer efficacy, or productivity, as a reduction in the quantity of work required to produce a defect-free program or program function. Researchers have long sought to automate various aspects of the software development process. Today there are many tools and techniques available to help developers in their work. Some of these are discussed below. As we will see, few of them directly address code generation.

2.1. Machine Learning

A recent area of activity is computer-assisted programming through machine learning. The approach involves training a tool with libraries or code repositories. Using this resource, the tool then provides code recommendations to the programmer. Examples of this include Natural Language-Guided Programming [4] and GitHub Copilot [5], an OpenAI-based code generation tool. Another approach in this area is genetic programming [6]. Genetic programming is similar in usage to the other two solutions, but is based on hand-coded training cases, making it much more expensive to implement.
These tools attempt to improve programmers' productivity by providing coded solutions to portions of programs as the developer works. This appears to be a benefit to a programmer, particularly when working with a language or in a solution space with which they are not familiar. These tools have been described as an AI pair programmer and as an automatic code completion tool.

Since GitHub Copilot generates code based on examples collected from publicly available code on GitHub, there has been some question about the quality of the results. There is no assurance that the source code is correct or efficient. A recent study [7] of GitHub Copilot generated code gives reason for concern. The study found that GitHub Copilot generated fully correct code for 28.7% of the problems, and completely failed to provide a correct solution for 20.1% of them. If you combine the 51.2% of problems where the solution is partially correct with the totally correct solutions, you get 79.9% of cases where a solution would be at least somewhat helpful to a programmer. But these success rates are not such that a programmer can expect a correct solution without significant review and refinement. Another study of GitHub Copilot generated code [8] found code correctness ranging from 57% on Java examples to 27% on JavaScript examples.

These results indicate that tools like GitHub Copilot will likely be valuable to programmers in the future. However, given the questionable quality of the code source (the GitHub public repositories), there will continue to be a need for close review of the results. Additionally, these are tools to aid programmers in their development efforts, not tools for creating programs.

2.2. Controlled Natural Languages

A natural language programming language has long been a goal in the programming community. In 1983 Biermann, Ballard and Sigmon introduced NLC [9,10], a natural language notation that was interpreted directly.
In 1984 Knuth proposed Literate Programming [11], which combined TeX and Pascal to produce a vocabulary whose primary goal was documenting for humans what the programmer desires. Literate Programming makes efforts to improve the readability of programs. However, it does this by adding English content to program code; the result is a program more verbose than the Pascal upon which it is built. In 2000 Price, Riloff, Zachary, and Harvey introduced NaturalJava [12], a notation that allows the programmer to define a procedure in English, which is converted to Java.

There are also efforts to use natural language techniques to analyze artifacts created in conventional programming languages. Michael Ernst suggested using these techniques to analyze all kinds of artifacts [13], including “…error messages, variable names, procedure documentation, and user questions.” Similarly, there have been efforts to define the user interface by extracting information from natural language requirements documents [14]. Essentially, this approach uses natural language tools and techniques to identify (and possibly satisfy) requirements for the program by analyzing the information that the developer has created to date. In 2001 Overmyer et al. demonstrated the use of linguistic analysis to convert requirements documents into models of the subject requirements [15].

In another approach [16], Landhaeusser and Hug attempt to use full English to derive program logic. English tends to be verbose, and a programming language based on the entire English language requires significant content. Our approach utilizes a controlled version of English, which results in a simplified syntax. This simplified syntax allows the program to be created with a concise source document.

Much of human interaction depends upon shared experience and idioms, which allow humans to provide incomplete information and enable the listener to fill in the rest.
Without these implied nuances, human communications would be much more verbose. The challenge for using a controlled natural language for defining a computer program is that we must replicate, at least in part, these techniques which humans use to share information. 2.3. Requirements Capture Requirements capture is an area where Controlled Natural Language approaches have previously been used [17]. For some years, the agile development community has sought to develop better ways to capture user requirements. Test-Driven Development (TDD) [18] was initially associated with agile development in Kent Beck’s book on eXtreme Programming [19] and then expanded upon in his book on the subject [20]. This methodology seeks to direct the programming effort towards requirements as embodied in a series of tests. These tests are generated by the development team. More recently, parts of the agile community have embraced Behavior-driven Development (BDD) as a starting point. Behavior-driven Development [21, 22] describes the user’s requirements as a series of behaviors that can be converted into tests. These tests are then used as those envisioned in Test-driven Development. These behaviors are described in a natural language form. As such, BDD acts as a front-end for TDD. Cucumber [23, 24] and jBehave [25] are popular tools that allow developers to capture their requirements in an end-user-friendly format and produce a test suite for TDD applications. While these methodologies and associated tools enable the user to describe the requirements in a natural language format, they still require the program to be created in a traditional programming language. 2.4. Dynamic Programming Languages In recent years there has been significant growth in the use of dynamic programming languages for mainstream development. 
While Java and C, with their various derivatives, continue to be widely used, Python (ranked number one in the TIOBE index), JavaScript (and its derivative, TypeScript), PHP, Ruby, and Perl have moved into the top twenty most popular languages in the TIOBE Index [26] and the StackOverflow annual programmer survey [27]. Dynamic programming languages have gained a following because they have helped improve programmers' productivity. The combination of dynamic typing and concise syntax results in fewer lines of code required to achieve the desired result. These advantages have led to claims of productivity gains of 5 to 10 times [28]. With the advent of robust, dynamically typed languages, developers have begun using these tools for applications previously thought to be the domain of traditional statically typed languages.

These languages have a syntax that is easier for a programmer to understand, even when the code was written by someone else. In general, the syntax used by these languages is closer to that of a natural language. They do still require conformance to a strict set of rules. However, they have limited the requirements for computer-driven structures like variable declarations, which add to a traditional programming language's verbosity. While the use of these languages does not involve automation, they show that cleaner, simpler syntax offers opportunities for improved programmer productivity.

2.5. Static Analysis

Static analysis tools come in a range of capabilities. The simplest of these tools are commonly referred to as lint tools [29]. These tools review the program code and identify violations of syntax rules provided for each target programming language. Violations can include punctuation errors, misspelled reserved words, variables declared but never used, and other errors that can be identified by reviewing the source code.
In addition to stylistic checks, traditionally the province of linters, these tools have taken more ambitious approaches, such as using bug patterns. Two of the most popular and successful products in this area are FindBugs and PMD [30]. They have proved very useful in finding bugs in already-written code. They help improve code quality, but they do not help create the code.

2.6. Integrated Development Environments

The Integrated Development Environment (IDE) is the most used tool among developers. Among the many capabilities a modern IDE provides is syntax highlighting [31], which involves highlighting various constructs and keywords with colors and formatting to identify their function and usage. These tools can aid the programmer by identifying errors in code when the color coding of the source code does not match their intent. These features also include code completion [32], automatically completing various words and constructs within the program based on the context and previously entered code. Modern IDEs also provide for the integration of tools such as linters and other static analysis tools. While a modern IDE is a valuable productivity enhancer, it still requires that the programmer code the program in the target programming language's particular syntax.

2.7. Declarative Syntax

Imperative programming [33] is the style utilized by most of the popular programming languages. These languages require the programmer to describe how to construct the various objects that make up a program. To build a user interface, the program would include the tedious steps required to draw each object and then link it to the program logic. This process results in code that is voluminous and difficult to read. It can also obscure the nature of what the programmer is trying to achieve. Listing 1.1 contains the Swift code involved in creating a simple button that invokes a method called processEachPayThis. This example includes eleven lines of code.
For all but the most knowledgeable, this code is hard to read and obscures the nature of the programmer's goal.

```swift
let button2 = UIButton(type: .system)
button2.setTitle("Calculate", for: .normal)
button2.frame = CGRect(x: self.view.bounds.maxX * 0.0,
                       y: 35 * 3,
                       width: self.view.bounds.maxX * 0.5,
                       height: 30)
button2.titleLabel?.textAlignment = .left
button2.addTarget(self,
                  action: #selector(processEachPayThis),
                  for: .touchDown)
self.view.addSubview(button2)
```

Listing 1.1. Swift code for Simple Button

In 2019 Apple introduced SwiftUI [34], which utilizes a declarative syntax for describing the program's user interface. Declarative syntax [35] describes the results the programmer wants to achieve, but not how to achieve those results. Listing 1.2 includes the SwiftUI code required to create the same button as in the previous example, but does it in seven lines of code. This code is easier to read, and it is easier to understand what the programmer is trying to achieve. While this code is considerably simpler than the Swift code, it is still rigid in its syntax and contains numerous special words/commands. It requires the programmer to conform to a strict set of rules. As we describe CABERNET in this paper, we will see that it can describe this same button in two lines of code without these strict rules.

3. OUR APPROACH

The goal of our work is to provide a highly readable, flexible, extensible, and easy-to-learn development methodology based on a CNL. To that end, we have developed CABERNET (Code generation BasEd on contrRolled Natural language input), an approach that allows a programmer to define a computer program using a Controlled Natural Language (CNL). Figure 1 lists the key advantages of the CABERNET development approach.

3.1. Basic Principles

The simplicity and directness of the approach are possible because many aspects of the design can be inferred from the context.
A programmer developing an application for a mobile device seeks to conform to a set of user interface guidelines. These guidelines become one of the many contextual influences on the application design. As previously noted, one significant advantage enjoyed by humans in their use of natural language is the shared knowledge that allows for portions of the communications to be implied. To overcome this challenge in human-computer communications, we have utilized three techniques.

First, we have used a broad set of defaults, applied when the developer omits the needed information from their descriptions. Second, we use inference to determine the developer's intent from the information provided (both within the user interface description and other artifacts that make up the program). Third, our approach allows machine learning to adjust the defaults based on developer choices during the development process. When information is missing, or the information provided is ambiguous, we offer the developer options from which to choose a solution. Based on these choices and the default solutions that the developer accepts or declines, we build and reinforce our recommended solutions. The characteristics of the proposed Controlled Natural Language model are listed in Figure 2. The result is that CABERNET programs are consistently more concise than other similar approaches such as Literate Programming.

Listing 1.2. SwiftUI code for Simple Button

```swift
HStack {
    Button(action: { self.processEachPayThis() }) {
        Text("Calculate")
    }
}
```

- Increased programmer efficiency
- Flexible and straightforward syntax
- Address needs of all programmers
- Natural language (English-like, controlled natural language)
- More flexible
- More forgiving
- Inference fills in gaps

Fig. 1. Key Characteristics of CABERNET

3.2. Flexible Nomenclature

One of the challenges of dealing with a natural language is the variety of words or phrases used for a single object or concept.
To deal with this, we make use of a thesaurus. We identify a group of words or phrases that can be used interchangeably. Table 1 includes some examples of these lists of synonyms. These lists are just a small sample of possible synonyms that we should consider. Going one step further, we consider what may be implied by a word or phrase. For example, the last item in the list might include the word “to” as it could imply “go to.” These lists of synonyms are created in several ways. First, they are generated from our knowledge of the domain and the terminology used by programmers. Second, we can expand them using online resources like thesaurus.com, thesaurus.Babylon-software.com, etc. Third, we can use search to find terms that are common in the subject area. Finally, we can learn from the developer as they provide feedback when the CABERNET processor cannot interpret the term. 3.3. Declarative with a Difference We have seen the improvement in readability and productivity that is offered by declarative programming approaches like SwiftUI. CABERNET takes that concept further; it offers declarative with a difference. CABERNET combines a declarative style with a natural language-based syntax. It then utilizes inference to discern the programmer’s intent. We couple that with a robust set of defaults and templates to convert the program into a native executable. 
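To make the synonym mechanism concrete, the lists above can be stored as a reverse index that maps every synonym to one canonical concept, so lookup is a single dictionary access. The following is a minimal sketch under that assumption; the group names, entries, and function names are illustrative, not CABERNET’s actual implementation.

```python
# Illustrative sketch of a thesaurus lookup: each synonym group (as in
# Table 1) maps interchangeable words to one canonical concept name.
# The groups and names below are assumptions for this example.

SYNONYM_GROUPS = {
    "binary input widget": ["option", "switch", "checkbox"],
    "application":         ["app", "application", "program"],
    "application screen":  ["window", "screen", "scene"],
    "switch state":        ["true", "selected", "on"],
    "load new screen":     ["go", "go to", "load", "to"],
}

# Build the reverse index once, so every lookup is O(1).
CANONICAL = {
    word: concept
    for concept, words in SYNONYM_GROUPS.items()
    for word in words
}

def canonicalize(term: str) -> str:
    """Return the canonical concept for a term, or the term itself if unknown."""
    return CANONICAL.get(term.strip().lower(), term)

print(canonicalize("Scene"))     # application screen
print(canonicalize("checkbox"))  # binary input widget
```

Unknown terms fall through unchanged, which is where the tool would prompt the developer for feedback, as described above.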
--- - Input language is forgiving - Outline-based structure - Flexible - Allow use of synonyms, acronyms, and standard abbreviations - Allow flexibility in ordering and location of descriptions - Terse - Minimum input required - In most cases, the input is keyword-based and does not require English sentences - Each bullet has limited context - Utilize popular Markdown [36] lightweight markup language - Model processing - Tool processes natural language model - Outputs canonical model - Offers alternative interpretations - Identifies ambiguous elements - Highlights unrecognized and unused elements - Canonical model - Unambiguous - Consistent with the natural language model and with itself - Can target alternate platforms (iOS, Android, etc.) - Tools - Predefined rules - Learn additional rules from experience - Learn from documentation of target framework --- Fig. 2. Characteristics of CABERNET CNL Figure 3 depicts the process of converting a CABERNET source into an executable program. The process starts by tokenizing the CABERNET source based on the structure of the Markdown outline. The tokenized version is then inspected for terms that can be matched with synonyms in the thesaurus. Where there are tokens that seem to be missing, they are added by inference. The resulting tokens or groups of tokens are identified as actions, symbols, formatting, etc. based on their context. The accuracy of that identification is then tested based on other objects in the program. Where appropriate, outline levels are then simplified using Natural Language tools. The CABERNET processor then generates code for the target platform by applying the appropriate templates. Finally, the program is compiled or interpreted by the target platform development tool. Table 1. CABERNET Synonym Examples. 
<table>
<thead>
<tr>
<th>Widget Type</th>
<th>Synonyms</th>
</tr>
</thead>
<tbody>
<tr>
<td>Binary input widget</td>
<td>“option,” “switch,” “checkbox”</td>
</tr>
<tr>
<td>Application</td>
<td>“App,” “Application,” “Program”</td>
</tr>
<tr>
<td>Application Screen</td>
<td>“Window,” “Screen,” “Scene”</td>
</tr>
<tr>
<td>Switch State</td>
<td>“true,” “selected,” “on”</td>
</tr>
<tr>
<td>Load new screen</td>
<td>“go,” “go to,” “load,” “to”</td>
</tr>
</tbody>
</table>

4. NOTATION

4.1. Markdown

The notation for the Controlled Natural Language tool is based on the Markdown [37, 38] lightweight markup language. Markdown was created in 2004 by John Gruber [39]. An additional benefit of Markdown as the underlying format of CABERNET is that the source code can be processed using a standard Markdown tool. The result is an attractively formatted file that displays the program structure without the Markdown tags and formatting characters.

4.2. Outline Structure

A CABERNET program is structured as an outline, including only the information necessary to distinguish the program from the defaults. Some high-level outline properties that define the CABERNET syntax are identified in Figure 4. The outline structure captures the hierarchical structure of the program. Each succeeding indentation of the outline represents another embedded structure in the resulting program. The CNL code of an example application is found in Listing 1.3. Line 1 of this code identifies the basic application. Lines 2 and 17 are one level indented from the application and start two different screens. Lines such as 4, 5, and 7 that begin with “###” are one additional level indented and define the objects on the subject screen. Outline entries that start with a ‘*’ describe the content of the various objects. Entries such as lines 6 and 21 that start with a verb describe actions to be taken when clicking the object. Entries such as line 22 that begin with a characteristic describe the format of that field.
Entries like the one beginning on line 30 define a calculation that is used to populate the field. Lines like 8 and 12, which do not fall into other categories, provide a default entry for the field.

Listing 1.3. CABERNET Sample Code.

5. Thesaurus

The use of a CNL means that multiple names can describe an object in the user interface. For example, in Listing 1.3, line 2, we refer to one screen of the application as a “Scene.” On line 17, we call the second screen a “Screen.” Additionally, these objects can be called different things based on the target platform involved. As a result, the subject tool must create alignment between what the CNL code calls an object and what the target platform expects. To allow CABERNET to accommodate this varied nomenclature, we have implemented the concept of a thesaurus. The thesaurus captures a range of words that can be treated as synonyms. Examples of the thesaurus word lists are shown in Table 1.

6. Natural Language Processing

As noted, we have limited the description of the application content to the ‘*’ outline levels. Each of these outline items can contain brief entries that describe the content or the material’s format. These outline items are also where the controlled natural language entries exist for describing the program function and content. Each item is very limited in scope and context and is therefore relatively easy to interpret. For example, line 30 in Listing 1.3 describes the calculation of the value displayed in the object.

<table>
<thead>
<tr>
<th>Calculate Lot Width times Lot Depth by 43560</th>
</tr>
</thead>
</table>

Calculated items like this are identified by the presence of mathematical operator words such as multiply, divided by, and plus, as well as numbers and mathematical symbols.
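This detection step can be sketched as a simple scan for operator words, digits, and mathematical symbols. An item that matches is only a candidate, since it must still contain recognizable object names before it can be translated into a formula. The keyword list and function names below are assumptions for illustration, not the actual CABERNET code.

```python
import re

# Illustrative sketch: flag outline items that look like calculations
# because they contain operator words, digits, or math symbols.
# The keyword set is an assumption, not CABERNET's real vocabulary.

OPERATOR_WORDS = {"multiply", "times", "divide", "divided", "plus", "minus", "sum"}
MATH_SYMBOLS = re.compile(r"[0-9+\-*/=]")

def looks_like_calculation(item: str) -> bool:
    """Heuristically decide whether an outline item describes a calculation."""
    words = set(item.lower().split())
    return bool(words & OPERATOR_WORDS) or bool(MATH_SYMBOLS.search(item))

print(looks_like_calculation("Divide Lot Width times Lot Depth by 43560"))  # True
print(looks_like_calculation("Owner Name"))                                 # False
```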
<table>
<thead>
<tr>
<th>Divide Lot Width times Lot Depth by 43560</th>
</tr>
</thead>
</table>

Once an item is identified as potentially being a mathematical calculation, it is further evaluated to see if all the information needed to evaluate the item is present. First, the items are parsed to identify the names of objects in the code that contain the inputs to the calculation. In this example, these include Lot Width and Lot Depth. The remaining text is then examined for adverbs such as quickly, precisely, and carefully and articles such as the, a, and an, which do not add to our understanding of the calculation being performed. At this point, we should have all the information we need to evaluate the calculation. The biggest challenge in evaluating the remaining text is to understand how to group the calculation. Mathematical expressions are usually evaluated from left to right, adjusted by precedence rules and by grouping defined by parentheses. Our tool uses all of these, but it must also consider grouping defined by the natural language of the statement. In its simplest form, this could include “a times b,” “a * b,” or “multiply a times b.” All three of these statements are equivalent and do not require any special consideration of the grouping of the items. A more complicated example could involve “(a + b + c) / d,” “divide a plus b plus c by d,” “divide the sum of a and b and c by d,” or “(a plus b + c) divided by d”. This last example will have a different result than “a plus b plus c divided by d”, which would be the same as “a + b + (c / d)”. By considering the grouping provided by English statements of the forms “Divide...expression...by...expression”, “Multiply...expression...times...expression” or “Sum of...expressions,” we can properly evaluate the calculations described in the natural language of these expressions. In this example, we need to determine to which values the “divide” at the beginning of the line applies.
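The verb-form grouping rule can be sketched as follows: when an item starts with “divide” and contains “by,” everything between them becomes the numerator, while the plain infix “divided by” is left to ordinary left-to-right evaluation with precedence. This is a hypothetical illustration; the object names, the values (132 ft by 330 ft is exactly one acre), and the helper names are assumptions, not the actual CABERNET processor.

```python
# Hypothetical sketch of the verb-form grouping discussed above. Object
# names are substituted first; "Divide <num> by <den>" then groups the
# whole numerator, unlike the infix "divided by", which binds only its
# immediate neighbours under normal precedence.

WORD_OPS = {"plus": "+", "minus": "-", "times": "*", "divided by": "/"}

def replace_ops(text: str) -> str:
    # Turn operator words into symbols; word order follows insertion order.
    for word, op in WORD_OPS.items():
        text = text.replace(f" {word} ", f" {op} ")
    return text

def to_expression(item: str, objects: dict) -> str:
    text = item.lower()
    # Substitute known object names with their values (longest names first).
    for name in sorted(objects, key=len, reverse=True):
        text = text.replace(name.lower(), str(objects[name]))
    # Verb form: "divide <numerator> by <denominator>" groups the numerator.
    if text.startswith("divide ") and " by " in text:
        num, den = text[len("divide "):].rsplit(" by ", 1)
        return f"({replace_ops(num)}) / ({replace_ops(den)})"
    return replace_ops(text)

objects = {"Lot Width": 132, "Lot Depth": 330}  # 132 ft x 330 ft = 1 acre
expr = to_expression("Divide Lot Width times Lot Depth by 43560", objects)
print(expr)        # (132 * 330) / (43560)
print(eval(expr))  # 1.0
```

Under these assumptions, the verb form yields “(a + b + c) / d”-style grouping, while the plain infix string falls through to `replace_ops` and keeps standard precedence, matching the two readings contrasted above.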
If the programmer had entered Width rather than Lot Width, or Depth rather than Lot Depth, we would have failed to complete the transformation. However, this is an example of a case where the transformation was close, and we would have prompted the programmer for guidance by suggesting a possible match. In some cases, an item will include mathematical symbols or appear to describe a calculation, but CABERNET cannot convert it to a mathematical expression. Lines 12 and 14 are examples of this. These lines contain mathematical symbols, but the other text does not contain object names, so we cannot translate them into formulas. A mathematical calculation is but one type of item described in a CABERNET outline item. Using the same approach, CABERNET can evaluate a wide range of program constructs. The steps in the process are shown in Figure 5. This approach can be used for a wide range of programming constructs. By combining items such as database queries, logic statements, mathematical expressions, graphic generation, and file manipulation, we can generate a working program. Figure 6 represents the output of our example application. In Figure 6(a), we have the entry screen for a real estate application. The App, Scene, and Screen bullets are for organization and are used to separate the application by screens. “Calculate” and “Acreage Calculator” are actions and become buttons. The descriptions of the actions taken for each of the tappable objects are listed as sub-bullets. Next come multiple blank fields for the owner’s name, city, state, and zip code. Finally, there is the options field represented by switch objects. Depending upon the platform targeted, these could alternately be checkboxes. In this case, the option field is selected by default. Likewise, they could be called switches in the CNL instead of being called options.
These alternate names for this object are but one example of how an object can be called multiple things in the CNL or could have multiple objects implemented based on the given CNL. As described above, these choices are made or prioritized based on the developer or target platform preferences. Figure 6(b) is the second screen of the application and includes an acreage calculator. This screen contains two blank fields to be filled with the lot width and lot depth. Finally, there is a calculated field representing the size of the lot in acres. As previously described, the text in lines 29 through 30 defines this final field. This calculation is triggered by tapping the “Calculate” button described in lines 20 through 22.

7. EVALUATION

7.1. Advantages and Limitations

Much of the approach’s power comes from the flexibility of nomenclature. This flexibility comes from the use of a thesaurus, which allows for alternate terms to describe objects and properties within the application. Much of this information is generated based on general domain knowledge. The approach also allows for expanding and customizing this information by applying search techniques to the target development platform’s documentation/APIs. Using search techniques to index this platform documentation, we can expand and improve the dictionaries and thesaurus used to interpret the CNL input.

(a) Real Estate App (b) Acreage Calculator

Fig. 6. App example screens.

Among other advantages, our approach is well suited to integrate with agile processes. The CNL source code is self-documenting since it is written in a human-readable/understandable form. This human-readable format makes it easy to understand and refactor as needed. The result is a dual-purpose artifact (documentation and source code). The implementation is in the form of a domain-specific programming language. Our CNL is not intended to be a general-purpose language like Attempto English.
As a result, the proposed syntax is concise and lends itself to the proposed application of inference and machine learning. While the example provided in this research involves mobile development, the approach is well suited for a broad range of programming applications.

7.2. Code Size

The key process metrics that we seek to address with CABERNET include code development speed, clarity, and size. As can be seen from the examples, CABERNET programs are very concise. Because they rely heavily on inference, the alignment between how the programmer and the computer understand the program is strong. To evaluate CABERNET, we have compared its code with the alternate ways of implementing some iPhone applications. The program in Listing 1.3 is one of the examples used in this comparison. While it took 30 lines of code to implement this application in CABERNET, the same program took 211 and 96 lines of code in Swift and SwiftUI, respectively. Note that these line counts do not include lines that contain only brackets and spaces. Table 2 shows a comparison of the code required by each of the languages to create this program and the other two examples. The second example involves adding highlighting to one of the fields based on the content of the field. This adds 10 lines of code to the SwiftUI program but only one line of code to the CABERNET program. As we can see, Swift requires over seven times as many lines of code as does CABERNET to implement these screens. While the SwiftUI implementation is shorter than the Swift implementation, it still requires more than 3 times as many lines of code as does CABERNET.
Table 2. Lines of Code Comparison.

<table>
<thead>
<tr>
<th>Example</th>
<th>CABERNET</th>
<th>Swift</th>
<th>SwiftUI</th>
</tr>
</thead>
<tbody>
<tr>
<td>Real Estate App</td>
<td>30</td>
<td>211</td>
<td>96</td>
</tr>
<tr>
<td>Comparison</td>
<td>1X</td>
<td>7.3X</td>
<td>3.3X</td>
</tr>
<tr>
<td>Real Estate App Revised</td>
<td>31</td>
<td></td>
<td>106</td>
</tr>
<tr>
<td>Tip Calculator</td>
<td>14</td>
<td>104</td>
<td>57</td>
</tr>
<tr>
<td>Comparison</td>
<td>1X</td>
<td>7.4X</td>
<td>4.1X</td>
</tr>
</tbody>
</table>

Much of this Swift and SwiftUI code implements things that CABERNET handles as default values and constructs. One clear example of this is the actual calculation of the acreage value. In the CABERNET version, the calculation is defined in line 30. In addition, line 21 describes the action to be taken when we tap the subject button. To perform the same calculation in Swift, we need to include 3 lines to declare the variables involved, 9 lines to create the button, and 6 lines to perform the actual calculation. That is a total of 18 lines of code. For the SwiftUI implementation, we have 3 lines for declaring the variables, 4 lines to define the button, and 4 lines for the method to perform the calculation. This is a total of 11 lines of code. Again, these line counts do not include any lines that contain only brackets. The Swift and SwiftUI code must check for common errors like dividing by zero and blank entry fields in addition to the steps required to describe the screen features. CABERNET performs these functions by default, thus eliminating the need to check for these things. If there were a reason to allow a program to divide by zero or perform a calculation using an empty field, then CABERNET would expect the programmer to say so and describe how it should be handled.
In the absence of such descriptions, CABERNET assumes that these are errors and handles them appropriately. The result of these aspects of CABERNET is that the source document includes only the basic description of the program content. The implementation, error handling, and other processes normally included in a program’s source file are all added by the templates and processing done by the CABERNET tool. The result is that the CABERNET file is brief and easy to understand. Across these examples, Swift required about 7.3 times as many lines of code as CABERNET. In the same examples, SwiftUI required between 3.3 and 4.1 times as many lines of code as CABERNET. This is a significant difference that results in more opportunities for typos and errors to be introduced. At this point, we should note that CABERNET is more forgiving with the input provided. As previously noted, CABERNET allows a significant range of word selection in its programs. On the other hand, Swift and SwiftUI require strict adherence to the program structure. The combination of longer programs and strict rules makes Swift and SwiftUI more vulnerable to errors.

8. Easy to Understand

Lines of code are but one means of measuring the effort required to create a program. Additional measurements involve how difficult it is to craft the code, how readable the code is, and how well the program processing the code deals with alternative inputs. These all contribute to how easy it is for a programmer to learn the language. The measurement of these aspects of the language is more subjective than the simple counting of lines of code. Nevertheless, they are all important to understanding how successful CABERNET is / can be in improving programmer productivity.

Fig. 7. Ease of Understanding

To understand the relative ease of understanding a program written in CABERNET vs. the same program written in Swift or SwiftUI, we surveyed 47 people.
These survey participants were solicited from undergraduate and graduate students at two major university computer science programs. When asked about their level of programming ability, 31 participants self-identified as a “Student”, 7 as a “Developer”, and 1 as a “Novice”. When asked about their years of programming experience, 33 reported 3 or more years of experience and 6 reported 2 or fewer. The survey participants were provided sample programs implemented in CABERNET, Swift, and SwiftUI. Of the 35 questions included in the survey, there are 8 that ask the participants to evaluate how easy to understand the various samples were. The results of these questions are included in Figure 7. The figure graphs the percentage of responses at a given rating on a scale of zero to ten, with ten being the easiest to understand. 80% of the ratings for CABERNET were a 7 or better. On the same basis, 46% and 52% of the ratings for the Swift and SwiftUI examples, respectively, were a 7 or better.

Table 3. Mean Ease of Understanding Scores

<table>
<thead>
<tr>
<th>Language</th>
<th>Overall</th>
<th>Developers</th>
<th>Students</th>
</tr>
</thead>
<tbody>
<tr>
<td>CABERNET</td>
<td>7.75</td>
<td>8.29</td>
<td>7.62</td>
</tr>
<tr>
<td>Swift</td>
<td>5.53</td>
<td>4.64</td>
<td>5.69</td>
</tr>
<tr>
<td>SwiftUI</td>
<td>6.26</td>
<td>5.81</td>
<td>6.35</td>
</tr>
</tbody>
</table>

The mean score for CABERNET on these questions was 7.75. The mean scores for Swift and SwiftUI were 5.53 and 6.26, respectively. The chart in Figure 8 shows the actual responses for CABERNET and SwiftUI. From this, you can see that while the SwiftUI results form a normal distribution around its mean, the CABERNET results have a more single-sided distribution. 48% of the responses for CABERNET are a rating of either a 9 or 10.

Fig. 8. CABERNET Responses vs SwiftUI

If we consider the groups that self-identify as Students and Developers independently, we get comparable results.
The developers gave the CABERNET examples a mean score of 8.29 on the Easy-to-Understand questions. The students gave CABERNET a mean score of 7.62 on these same questions. On the other hand, the developers gave Swift and SwiftUI mean scores of 4.64 and 5.81, respectively. The students gave Swift and SwiftUI mean scores of 5.69 and 6.35, respectively. All these values can be seen in Table 3.

9. RESPONDENT FEEDBACK

In addition to the quantitative responses based on the example applications, survey respondents were offered the opportunity to provide comments about the various programming options. In total, there were 52 comments submitted. Some of these responses concerned the mechanics of the survey itself rather than the tools, or were general in nature. Twenty-one were simply positive comments about CABERNET. Seventeen of the comments expressed concern about the granularity of control provided by CABERNET. A couple of representative examples of these types of comments are as follows.

“It is much more readable in terms of figuring out what it is doing and judging what the result will look like. However, it seems harder if I wanted to make something specific, because I wouldn’t know where to start with getting the right syntax.”

“I think this would be a good tool for quick form or mock-up creation, but there are many things I wonder about it. As for these examples - can I change the size of the entry boxes? Can I move fields on the screen? How would function look? I am intrigued but scared since so much of the “brains” dictating things is hidden.”

“Cabernet is good for a quick solution. The other two are good if you want more specific options and to understand the development tools.”

These respondents were concerned that they would not be able to achieve precise control over the end application. In a couple of cases, they equated this with the approach being more suitable for end-user programming.
While it is possible to write requirements that precisely control the resulting application, the respondents seemed to want more assurance that they would know how CABERNET will interpret their input.

10. RELATED WORK

10.1. Programmer Productivity

The underlying goal of our research is to improve the productivity of program developers. Of course, the first challenge is to define what we mean by productivity. We view productivity as the quantity of defect-free functionality a developer can produce per unit of time or effort. How do we evaluate that productivity? It is a common belief that productivity varies between program developers by as much as 10:1. In his research, William Nichols [40] showed that the relationship between individual programmers and productivity was weak. He found a high degree of variability in programmer productivity across a range of tasks. In comparing developers’ performance on a range of tasks, only half of the variation could be attributed to the differences between programmers. Lutz Prechelt [41] highlights the wide variety of things that affect the program development process’s overall productivity. The choice of programming language [42] is a significant element in determining the productivity of the overall process. These evaluations involve comparing traditional languages like C and Java vs. scripting languages like Python and Perl. This work found that the scripting languages resulted in shorter programs and shorter development efforts. At the same time, the run-time performance did not suffer because of using scripting languages.

10.2. Next Paradigm Programming Languages

Yannis Smaragdakis [43] considered how next-generation programming languages will change to support significant productivity improvements. This research is based on the author’s experiences developing using Datalog (a declarative language based on Prolog).
His conclusions are heavily influenced by the belief that future languages will depend upon the compiler (or interpreter) to perform the heavy lifting behind the scenes. The programmer will specify their desired result in the programming language, and the tool (compiler or interpreter) will determine the methods required to achieve those goals. This conclusion aligns with our approach for CABERNET.

10.3. Natural Programming Languages

To date, none of the efforts to use natural language as a programming language have been accepted for mainstream programming applications. Good and Howland [44] explored the use of natural languages for teaching programming or computational thinking. This research involves a study of the role-playing game toolkit for Neverwinter Nights 2. The program as shipped allows the creation of scenarios using NWScript, an Electron-toolset-based programming tool. The researchers studied users programming first with NWScript and then using natural-language-based input. They evaluated the ability of non-programmers to script events using NWScript and natural language. They found that none of the users were able to successfully script their events with the NWScript tool. When using unconstrained natural language, they found significant confusion about how to formulate input. After several iterations of more constrained input methods, their final solution involved a hybrid graphical-textual programming tool. Our approach also recognizes that unconstrained natural language can be confusing for users. However, rather than taking the hybrid approach proposed by Good and Howland, we chose to implement a more focused application of natural language, which allows us to make inferences from the context of the individual natural language phrases. Gao [45] presents a survey of Controlled Natural Languages (CNL) used for machine-oriented applications.
This work includes consideration of Attempto Controlled English (ACE), Processable English (PENG), and Computer-processable English (CPL). This research shows that these CNLs constrain their inputs heavily and impose a rigid set of rules. These limitations are necessary to enable direct translation to machine-processable logic. The result is a much less natural syntax that imposes rules not dissimilar to those of traditional programming languages. Wang, Ginn, et al. [46] have applied the concept of a language that learns from the programmer to natural language input. In this way, the program compiler/interpreter continually learns from the programmer, to the point where most of the programs in their research were based on this user-defined notation. This work demonstrates how natural language programming can be effective when it grows based on user input. In CABERNET, we start with a natural language interpreter and allow that interpreter to grow and improve based on programmer input, much like the Dependency-based Action Language (DAL) in Wang, Ginn et al.’s research.

10.4. Code Snippets

One of the most common processes used by developers is searching online development resources to identify approaches to solving specific problems. StackOverflow is a site frequented by many programmers seeking answers to their programming questions. Yan et al. [47] created CosBench, which takes natural language input and searches for code snippets that are relevant to the search criteria. They compared their results with six other tools attempting to do the same thing. In a survey article, Allamanis et al. [48] identified a substantial body of work taking this same approach. These are interesting tools but are not programming methodologies in themselves.

11. Conclusion

This work has shown that there is potential for a CNL-based programming tool. CABERNET has demonstrated a programming approach that is easy for both developers and students to understand.
The tool is significantly more concise than the present common programming techniques for mobile device development. It also incorporates common error-checking techniques without burdening the developer with their implementation.

REFERENCES

S. Leonard, The text/markdown Media Type, 2016.
C. Tomer, Lightweight Markup Languages, 2015.
T. Gao, Controlled Natural Languages and Default Reasoning, 2019.

**AUTHORS**

**Howard Dittmer** received his MS in Software Engineering from DePaul University and a BS in Mechanical Engineering from Virginia Tech. Currently, he is pursuing his PhD in Computer Science at DePaul University. His research interests include Software Engineering.

**Xiaoping Jia** received his undergraduate and Master’s degrees in Computer Science from Fudan University, Shanghai, China. He received his Ph.D. in Computer Science from Northwestern University. He is currently the Director of the Institute for Software Engineering at DePaul University. His research interests include Software Engineering, Systems Development, and Programming Languages.
Trade-Offs in Continuous Integration: Assurance, Security, and Flexibility Michael Hilton Oregon State University, USA mihilton@cmu.edu Nicholas Nelson Oregon State University, USA nelsonni@oregonstate.edu Darko Marinov University of Illinois, USA marinov@illinois.edu Timothy Tunnell University of Illinois, USA tunnell2@illinois.edu Danny Dig Oregon State University, USA digd@oregonstate.edu ABSTRACT Continuous integration (CI) systems automate the compilation, building, and testing of software. Despite CI being a widely used activity in software engineering, we do not know what motivates developers to use CI, and what barriers and unmet needs they face. Without such knowledge, developers make easily avoidable errors, tool builders invest in the wrong direction, and researchers miss opportunities for improving the practice of CI. We present a qualitative study of the barriers and needs developers face when using CI. We conduct semi-structured interviews with developers from different industries and development scales. We triangulate our findings by running two surveys. We find that developers face trade-offs between speed and certainty (Assurance), between better access and information security (Security), and between more configuration options and greater ease of use (Flexibility). We present implications of these trade-offs for developers, tool builders, and researchers. CCS CONCEPTS • Software and its engineering → Agile software development; Software testing and debugging; KEYWORDS Continuous Integration, Automated Testing 1 INTRODUCTION Continuous integration (CI) systems automate the compilation, building, and testing of software. CI usage is widespread throughout the software development industry. For example, the “State of Agile” industry survey [51], with 3,880 participants, found half of the respondents use CI. 
The “State of DevOps” report [33], a survey of over 4,600 technical professionals from around the world, finds CI to be an indicator of “high performing IT organizations”. We previously reported [19] that 40% of the 34,000 most popular open-source projects on GitHub use CI, and the most popular projects are more likely to use CI (70% of the top 500 projects). Despite the widespread adoption of CI, there are still many unanswered questions about CI. In one study, Vasilescu et al. [50] show that CI correlates with positive quality outcomes. In our previous work [19], we examine the usage of CI among open-source projects on GitHub, and show that projects that use CI release more frequently than projects that do not. However, these studies do not present what barriers and needs developers face when using CI, or what trade-offs developers must make when using CI. To fill in the gaps in knowledge about developers’ use of CI, we ask the following questions: What needs do developers have that are unmet by their current CI system(s)? What problems have developers experienced when configuring and using CI system(s)? How do developers feel about using CI? Without answers to these questions, developers can potentially find CI more obstructive than helpful, tool builders can implement unneeded features, and researchers may not be aware of areas of CI usage that require further examination and solutions that can further empower practitioners. To answer these questions, we employ complementary established research methodologies. Our primary methodology is interviews with 16 software developers from 14 different companies of all sizes. To triangulate [15] our findings, we deploy two surveys. The Focused Survey samples 51 developers at Pivotal (pivotal.io). The Broad Survey samples 523 participants, of which 95% are from industry, and 70% have seven or more years of software development experience.
The interviews provide the content for the surveys, and the Focused Survey provides depth, while the Broad Survey provides breadth. Analyzing all this data, we answer four research questions: RQ1: What barriers do developers face when using CI? (see §4.1) RQ2: What unmet needs do developers have with CI tools? (see §4.2) RQ3: Why do developers use CI? (see §4.3) RQ4: What benefits do developers experience using CI? (see §4.4) Based on our findings, we identify three trade-offs developers face when using CI. Other researchers [32, 34, 53] have identified similar trade-offs in different domains. We name these trade-offs Assurance, Security, and Flexibility. Assurance describes the trade-off between increasing the added value that extra testing provides, and the extra cost of performing that testing. Rothermel et al. [34] identify this trade-off as a motivation for test prioritization. Security describes the trade-off between increased security measures, and the ability to access and modify the CI system as needed. Post and Kagan [32] found a third of knowledge workers report security restrictions hinder their ability to perform their jobs. We observe this issue also applies to CI users. Flexibility describes the trade-off that occurs when developers want systems that are both powerful and highly configurable, yet at the same time, they want those systems to be simple and easy to use. Xu et al. [53] identify the costs of over-configurable systems and found that these systems severely hinder usability. We also observe the tension from this trade-off among developers using CI. In the context of these three trade-offs, we present implications for three audiences: developers, tool builders, and researchers. For example, developers face difficult choices about how much testing is enough, and how to choose the right tests to run.
Tool builders should create UIs for CI users to configure their CI systems, but these UIs should serialize configurations out to text files so that they can be kept in version control. Researchers have much to bring to the CI community, such as helping with fault localization and test parallelization when using CI, and examining the security challenges developers face when using CI. This paper makes the following contributions: 1. We conduct exploratory semi-structured interviews with 16 developers, then triangulate these findings with a Focused Survey of 51 developers at Pivotal and a Broad Survey of 523 developers from all over the world. 2. We provide an empirically justified set of developers’ motivations for using CI. 3. We expose gaps between developers’ needs and existing tooling for CI. 4. We present actionable implications that developers, tool builders, and researchers can build on. The interview script, code set, survey questions, and responses can be found at http://cope.eecs.oregonstate.edu/CI_Tradeoffs 2 BACKGROUND The idea of Continuous Integration (CI) was first introduced [6] in the context of object-oriented design: “At regular intervals, the process of continuous integration yields executable releases that grow in functionality at every release...” This idea was then adopted as one of the core practices of Extreme Programming (XP) [3]. The core premise of CI, as described by Fowler [14], is that the more often a project integrates, the better off it is. CI systems are responsible for retrieving code, collecting all dependencies, compiling the code, and running automated tests. The system should output “pass” or “fail” to indicate whether the CI process was successful. We asked our interview participants to describe their CI usage pipeline. While not all pipelines are the same, they generally share some common elements. Changesets are a group of changes that a developer makes to the code. 
They may be a single commit, or a group of commits, but they should be a complete change, so that after the changeset is applied, it should not break the program. When a CI system observes a change made by developers, this triggers a CI event. How and when the CI is triggered is based on how the CI is configured. One common way to trigger CI is when a commit is pushed to a repository. For the CI to test the code without concern for previous data or external systems, it is important that CI runs in a clean environment. The automated build script should be able to start with a clean environment and build the product from scratch before executing tests. Many developers use containers (e.g., Docker) to implement clean environments for builds. An important step in the CI pipeline is confirming that the changeset was integrated correctly into the application. One common method is a regression test suite, including unit tests and integration tests. The CI system can also perform other analyses, such as linting or evaluating test coverage. The last step is to deploy the artifact. We found some developers consider deployment to be a part of CI, and others consider continuous deployment (CD) to be a separate process. 3 METHODOLOGY Inspired by established guidelines [24, 28, 31, 41, 48], the primary methodologies we employ in this work are interviews with software developers and two surveys of software developers to triangulate [15] our findings. Interviews are a qualitative method and are effective at discovering the knowledge and experiences of the participants. However, they often have a limited sample size [41]. Surveys are a quantitative technique that summarizes information over a larger sample size and thus provides broader results. Together, they provide a much clearer picture than either can provide alone. 
We first use interviews to elicit developers’ experiences and expectations when working with CI, and we build a taxonomy of barriers, unmet needs, motivations, and experiences. We then build a survey, populating the answer choices for each question from the interview results. We deploy this survey at Pivotal, a software and services company that also develops a CI system, Concourse (concourse.ci). To gain an even broader understanding, we also deploy another survey via social media. The interview script, code set, survey questions, and the responses can be found on our companion site. 3.1 Interviews We used semi-structured interviews “which include a mixture of open-ended and specific questions, designed to elicit not only the information foreseen, but also unexpected types of information” [41]. We developed our interview script by performing iterative pilots. We initially recruited participants from previous research, and then used snowball sampling to reach more developers. We interviewed 16 developers from 14 different companies, including large software companies, CI service companies, small development companies, a telecommunications company, and software consultants. Our participants had over eight years of development experience on average. We assigned each participant a subject number (Table 1). They all used CI, and a variety of CI systems, including Concourse, Jenkins, TravisCI, CruiseControl.NET, CircleCI, TeamCity, XCodeBots, Buildbot, Wercker, appVeyor, and proprietary CI systems. Each interview lasted between 30 and 60 minutes, and the participants were offered a US$50 Amazon gift card for participating. The interviews were based on the research questions presented in Section 1. The following are some examples of the questions that we asked in the interview: - Tell me about the last time you used CI. - What tasks prompt you to interact with your CI tools?
- Comparing projects that do use CI with those that don’t, what differences have you observed? - What, if anything, would you like to change about your current CI system? We coded the interviews using established guidelines from the literature \[42\], and followed the guidance from Campbell et al. \[7\] on specific issues related to coding semi-structured interview data, such as segmentation, codebook evolution, and coder agreement. The first author segmented the transcript from each interview by units of meaning \[7\]. The first two authors then collaborated on coding the segmented interviews, using the negotiated agreement technique to achieve agreement \[7\]. Negotiated agreement is a technique where both researchers code a single transcript and discuss their disagreements in an effort to reconcile them before continuing on. We coded the first eight interviews together using this negotiated agreement technique. Because agreement is negotiated along the way, there is no inter-rater agreement number. After the eighth interview, the first and second author independently coded the remaining interviews. Our final codebook contained 25 codes divided into 4 groups: demographics, systems/tools, process, and human CI interaction. The full codeset is available on our companion site. ### Table 1: Interview participants <table> <thead> <tr> <th>Subject</th> <th>Exp.</th> <th>Domain</th> <th>Org. 
Size</th> </tr> </thead> <tbody> <tr> <td>S1</td> <td>8 yrs.</td> <td>Content Platform Provider</td> <td>Small</td> </tr> <tr> <td>S2</td> <td>20 yrs.</td> <td>Content Platform Provider</td> <td>Small</td> </tr> <tr> <td>S3</td> <td>4 yrs.</td> <td>Developer Tools</td> <td>Large</td> </tr> <tr> <td>S4</td> <td>10 yrs.</td> <td>Framework Development</td> <td>Large</td> </tr> <tr> <td>S5</td> <td>10 yrs.</td> <td>Content Management</td> <td>Large</td> </tr> <tr> <td>S6</td> <td>10 yrs.</td> <td>Computer Security Startup</td> <td>Small</td> </tr> <tr> <td>S7</td> <td>5 yrs.</td> <td>Framework Development</td> <td>Small</td> </tr> <tr> <td>S8</td> <td>5 yrs.</td> <td>Media Platform</td> <td>Medium</td> </tr> <tr> <td>S9</td> <td>6 yrs.</td> <td>Language Development</td> <td>Medium</td> </tr> <tr> <td>S10</td> <td>9 yrs.</td> <td>CI Platform Development</td> <td>Medium</td> </tr> <tr> <td>S11</td> <td>6 yrs.</td> <td>Software Development Consulting</td> <td>Medium</td> </tr> <tr> <td>S12</td> <td>10 yrs.</td> <td>CI Platform Development</td> <td>Small</td> </tr> <tr> <td>S13</td> <td>12 yrs.</td> <td>Telecommunications</td> <td>Large</td> </tr> <tr> <td>S14</td> <td>5 yrs.</td> <td>Software Development Consulting</td> <td>Medium</td> </tr> <tr> <td>S15</td> <td>2 yrs.</td> <td>Infrastructure Management</td> <td>Medium</td> </tr> <tr> <td>S16</td> <td>8 yrs.</td> <td>Cloud Software Development</td> <td>Medium</td> </tr> </tbody> </table> ### 3.2 Survey We created a survey with 21 questions to quantify the findings from our semi-structured interviews. The questions for the survey were created to answer our research questions, focusing on what benefits, barriers, and unmet needs developers have when using CI. The survey consisted of multiple choice questions, with a final open-ended text field to allow participants to share any additional information about CI. The answers for these multiple choice questions were populated from the answers given by interview participants. 
We ensured completeness by including an “other” field where appropriate. To prevent biasing our participants, we randomized the order of answers in multiple-choice questions. **Focused Population** We deployed our survey to a focused population of developers at Pivotal. Pivotal embraces agile development and also sponsors the development of Concourse CI. We sent our survey via email to 294 developers at Pivotal, and we collected 51 responses for a response rate of 17.3%. All respondents from Pivotal reported using CI. **Broad Population** We believe there are many voices among software developers, and we wanted to hear from as many of them as possible. We chose our sampling method for the Broad Survey to reach as many developers as possible. We recruited participants by advertising our survey on social media (Facebook, Twitter, and reddit). As with all survey approaches, we were forced to make certain concessions [5]. When recruiting participants online, we can reach larger numbers of respondents, but in doing so, the results suffer from self-selection bias. To maximize participation, we followed guidelines from the literature [42], including keeping the survey short and raffling one US$50 Amazon gift card among survey participants. We collected 691 survey responses from over 30 countries, of which 523 were complete. Over 50% of our participants had over 10 years of software development experience, and over 80% had over 4 years of experience. ### 4 ANALYSIS OF RESULTS #### 4.1 Barriers We answer **What barriers do developers face when using CI? (RQ1)** We collected a list of barriers to adopting and using CI that our interview participants reported experiencing. We asked our survey participants to select up to three problems that they had experienced. If they had experienced more than three, we asked them to choose the three most common. **B1 Troubleshooting a CI build failure**.
When a CI build fails, some participants begin the process of identifying why the build failed. Sometimes, this can be fairly straightforward. However, for some build failures on the CI server, where the developer does not have the same access as they have when debugging locally, troubleshooting the failure can be quite challenging. S4 described one such situation: *If I get lucky, I can spot the cause of the problem right from the results from the Jenkins reports, and if not, then it becomes more complicated.* Another participant described how Wercker saves a container from failed builds to help with troubleshooting.

**B2 Overly long build times**. When we asked S2 whether his build times grow over time, he said: *Absolutely [our build times grow over time]. Worst case scenario it creeps with added dependencies, and added sloppy tests, and too much I/O. That's the worst case scenario for me, when it is a slow creep.* Other participants told us they had seen build times increase because of bugs in their build tools, problems with caching, dependency issues during the build process, and adding different styles of tests (e.g., acceptance tests) to the CI builds. To dig a little deeper, we examined in-depth what developers meant by overly long build times. S9 said: *My favorite way of thinking about build time is basically, you have tea time, lunch time, or bedtime. Your builds should run in like, 5-ish minutes, however long it takes to go get a cup of coffee, or in 40 minutes to 1.5 hours, however long it takes to go get lunch, or in 8-ish hours, however long it takes to go and come back the next day.* Fowler [14] suggests most projects should try to follow the XP guideline of a 10-minute build. When we asked our Broad Survey participants what is the maximum acceptable time for a CI build to take, the most common answer was also 10 minutes, as shown in Figure 1. Many of our interview participants reported having spent time and effort reducing the build time for their CI process. S15 said: *When the build takes too long to run, we start to evaluate the tests, and what do we need to do to speed up the environment to throw more tests in the given amount of time. ... Mostly I feel that CI isn't very useful if it takes too long to get the feedback.* When we asked our survey participants, 96% of Focused Survey participants and 78% of Broad Survey participants said they had actively worked to reduce their build times. This shows long build times are a common barrier faced by developers using CI.

**B3 Automating the build process**. CI systems automate the manual process that developers previously followed when building and testing their code. The migration of these manual processes to automated builds requires that developers commit time and resources before the benefits of CI can be realized.

**B4 Lack of support for the desired workflow**. Interview participants told us that CI tools are often designed with a specific workflow in mind. When using a tool to implement a CI process, it can be difficult to use if one is trying to use a different workflow than the one for which the tool was designed.
**B7 Lack of tool integration.** This barrier is similar to N2 Better tool integration; see section 4.2.

**B8 Security and access controls.** Because CI pipelines have access to the entire source code of a given project, security and access controls are vitally important. For CI pipelines that exist entirely inside of a company firewall, this may not be as much of a concern, but for projects using CI as a service, this can be a major issue. For developers working on company-driven open-source projects, this can also be a concern. S9 said: > depending on your project, you may have an open-source project, but secrets living on or near your CI system. Configuring the security and access controls is vital to protecting those secrets. S16, who uses CI as a service, described how their project uses a secure environment variable (SEV) to authenticate.

#### 4.2 Needs

We next answer What unmet needs do developers have with CI tools? (RQ2) In addition to describing problems they encounter when using CI, our interview participants also described gaps where CI was not meeting their needs.

Table 3: Developer needs unmet by CI <table> <thead> <tr> <th>Need</th> <th>Broad</th> <th>Focused</th> </tr> </thead> <tbody> <tr> <td>N1 Easier configuration of CI servers or services</td> <td>52%</td> <td>32%</td> </tr> <tr> <td>N2 Better tool integration</td> <td>38%</td> <td>17%</td> </tr> <tr> <td>N3 Better container/virtualization support</td> <td>37%</td> <td>27%</td> </tr> <tr> <td>N4 Debugging assistance</td> <td>30%</td> <td>30%</td> </tr> <tr> <td>N5 User interfaces for modifying CI configurations</td> <td>29%</td> <td>20%</td> </tr> <tr> <td>N6 Better notifications from CI servers or services</td> <td>22%</td> <td>25%</td> </tr> <tr> <td>N7 Better security and access controls</td> <td>16%</td> <td>32%</td> </tr> </tbody> </table>

**N1 Easier configuration of CI servers or services.** While many CI tools offer a great deal of flexibility in how they can be used, this flexibility can require a large amount of configuration even for a simple workflow. From our interviews, we find that developers for large software companies rely on the CI engineers to ensure that the configuration is correct, and to help instantiate new configurations. Open-source developers often use CI as a service, which allows for a much simpler configuration. However, for developers trying to configure their own CI server, this can be a substantial hurdle. S8, who was running his own CI server, said: > The configuration and setup is costly, in time and effort, and yeah, there is a learning curve, on how to setup Jenkins, and setup the permissions, and the signing of certificates, and all these things. At first, when I didn't know all these tools, I would have to sort them out, and at the start, you just don't know...

**N2 Better tool integration.** Our interview participants told us that they would like their CI system to better integrate with other tools. For example, S3 remarked: > It would also be cool if the CI ran more analysis on the code, rather than just the tests. Stuff like Lint, FindBugs, or it could run bug detection tools. There are probably CIs that already do that, but ours doesn't. Additionally, in the "other" field of our survey responses, participants added both technical problems, such as poor interoperability between node.js and Jenkins, and non-technical problems, such as "The server team will not install a CI tool for us".

**N3 Better container/virtualization support.** One core concept in CI is that each build should be done in a clean environment, i.e., it should not depend on the environment containing the output from any previous builds. Participants told us that this was very difficult to achieve before software-based container platforms, e.g., Docker. However, there are still times when the build fails, and in doing so, breaks the CI server. S15 explained: > ...there will be [CI] failures, where we have to go through and manually clean up the environment. S3 had experienced the same issues and had resorted to building Docker containers inside other Docker containers to ensure that everything was cleaned up properly.

**N4 Debugging assistance.** When asked about how they debug test failures detected by their CI, most of our participants told us that they get the output logs and start their search there. These output logs can be quite large, though, with hundreds of thousands of lines of output from thousands of tests. This can create quite a challenge when trying to find a specific failure. S7 suggested that they would like their CI server to diff the output from the previous run and hide all the output which remained unchanged. S15, who worked for a large company, had developed an in-house tool to do exactly this, to help developers find errors faster by filtering the output to only show changes from the previous CI run.

**N5 User interfaces for modifying CI configurations.** Many participants described administering their CI tools via configuration scripts. However, participants expressed a desire to make these configuration files editable via a user interface, which they felt would be easier. S3 said: > Most of the stuff we are configuring could go in a UI... We are not modifying heavy logic. We just go in a script and modify some values... So all of the tedious stuff you modify by hand could go into a UI. Additionally, multiple participants also added "Bad UI" as a free-form answer to the question about problems experienced with CI. Developers want to be able to edit their configuration files via user interfaces, but they also want to be able to commit these configurations to their repository. Our interview participants told us they want to commit the configurations, because then when they fork a repository, the CI configurations are included with the new fork as well.

**N6 Better notifications from CI servers or services.** Almost all participants had the ability to set up notifications from their CI server, but very few found them to be useful. When asked about notifications from his CI, S7 said that he will routinely receive up to 20 emails from a single pull request, which he will immediately delete. Other participants did in fact find the notifications useful, though, including S10 who reads through them every morning, to refresh his memory of where he left off the day before.

**N7 Better security and access controls.** This need is similar to B8 Security and access controls; see section 4.1.

Observation: Developers want CI to be both a highly-configurable platform, and simple to setup and maintain. This creates tension because adding configurability increases complexity, whereas simplification necessarily seeks to reduce complexity.

#### 4.3 Motivations

We next answer *Why do developers use CI?* (RQ3) We identified developer motivations from the interviews.

**M1 CI helps catch bugs earlier.** Finding and fixing bugs in production can be an expensive and stressful endeavor. Kerzazi and Adams [22] reported that 50% of all post-release failures were because of bugs. We would expect that preventing the deployment of broken code is a major concern for developers. Indeed, many interview participants said that one of the biggest benefits of CI was that it identifies bugs early on, keeping them out of the production code. For example, S3 said: > [CI] does have a pretty big impact on [catching bugs]. It allows us to find issues even before they get into our main repo, ... rather than letting bugs go unnoticed, for months, and letting users catch them.
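S7's suggestion under N4 — diff the current build log against the previous run's log and hide everything that is unchanged — and S15's in-house equivalent can be sketched with Python's difflib. The log contents below are invented for illustration; this is not the actual tool either participant described.

```python
import difflib

def new_log_lines(previous_log: str, current_log: str) -> list:
    """Return only the lines of the current CI log that did not appear in
    the previous run's log, hiding all unchanged output."""
    diff = difflib.ndiff(previous_log.splitlines(), current_log.splitlines())
    # ndiff prefixes lines unique to the second sequence with "+ ";
    # unchanged lines ("  ") and intraline hints ("? ") are filtered out.
    return [line[2:] for line in diff if line.startswith("+ ")]

previous = "checkout ok\ncompile ok\ntest_login PASSED\ntest_cart PASSED"
current = "checkout ok\ncompile ok\ntest_login PASSED\ntest_cart FAILED: timeout"

for line in new_log_lines(previous, current):
    print(line)  # only the changed line surfaces: test_cart FAILED: timeout
```

On a real multi-megabyte log this filtering collapses thousands of repeated lines down to the handful that actually changed between runs, which is exactly the search-space reduction the participants asked for.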
**M2 Less worry about breaking the build.** Kerzazi et al. [23] reported that for one project, up to 2,300 man-hours were lost over a six month period due to broken builds. Not surprisingly, this was a common theme among interview participants. For instance, S3 discussed how often this happened before CI: > ...and since we didn’t have CI it was a nightmare. We usually tried to synchronize our changes, ... [but] our build used to break two or three times a day. S2 talked about the repercussions of breaking the build: > [When the build breaks], you gotta wait for whoever broke it to fix it. Sometimes they don’t know how, sometimes they left for the day, sometimes they have gone on vacation for a week. There were a lot of points at which all of us, a whole chunk of the dev team was no longer able to be productive. **M3 Providing a common build environment.** One challenge developers face is ensuring that the environment contains all dependencies needed to build the software. By starting the CI process with a clean environment, fetching all the dependencies, and then building the code each time, developers can be assured that they can always build their code. Several developers told us that in their team if the code does not build on the CI server, then the build is considered broken, regardless of how it behaves on an individual developer’s machine. For example, S5 said: > ...If it doesn’t work here (on the CI), it doesn’t matter if it works on your machine. **M4 CI helps projects deploy more often.** Our previous work [19] found that open-source projects that use CI deploy twice as often as projects that do not use CI. In our interviews, developers told us that they feel that CI helped them deploy more often. Additionally, developers told us that CI enabled them to have shorter development cycles than they otherwise would have, even if they did not deploy often for business reasons. 
For example, S14 said: > [Every two weeks] we merge into master, and consider that releasable. We don't often release every sprint, because our customer doesn't want to. Since we are a services company, not a products company, it's up to our customer to decide if they want to release, but we ensure every two weeks our code is releasable if the customer chooses to do so. **M5 CI allows faster iterations.** Participants told us that running CI for every change allows them to quickly identify when the current changeset will break the build, or will cause problems in some other location(s) of the codebase. Having this immediate feedback enables much faster development cycles. This speed allows developers to make large changes quickly, without introducing a large amount of bugs into the codebase. S15 stated: > We were able to run through up to 10 or 15 cycles a day, running through different tests, to find where we were, and where the solutions needed to be. Without being able to do that, without that speed, and that feedback, there is no way we could have accomplished releasing the software in the time frame required with the quality we wanted. **M6 CI makes integration easier.** Initially, CI was presented as a way to avoid painful integrations [14]. However, while developers do think CI makes integration easier, it is not the primary reason that motivates developers to use CI. Many developers see their VCS, not the CI, as the solution to difficult integrations. **M7 Enforcing a specific workflow.** Prior to CI, there was no common way for tools to enforce a specific workflow (e.g., ensuring all tests are run before accepting changes). This is especially a concern for distributed teams, where it is harder to overcome tooling gaps through informal communication channels. However, with CI, not only are all the tests run on every changeset, but everyone knows what the results are.
Everyone on the team is aware when code breaks the tests or the builds, without having to download the code and check the test results on their own machine. This can help find bugs faster and increase team awareness, both of which are important parts of code review [2]. S16 told us that he was pretty sure that before they added CI to their project, contributors were not running the tests routinely. ### Table 4: Developers’ motivation for using CI <table> <thead> <tr> <th>Motivation</th> <th>Broad</th> <th>Focused</th> </tr> </thead> <tbody> <tr> <td>M1 CI helps us catch bugs earlier</td> <td>75%</td> <td>86%</td> </tr> <tr> <td>M2 CI makes us less worried about breaking our builds</td> <td>72%</td> <td>82%</td> </tr> <tr> <td>M3 CI provides a common build environment</td> <td>70%</td> <td>75%</td> </tr> <tr> <td>M4 CI helps us deploy more often</td> <td>68%</td> <td>75%</td> </tr> <tr> <td>M5 CI allows faster iterations</td> <td>57%</td> <td>76%</td> </tr> <tr> <td>M6 CI makes integration easier</td> <td>57%</td> <td>75%</td> </tr> <tr> <td>M7 CI can enforce a specific workflow</td> <td>40%</td> <td>51%</td> </tr> <tr> <td>M8 CI allows testing across multiple platforms</td> <td>29%</td> <td>73%</td> </tr> </tbody> </table> M8 Test across all platforms. CI allows a system to be tested on all major platforms (Windows, Linux, and OS X), without each environment being set up locally by each developer, e.g., S16 stated: We are testing across more platforms now, it is not just OS X and Linux, which is mostly what developers on projects run. That has been useful. Nevertheless, one survey participant responded to our open-ended question at the end of the survey: Simplifying CI across platforms could be easier. We currently want to test for OS X, Linux and Windows and need to have 3 CI services. While this is a benefit already realized for some participants, others see this as an area in which substantial improvements could be made to CI to provide additional support.
**Observation.** Developers use CI to guarantee quality, consistency, and visibility across different environments. However, adding and maintaining automated tests causes these benefits to come at the expense of increased time and effort.

### 4.4 Experiences

We next answer the research question *What benefits do developers experience using CI?* (RQ4). Devanbu et al. [11] found that developers have strongly held beliefs, often based on personal experience more than research results, and that practitioner beliefs should be given due attention. In this section we present developers’ beliefs about using CI, gathered from interviews. Our results show developers are very positive about the use of CI.

**E1 Developers believe projects with CI give more value to automated tests.** Several participants told us that before using CI, although developers would write unit tests, the tests often would not be run, and developers did not feel that writing tests was worth the effort. S11 related:

> Several situations I have been in, there is no CI, but there is a test suite, and there is a vague expectation that someone is running this test sometimes. And if you are the poor schmuck that actually cares about tests, and you are trying to run them, and you can’t get anything to pass, and you don’t know why, and you are hunting around like “does anyone else actually do this?”

However, with the introduction of CI, developers were able to see their tests being run for every changeset, and the whole team becomes aware when the tests catch an error that would otherwise have made it into the product. S16 summarized this feeling:

> [CI] increases the value of tests, and makes us more likely to write tests, to always have that check in there. [Without CI, developers] are not always going to run the tests locally, or you might not have the time to, if it is a larger suite.

**E2 Developers believe projects with CI have higher quality tests.**
Interview participants told us that because projects that use CI run their automated tests more often, and the results are visible to the entire team, this motivates developers to write higher quality tests. Several participants claimed that using CI resulted in higher test coverage, which they equate with higher quality tests. For example, S8 stated:

> ... We jumped the coverage from a single digit to 50% of the code base in one year.

To confirm this, we asked the same question of survey participants. Figure 3 shows that the survey participants overwhelmingly agree that projects with CI have higher quality tests.

**E3 Developers believe projects that use CI have higher code quality.** Developers believe that using CI leads to higher code quality. By writing a good automated test suite, and running it after every change, developers can quickly identify when they make a change that does not behave as anticipated, or breaks some other part of the code. S10 said:

> CI for me is a very intimate part of my development process. ... I lean on it for confidence in all areas. Essentially, if I don’t have some way of measuring my test coverage, my confidence is low. ... If I don’t have at least one end-to-end test, to make sure it runs as humans expect it to run, my confidence is low.

**E4 Developers believe projects with CI are more productive.** According to our interview participants (e.g., S2), CI allows developers to focus more on being productive, and to let the CI take care of boring, repetitive steps, which can be handled by automation.

## 5 DISCUSSION

We discuss the trade-offs developers face when using CI, the implications of those trade-offs, and the differences between our two surveys.

### 5.1 CI Trade-Offs

As with any technology, developers who use CI should be aware of the trade-offs that arise when using that technology. We will look into three trade-offs that developers should be aware of when using CI: Assurance, Security, and Flexibility.
**Assurance (Speed vs Certainty):** Developers must consider the trade-off between speed and certainty. One of the benefits of CI is that it improves validation of the code (see M1, M2, and M9). However, the certainty that code is correct comes at a price. Building and running all these additional tests causes the CI to slow down, which developers also considered a problem (see B2, M10). Ensuring that their code is correctly tested, while keeping build times manageable, is a trade-off developers must be aware of. Rothermel et al. [34] also identify this trade-off in the cost of running tests as a motivation for test prioritization.

**Security (Access vs Information Security):** Information security should be considered by all developers. Developers are concerned about security when using CI (see B8, N7). This is important because a CI pipeline should protect the integrity of the code passing through the pipeline, protect any sensitive information needed during the build and test process (e.g., credentials to a database), and protect the machines that are running the CI system. However, limiting access to the CI pipeline conflicts with developers’ need for better access (see B1, N4). During our interviews, developers reported that troubleshooting CI build failures was often difficult because they did not have the same access to code running on a CI system as they did when running it locally on their own machine. Providing more access may make debugging easier, but poses challenges when trying to ensure the integrity of the CI pipeline. Post and Kagan [32] examined this trade-off for knowledge workers, and found that security restrictions hinder a third of workers from being able to perform their jobs.

**Flexibility (Configuration vs Simplicity):** Another trade-off that developers face is between the flexibility and power of highly configurable CI systems, and the ease of use that comes from simplicity.
Developers wish to have more flexibility in configuring and using their CI systems (see B4, B7, N2, and N3). More flexibility increases the power of a CI system, while at the same time also increasing its complexity. However, the rising complexity of CI systems is also a concern for developers (see B5, B6, N1, and N5). Developers’ needs for more flexibility directly oppose the desire for more simplicity. Xu et al. [53] examined over-configurable systems and also found that these systems severely hinder usability.

### 5.2 Implications

Each of these three trade-offs leads to direct implications for developers, tool builders, and researchers.

**Assurance (Speed vs Certainty)**

**Developers** should be careful to only write tests that add value to the project. Tests that do not provide value still consume resources on every CI build, and slow down the build process. As more tests are written over time, build times trend upward. Teams should schedule time for developers to maintain their test suites, where they can perform tasks such as removing unneeded tests [40], improving the test suite by filling in gaps in coverage, or increasing test quality. Developers face difficult choices about the extent to which each project should be tested, and to what extent they are willing to slow down the build process to achieve that level of testing. Some projects can accept speed reductions because of large, rigorous tests. However, for other projects, it may be better to keep the test run times faster, by only executing some of the tests. While this can be done manually, developers should consider using advanced test selection/minimization approaches [4, 12, 16, 20, 54].

**Tool Builders** can support developers by creating tools that allow developers to easily run subsets of their testing suites [54]. Helping developers perform better test selection can trade some certainty for speed gains.

**Researchers** should investigate the trade-offs between speed and certainty.
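As a concrete illustration of trading certainty for speed, change-based test selection can be sketched as follows. The coverage map and test names here are hypothetical; in practice the map (test name to the source files it exercises) would come from a coverage tool.

```python
def select_tests(changed_files, coverage_map):
    """Pick only the tests that exercise at least one changed file.

    A minimal sketch of change-based test selection: skipping tests
    whose covered files did not change gives up some certainty in
    exchange for a faster CI build.
    """
    changed = set(changed_files)
    return sorted(
        test for test, files in coverage_map.items()
        if changed & files  # non-empty intersection: the test is affected
    )

# Hypothetical project: which source files each test touches.
coverage_map = {
    "test_parser": {"parser.py", "lexer.py"},
    "test_api":    {"api.py"},
    "test_utils":  {"utils.py"},
}
print(select_tests(["parser.py"], coverage_map))  # ['test_parser']
```

Only the affected test runs for this change; a change that touches no covered file selects no tests at all, which is exactly where the certainty-for-speed trade appears.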
Are there specific thresholds where the build duration matters more than others? Our results suggest that developers find it important to keep build times under 10 minutes. Researchers should find ways to give the best possible feedback to developers within 10 minutes. Another avenue for researchers is to build upon previous work [13] using test selection and test prioritization to make the CI process more cost effective.

**Security (Access vs Information Security)**

**Developers** should be cognizant of the security concerns that extra access to the CI pipeline introduces. This is especially a concern for developers inside companies where some or all of their code is open source. One interview participant told us that they navigate the dichotomy between security and openness by maintaining an internal CI server that operates behind their company firewall, while also using Travis CI externally. They cannot expose their internal CI due to confidentiality requirements, but they use external CI to... be taken seriously and maintain a positive relationship with the developer community at large.

**Tool Builders** should provide developers with the ability to have more access to the build pipeline, without compromising the security of the system. One way of accomplishing this is to provide fine-grained account management with different levels of access, e.g., restricting less trusted accounts to view-only access of build results, and allowing trusted accounts to have full access to build results and management features in the CI system.

**Researchers** should explore the security challenges that arise when using CI. Although CI aims at automating and simplifying the testing and validation process, the increased infrastructure provides additional attack vectors that can be exploited. The security implications of CI require more thorough examination by security researchers in particular.
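The fine-grained, tiered account management suggested above for tool builders can be sketched as a simple ordered access check. The account names and levels are illustrative, not any CI system's real scheme.

```python
from enum import IntEnum

class Access(IntEnum):
    # Ordered so that a higher level implies every lower one.
    NONE = 0
    VIEW = 1  # may read build results only
    FULL = 2  # may also manage builds and CI configuration

# Hypothetical account table; a real CI service would store this itself.
ACCOUNTS = {
    "guest": Access.NONE,
    "contributor": Access.VIEW,
    "maintainer": Access.FULL,
}

def can(user, required):
    """Return True if `user` holds at least the `required` access level."""
    return ACCOUNTS.get(user, Access.NONE) >= required

print(can("contributor", Access.VIEW))  # True: may view build results
print(can("contributor", Access.FULL))  # False: may not manage the pipeline
```

Less trusted accounts get enough visibility to debug their own failures, while management features stay restricted, which is one way to relieve the access-versus-security tension described above.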
Researchers should also examine the feasibility of creating systems that allow developers to safely expose their CI systems without compromising their security.

**Flexibility (Configuration vs Simplicity)**

**Developers** should recognize that custom development processes bring complexity and increase maintenance and installation costs. They should consider adopting convention over configuration if they want to reduce the complexity of their CI system. Developers should strive to keep their processes as simple as possible, to avoid adding unneeded complexity, and should consider the long-term costs of highly complex custom CI systems. If the CI becomes overly complex, and its administration is not shared among the team, the long-term viability of the pipeline is vulnerable if the maintainer leaves the project. Developers should also weigh the long-term maintenance costs before adding complexity to their CI pipeline.

**Tool Builders** must contend with developers who want expanded UIs for managing the CI pipeline, as well as having the underlying configurations be captured by version-control systems. Tool builders should create tools that allow for UI changes to configurations, but also output those configurations in simple text files that can be easily included in version control.

**Researchers** should collect empirical evidence that helps developers, who wish to reduce complexity by prioritizing convention over configuration, to establish those conventions based on evidence, not on arbitrary decisions. Researchers should develop a series of empirically justified “best practices” for CI processes. Also, researchers should evaluate the claims of developers who strongly believe that CI improves test quality, and that CI makes them more productive.

### 5.3 Focused (Pivotal) vs Broad Survey Results

We deployed the Focused Survey at a single company (Pivotal), and the Broad Survey to a large population of developers using social media.
After performing both surveys, we discussed the findings with a manager at Pivotal, and these discussions allowed us to develop a deeper understanding of the results.

**Flaky Tests** The survey deployed at Pivotal contained 4 additional questions requested by Pivotal. One question asked developers to report the number of CI builds failing each week due to true test failures. Another question asked developers to estimate the number of CI builds failing due to non-deterministic (flaky) tests [27]. Figure 6 shows the reported number of CI build failures because of flaky tests, as well as failures due to true test failures. There was no significant difference between the two distributions (Pearson’s Chi-squared test, $p\text{-value} = 0.48$), suggesting that developers experienced similar numbers of flaky and true CI failures per week. However, for the largest category, more than 10 failures a week, there were twice as many flaky failures as true failures. When we discussed our findings with the manager at Pivotal, he indicated this was the most surprising finding. He related that at Pivotal, they have a culture of trying to remove flakiness from tests whenever possible. That claim was supported by our survey responses, in which 97.67% of Pivotal participants reported that when they encounter a flaky test, they fix it. Nevertheless, our participants reported that CI failures at Pivotal were just as likely to be caused by flaky tests as by true test failures.

**Build Times** Focused Survey respondents indicated that their CI build times typically take “greater than 60 minutes”. This is in contrast with the “5-10 minutes” average response from respondents in the Broad Survey. This difference can also be observed in the acceptable build time question, in which Focused Survey respondents selected “varies by project” most often, compared to the Broad Survey respondents, who selected “10 minutes” as the most commonly acceptable build time.
Pivotal management promotes the use of CI, and its accompanying automation, for as many aspects of their software development as possible. According to the manager at Pivotal, the difference in responses for actual and acceptable build times can be explained by the belief that adhering to test-driven development results in significantly more unit tests; for Pivotal, the extra testing is worth the longer CI build times. The manager also suggested that the addition of multiple target platforms in CI builds necessarily increases build times. Therefore, at Pivotal, while they seek to reduce those times whenever possible, they accept longer build times when necessary.

**Maintenance Costs** Focused Survey respondents reported experiencing “troubleshooting a CI build failure”, “overly long CI build times”, and “maintaining a CI server or service” more often than the Broad Survey respondents. When asked, the manager at Pivotal indicated that they actively promote a culture of process ownership within their development teams, so the developers are responsible for maintaining and configuring the CI services that they use. They also said that the CI systems they use are more powerful and complex than other CI systems, resulting in a more complicated setup but providing more control over the build process.

## 6 THREATS TO VALIDITY

**Reproducibility** Can others replicate our results? Qualitative studies in general are very difficult to replicate. We address this threat by conducting interviews, a focused survey at a single company, and a large-scale survey of a broad range of developers. The interview script, code set, survey questions, and raw data can be found on our companion site. We cannot publish the transcripts because we told the interview participants we would not release the transcripts.

**Construct** Are we asking the right questions?
To answer our research questions, we used semi-structured interviews [41], which explore themes while also letting participants bring up new ideas throughout the process. By allowing participants the freedom to bring up topics, we avoid biasing the interviews with our preconceived ideas of CI.

**Internal** Did we skew the accuracy of our results with how we collected and analyzed information? Interviews and surveys can be affected by bias and inaccurate responses, whether intentional or unintentional. We gave interviewees gift cards for their participation and offered the survey participants the chance to win a gift card, which could bias our results. To mitigate these concerns, we followed established guidelines in the literature [31, 39, 42] for designing and deploying our survey. We ran iterative pilots of both the interviews and the surveys, and kept the surveys as short as possible.

**External** Do our results generalize? By interviewing selected developers, it is not possible to understand the entire developer population. To mitigate this, we attempted to recruit as diverse a population as possible, including 14 different companies, and a wide variety of company sizes and domains. We then validated our responses using the Focused Survey, with 51 responses, and the Broad Survey, with 523 responses from over 30 countries. Because Pivotal is a company which builds CI tools, the results could be biased in favor of CI. To mitigate this, we widely recruited participants for the Broad Survey. However, because we recruited participants for the Broad Survey by advertising online, our results may be affected by self-selection bias.

## 7 RELATED WORK

**Continuous Integration Studies** Vasilescu et al. [50] performed a preliminary quantitative study of quality outcomes for open-source projects using CI. Our previous work [19] presented a quantitative study of the costs, benefits, and usage of CI in open-source software.
These studies do not examine barriers or needs when using CI, nor do they address the trade-offs developers must contend with. In contrast to these studies, we develop a deep understanding of the barriers and unmet needs of developers through interviews and surveys. We also discover trade-offs users face when using CI. Debbiche et al. [10] present a case study of challenges faced by a telecommunications company when adopting CI. They present barriers from a specific company, but provide no generalized findings and do not address needs, experiences, or benefits of CI. Other researchers have studied ways to improve CI. Stähl and Bosch [44] study automated software integration, a key building block for CI. Elbaum et al. [13] examined the use of regression test selection techniques to increase cost-effectiveness in CI. Vos et al. [52] propose running CI tests even after deployment, to check the production code. Muşlu et al. [29] ran tests continuously in the IDE, even more often than in CI. Staples et al. [45] describe Continuous Validation as a potential next step after CI/CD. Other work related to CI and automated testing includes generating acceptance tests from unit tests [21], black-box test prioritization [18], ordering of failed unit tests [17], generating automated tests at runtime [1], and prioritizing acceptance tests [43].

**Continuous Delivery** Continuous Delivery (CD), the automated deployment of software, is enabled by the use of CI. Olsson et al. [30] performed a case study of four companies transitioning to continuous delivery. They found some barriers in the transition to CD similar to those we find for CI, including automating the build process (B3), lack of support for desired workflow (B4), and lack of tool integration (B7). Leppänen et al. [26] conducted semi-structured interviews with 15 developers to learn more about CD. Their paper does not include quantitative analysis and does not claim to provide generalized findings.
Others have studied CD and MySQL schemas [9], CD at Facebook [37], and the tension between release speed and software quality when doing CD [38].

**Developer Studies** We perform a study of developers to learn about their barriers, unmet needs, motivations, and experiences. Many other researchers have also studied developers, e.g., to learn how DevOps handles security [49], developers’ debugging needs [25], and how developers examine code history [8].

**Automated Testing** Previous work has examined the intertwined nature of CI and automated testing. Stolberg [46] and Sumrell [47] both provide experience reports of the effects of automating tests during transitions to CI. Santos and Hindle [36] used Travis CI build status as a proxy for code quality.

## 8 CONCLUSIONS AND FUTURE WORK

Software teams use CI for many activities, including to catch errors, make integration easier, and deploy more often. Despite the many benefits of CI, developers still encounter a wide variety of problems with CI. We hope that this paper motivates researchers to tackle the hard problems that developers face with CI. For example, future work should examine the relationship between developers’ desired and actual build times when using CI. Another area that we identified for future work is a deeper analysis into flaky tests. Flaky test identification tools could automatically detect flaky tests to help developers know if CI failures are due to flaky tests or legitimate test failures. CI is here to stay as a development practice, and we need continuous improvement (“CI” of a different kind) of CI to realize its full potential.

## 9 ACKNOWLEDGMENTS

We thank Martin Fowler, Brian Marick, and Joel Spolsky for promoting the Broad Survey, and Matthew Kocher for all the help with the Focused Survey at Pivotal Labs.
We also thank Amin Alipour, Andrew Begel, Souti Chattopadhyay, Mihai Codoban, Matt Hammer, Sean McGregor, Cyrus Omar, Anita Sarma, and the anonymous reviewers for their valuable comments on earlier versions of this paper. This research was partially supported by NSF grants CCF-1421503, CCF-1438982, CCF-1439957, and CCF-1553741.

## REFERENCES

[30] Helena Holmström Olsson, Hiva Alahyari, and Jan Bosch. 2012. Climbing the “Stairway to Heaven” – A Multiple-Case Study Exploring Barriers in the Transition from Agile Development towards Continuous Deployment of Software. In Euromicro SEAA.

[47] Megan Sumrell. 2007. From Waterfall to Agile – How does a QA Team Transition? In AGILE.
# AVOIDING THE TOP 10 SOFTWARE SECURITY DESIGN FLAWS

Iván Arce, Kathleen Clark-Fisher, Neil Daswani, Jim DelGrosso, Danny Dhillon, Christoph Kern, Tadayoshi Kohno, Carl Landwehr, Gary McGraw, Brook Schoenfield, Margo Seltzer, Diomidis Spinellis, Izar Tarandach, and Jacob West

## CONTENTS

- Introduction
- Mission Statement
- Preamble
- Earn or Give, but Never Assume, Trust
- Use an Authentication Mechanism that Cannot be Bypassed or Tampered With
- Authorize after You Authenticate
- Strictly Separate Data and Control Instructions, and Never Process Control Instructions Received from Untrusted Sources
- Define an Approach that Ensures All Data Are Explicitly Validated
- Use Cryptography Correctly
- Identify Sensitive Data and How They Should Be Handled
- Always Consider the Users
- Understand How Integrating External Components Changes Your Attack Surface
- Be Flexible When Considering Future Changes to Objects and Actors
- Get Involved

## IEEE Computer Society Center for Secure Design Participants

- Iván Arce, Sadosky Foundation
- Neil Daswani, Twitter
- Jim DelGrosso, Cigital
- Danny Dhillon, RSA
- Christoph Kern, Google
- Tadayoshi Kohno, University of Washington
- Carl Landwehr, George Washington University
- Gary McGraw, Cigital
- Brook Schoenfield, McAfee, Part of Intel Security Group
- Margo Seltzer, Harvard University
- Diomidis Spinellis, Athens University of Economics and Business
- Izar Tarandach, EMC
- Jacob West, HP

Staff: Kathleen Clark-Fisher, Manager, New Initiative Development; Jennie Zhu-Mai, Designer

## Public Access Encouraged

Because the authors, contributors, and publisher are eager to engage the broader community in open discussion, analysis, and debate regarding a vital issue of common interest, this document is distributed under a Creative Commons BY-SA license. The full legal language of the BY-SA license is available here: http://creativecommons.org/licenses/by-sa/3.0/legalcode.

Under this license, you are free to both share (copy and redistribute the material in any medium or format) and adapt (remix, transform, and build upon the material for any purpose) the content of this document, as long as you comply with the following terms:

Attribution — You must give appropriate credit, provide a link to the license, and indicate if changes were made. You may use any reasonable citation format, but the attribution may not suggest that the authors or publisher has a relationship with you or endorses you or your use.

“ShareAlike” — If you remix, transform, or build upon the material, you must distribute your contributions under the same BY-SA license as the original. That means you may not add any restrictions beyond those stated in the license, or apply legal terms or technological measures that legally restrict others from doing anything the license permits.

Please note that no warranties are given regarding the content of this document. Derogatory use of the content of this document to portray the authors, contributors, or publisher in a negative light may cancel the license under Section 4(a). This license may not give you all of the permissions necessary for a specific intended use.

## About the IEEE Computer Society

The IEEE Computer Society is the world’s leading computing membership organization and the trusted information and career-development source for a global workforce of technology leaders.
The Computer Society provides a wide range of forums for top minds to come together, including technical conferences, publications, a comprehensive digital library, unique training webinars, professional training, and the TechLeader Training Partner Program to help organizations increase their staff’s technical knowledge and expertise. To find out more about the community for technology leaders, visit http://www.computer.org. Published by the IEEE Computer Society.

## INTRODUCTION

Most software that has been built and released typically comes with a set of defects: implementation bugs and design flaws. To date, there has been a larger focus on finding implementation bugs than on identifying flaws. In 2014, the IEEE Computer Society, the leading association for computing professionals, launched a cybersecurity initiative with the aim of expanding and escalating its ongoing involvement in the field of cybersecurity. The first step for the initiative was to launch the IEEE Computer Society Center for Secure Design. The Center intends to shift some of the focus in security from finding bugs to identifying common design flaws, in the hope that software architects can learn from others’ mistakes. To achieve this goal, the Center brought people together from different organizations at a workshop in early 2014. At the workshop, participants discussed the types of flaws they had either identified in their own internal design reviews or that were available from external data. They arrived at a list of what they felt were the top security design flaws. Many of the flaws that made the list have been well known for decades, but continue to persist. This document is the result of that discussion: it describes the top 10 security design flaws and how to avoid them.

## MISSION STATEMENT

The IEEE Computer Society’s Center for Secure Design (CSD) will gather software security expertise from industry, academia, and government.
The CSD provides guidance on:

- Recognizing software system designs that are likely vulnerable to compromise.
- Designing and building software systems with strong, identifiable security properties.

The CSD is part of the IEEE Computer Society’s larger cybersecurity initiative, launched in 2014.

## PREAMBLE

The goal of a secure design is to enable a system that supports and enforces the necessary authentication, authorization, confidentiality, data integrity, accountability, availability, and non-repudiation requirements, even when the system is under attack. While a system may always have implementation defects or “bugs,” we have found that the security of many systems is breached due to design defects, or “flaws.” We believe that if organizations design secure systems that avoid such flaws, they can significantly reduce the number and impact of security breaches. While bugs and flaws are both types of defects, we believe there has been quite a bit more focus on common bug types than there has been on secure design and the avoidance of flaws. Before we discuss our contribution in this document, we briefly discuss the differences between bugs and flaws. Both bugs and flaws are types of defects. A defect may lie dormant in software for years, only to surface in a fielded system with major consequences. A bug is an implementation-level software problem. Bugs may exist in code but never be executed. A flaw, by contrast, is a problem at a deeper level. Flaws are often much more subtle than simply an off-by-one error in an array reference or use of an incorrect system call. A flaw might be instantiated in software code, but it is the result of a mistake or oversight at the design level. For example, a number of classic flaws exist in error-handling and recovery systems that fail in an insecure or inefficient fashion.
In this document, a group of software security professionals have contributed both real-world data and expertise to identify some of the most significant design flaws that have led to security breaches over the past several years. The list of issues presented here is focused entirely on the most widely and frequently occurring design flaws as compiled from data provided by the member organizations of the IEEE Computer Society Center for Secure Design (CSD). EARN OR GIVE, BUT NEVER ASSUME, TRUST Software systems comprising more than just a single monolithic component rely on the composition and cooperation of two or more software tiers or components to successfully accomplish their purpose. These designs often depend on the correct functioning of the existing parts. They will be inherently insecure if any of those parts are run in a potentially hostile environment, such as a user’s desktop computer, an unmanaged device, or a runtime or sandbox that can be tampered with by an attacker. Offloading security functions from server to client exposes those functions to a much less trustworthy environment, which is one of the most common causes of security failures predicated on misplaced trust. Designs that place authorization, access control, enforcement of security policy, or embedded sensitive data in client software thinking that it won’t be discovered, modified, or exposed by clever users or malicious attackers are inherently weak. Such designs will often lead to compromises. Classic examples of software where trust is misplaced include a web browser or a thick-client application, but there are many more examples of client software. They include applications running on a mobile device, or embedded software that might be found in modern automobiles, pacemakers, gaming systems, or home appliances. Even calls into your APIs from business partners could be considered client software in some sense. 
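The principle above can be made concrete with a small sketch: the server treats every value the client sends as a claim to be re-checked, never as a decision already made. The names here (`PRICE_TABLE`, `place_order`) are illustrative, not from any particular system.

```python
# Sketch: the server never trusts totals or privileges computed on the client.
# PRICE_TABLE and place_order are hypothetical names for illustration only.

PRICE_TABLE = {"widget": 500, "gadget": 1250}  # authoritative prices, in cents

def place_order(client_payload):
    """Recompute the order total server-side instead of trusting the client's figure."""
    total = 0
    for item, qty in client_payload.get("items", {}).items():
        if item not in PRICE_TABLE:
            raise ValueError("unknown item: %r" % item)
        if not isinstance(qty, int) or not (0 < qty <= 100):
            raise ValueError("invalid quantity for %r" % item)
        total += PRICE_TABLE[item] * qty
    # Any client-sent "total" field is ignored entirely; the server's
    # computation is the only one that counts.
    return total
```

Even if a tampered client sends `{"total": 1}`, the charge is derived from the server's own price table, so the compromised client gains nothing.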
When untrusted clients send data to your system or perform a computation on its behalf, the data sent must be assumed to be compromised until proven otherwise. In some cases you may be able to guarantee that the client is, indeed, who it attests it is, or that the business logic it contains has not been altered or circumvented, or that external factors have not influenced the integrity of the computations it performed. But these situations are not the rule, and these underlying assumptions can change when new vulnerabilities are discovered. It is safer in the long run to design a software system under the assumption that components running on any platform whose integrity can’t be attested are inherently not trustable, and are therefore unsuitable for performing security sensitive tasks. If, nonetheless, security operations must be offloaded to components running on an untrusted platform, the design should impose extreme caution on how the computation and its output are treated. Common weaknesses related to client trust reside in various parts of the system, but tend to share a sensibility. A designer might (incorrectly) assume that server APIs will always be called in the same order every time. He or she might believe that the user interface is always able to restrict what the user is able to send to the server. He could try to build the business logic solely on the client side, or attempt to actually store a secret in the client. And, of course, a designer can run into danger by thinking that any intellectual property (IP) sent to the client can be protected through technical means. Though security-aware development strategies cannot eliminate all these problems (or even resolve conflicts in goals for the software being developed), there are useful ways to minimize the potential risks. For example, some organizations will claim a real business need to store intellectual property or other sensitive material on the client. 
The first consideration is to confirm that sensitive material really does need to be stored on the client. When it truly is necessary to do so, various binary protection mechanisms can delay the leaking of sensitive material. Possible techniques to consider include obfuscation or anti-debugging (although the strength of these protections varies widely, so designers should understand the level of protection actually achieved with each tool or technique). Subject matter experts should be consulted if the system requires a client component with a level of protection that cannot be trivially compromised. If IP or sensitive material must be stored or sent to the client, the system should be designed to be able to cope with potential compromise. For instance, the same shared secret or other cryptographic material shouldn’t be used on all the clients. Make the validity of what is offloaded to the client limited in time, set expiration dates for data stored in the client, watermark IP, and double-check client computations that are security sensitive. On a related note, design your system to work in a limited fashion even when one or many clients have been completely compromised. Finally, make sure all data received from an untrusted client are properly validated before processing. Follow the guidance described in the “Define an Approach that Ensures All Data Are Explicitly Validated” section. When designing your systems, be sure to consider the context where code will be executed, where data will go, and where data entering your system comes from. Failing to consider these things will expose you to vulnerabilities associated with trusting components that have not earned that trust. USE AN AUTHENTICATION MECHANISM THAT CANNOT BE BYPASSED OR TAMPERED WITH Authentication is the act of validating an entity’s identity. One goal of a secure design is to prevent an entity (user, attacker, or in general a “principal”) from gaining access to a system or service without first authenticating.
Once a user has been authenticated, a securely designed system should also prevent that user from changing identity without re-authentication. Authentication techniques require one or more factors such as: something you know (e.g., a password), something you are (e.g., biometrics such as fingerprints), or something you have (e.g., a smartphone). Multi-factor (sometimes referred to as N-factor) authentication refers to the technique of requiring multiple distinct factors to prove your identity. Authentication via a cookie stored on a browser client may be sufficient for some resources; stronger forms of authentication (e.g., a two-factor method) should be used for more sensitive functions, such as resetting a password. In general, a system should consider the strength of the authentication a user has provided before taking action. Note also that authentication encompasses more than just human-computer interaction; often, in large distributed systems, machines (and/or programs running on those machines) authenticate themselves to other machines. The ability to bypass an authentication mechanism can result in an unauthorized entity having access to a system or service that it shouldn’t. For example, a system that has an authentication mechanism, but allows a user to access the service by navigating directly to an “obscure” URL (such as a URL that is not directly linked to in a user interface, or that is simply otherwise “unknown” because a developer has not widely published it) within the service without also requiring an authentication credential, is vulnerable to authentication bypass. The use of authentication techniques that don’t fall into the category of something you know, something you are, or something you have may also allow users to access a system or service they shouldn’t. System designers should beware of authentication techniques that depend on assumptions about sole possession of resources that may actually be shared. 
For example, authentication mechanisms that identify a user by their IP address wouldn’t be useful if the addresses were shared among different users at different times; for instance, via an address-sharing/configuration protocol such as DHCP. Even when IP addresses are tied to particular devices, authentication based on device addresses is not a substitute for user authentication, as IP addresses can be spoofed and are not necessarily associated with specific users for a long time. As another concrete illustration, authentication mechanisms that rely on a computer’s MAC address, which can easily be changed or spoofed, can result in unauthorized access if the device assumed to be identified with that individual is lost or stolen. Typically, the act of authentication results in the creation of a token, capability (as often referred to in operating systems literature), or ticket representing a principal that is used throughout the system or service. If such tokens (or credentials) are deterministically derived from easy-to-obtain information, such as a user name, then it becomes possible to forge identities, allowing users to impersonate other users. Credentials must not be easy to forge. Upon successful authentication, the user may be provided with an authentication credential, token, or ticket, which can be provided back to the system so that the user does not need to be re-authenticated for every request or transaction made via the system. At the same time, if it is possible for an attacker to forge the authentication credential, token, or ticket, the attacker can bypass the authentication mechanism. System designers can reuse time-tested authentication mechanisms such as Kerberos instead of building a new one. Alternatively, system designers are encouraged to use cryptography correctly (see the corresponding “Using Cryptography Correctly” section later in this document) in constructing authentication credentials, tokens, and tickets. 
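The unforgeability requirement above can be sketched with Python's standard library: a token derived only from the user name is trivially forgeable, but binding it to a server-side secret with an HMAC makes forgery infeasible for anyone who lacks that secret. This is a minimal illustration, not a complete credential scheme (it omits expiry, revocation, and the other lifetime controls discussed here).

```python
import hashlib
import hmac
import secrets

# Sketch: credentials must not be deterministically derivable from public
# information. SERVER_KEY is a per-deployment secret held only by the server.
SERVER_KEY = secrets.token_bytes(32)

def issue_token(username):
    # The tag depends on the server's secret key, so a client cannot
    # compute a valid tag for a different username.
    tag = hmac.new(SERVER_KEY, username.encode(), hashlib.sha256).hexdigest()
    return username + ":" + tag

def verify_token(token):
    username, _, tag = token.partition(":")
    expected = hmac.new(SERVER_KEY, username.encode(), hashlib.sha256).hexdigest()
    # compare_digest avoids leaking information through timing differences.
    return username if hmac.compare_digest(tag, expected) else None
```

A forged token such as `"alice:deadbeef"` fails verification, while a token issued by the server round-trips to the authenticated principal.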
If an authentication system does not limit the lifetime of an authentication interaction, then it may inadvertently grant access to a user to whom it should not. For example, imagine a user who logs into a public terminal and then walks away without logging out (which should terminate the session). A second user using the public terminal might now be able to use the system or service as the first user. A properly designed authentication system may automatically log the user out after a period of inactivity. Authentication system designs should automatically provide a mechanism requiring re-authentication after a period of inactivity or prior to critical operations. As an example, upon receiving a transaction request to conduct certain sensitive actions such as changing a password, or transferring funds to another financial institution, a system could ask the user to re-enter their existing password again to confirm their transaction request, even though the user may already be authenticated. The design of a system’s re-authentication scheme, and when and how often to ask a user to re-enter their password, needs to be mindful of not only security, but also usability and convenience. Asking users to frequently re-enter their password can be damaging to security, as it trains people’s muscle memory to enter their password every time they see a prompt and sets them up as easy phishing targets. By far the most common authentication mechanism remains the password. Using passwords requires that the system or service have a mechanism to associate a given password with a particular user. If this information is not properly stored, it may be possible for agents other than the user to obtain access to them. Storing such information securely is non-trivial, and the reader is referred to the use of an applied cryptography expert as noted in the “Using Cryptography Correctly” section for guidance. 
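The standard shape of secure password storage (a per-user random salt plus a slow key-derivation function) can be sketched with the stdlib's PBKDF2. This is a teaching sketch only: the iteration count here is illustrative, and in practice an existing, vetted password-management component reviewed by an applied-cryptography expert is preferable to hand-rolling even this.

```python
import hashlib
import hmac
import secrets

# Illustrative work factor; a real deployment should set this per current
# guidance and with expert review.
ITERATIONS = 100_000

def hash_password(password):
    """Return (salt, digest); the plaintext password is never stored."""
    salt = secrets.token_bytes(16)  # unique per user, defeats rainbow tables
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def check_password(password, salt, digest):
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    # Constant-time comparison of the derived keys.
    return hmac.compare_digest(candidate, digest)
```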
Just as it is advisable to reuse tried and tested cryptographic algorithms, it is also advisable to re-use already built and tested password management systems instead of building new ones. It’s preferable to have a single method, component, or system responsible for authenticating users. Such a single mechanism can serve as a logical “choke point” that cannot be bypassed. Much as in code reuse, once a single mechanism has been determined to be correct, it makes sense to leverage it for all authentication. To summarize, authentication mechanisms are critical to secure designs. They can be susceptible to various forms of tampering and may potentially be bypassed if not designed correctly. We recommend that a single authentication mechanism leverage one or more factors as per an application’s requirements, that it serve as a “choke point” to avoid potential bypass, and that authentication credentials have limited lifetimes, be unforgeable, and be stored so that if the stored form is stolen, they cannot easily be used by the thief to pose as legitimate users. AUTHORIZE AFTER YOU AUTHENTICATE While it is extremely important to assess a user’s identity prior to allowing them to use some systems or conduct certain actions, knowing the user’s identity may not be sufficient before deciding to allow or disallow the user to perform certain actions. For instance, once an automatic teller machine (ATM) authenticates a user via something they have (a debit card), and something they know (a PIN), that does not necessarily mean that user is allowed to withdraw an arbitrary amount of cash from their account. Most users may be authorized to withdraw up to a certain limit per day, or to conduct certain actions (view balance) but not others (transfer funds outside the bank) from the ATM. Authorization should be conducted as an explicit check, and as necessary even after an initial authentication has been completed.
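The ATM example above suggests the shape of an explicit authorization check that runs even for an already-authenticated user. The limit, permission names, and function below are illustrative values, not prescribed by the text.

```python
# Sketch: authorization is a separate, explicit check after authentication.
# The limit and permission names are hypothetical illustration values.
DAILY_WITHDRAWAL_LIMIT = 50_000  # cents

class AuthorizationError(Exception):
    pass

def authorize_withdrawal(user, amount, withdrawn_today):
    """Authenticated is necessary but not sufficient; each action is checked."""
    if not user.get("authenticated"):
        raise AuthorizationError("not authenticated")
    if "withdraw" not in user.get("permissions", ()):
        raise AuthorizationError("action not permitted for this user")
    if amount + withdrawn_today > DAILY_WITHDRAWAL_LIMIT:
        raise AuthorizationError("daily withdrawal limit exceeded")
    return True
```

Routing every sensitive action through one such check (rather than scattering ad-hoc `if` tests) mirrors the "common infrastructure" recommendation that follows.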
Authorization depends not only on the privileges associated with an authenticated user, but also on the context of the request. The time of the request and the location of the requesting user may both need to be taken into account. Sometimes a user’s authorization for a system or service needs to be revoked, for example, when an employee leaves a company. If the authorization mechanism fails to allow for such revocation, the system is vulnerable to abuse by authenticated users exercising out-of-date authorizations. For particularly sensitive operations, authorization may need to invoke authentication. Although authorization begins only after authentication has occurred, this requirement is not circular. Authentication is not binary—users may be required to present minimal (such as a password) or more substantial (e.g. biometric or token-based) evidence of their identity, and authentication in most systems is not continuous—a user may authenticate, but walk away from the device or hand it to someone else. Hence authorization of an especially sensitive operation (for example, transferring a sum of money larger than a designated threshold) may require a re-authentication or a higher level of authentication. Some policies require two people to authorize critical transactions (“two-person rule”). In such cases, it is important to assure that the two individuals are indeed distinct; authentication by password is insufficient for this purpose. Finally, just as a common infrastructure (e.g., system library or back end) should be responsible for authenticating users, so too should common infrastructure be re-used for conducting authorization checks. STRICTLY SEPARATE DATA AND CONTROL INSTRUCTIONS, AND NEVER PROCESS CONTROL INSTRUCTIONS RECEIVED FROM UNTRUSTED SOURCES Co-mingling data and control instructions in a single entity, especially a string, can lead to injection vulnerabilities. Lack of strict separation between data and code often leads to untrusted data controlling the execution flow of a software system.
This is a general problem that manifests itself at several abstraction layers, from low-level machine instructions and hardware support to high-level virtual machine interpreters and application programming interfaces (APIs) that consume domain-specific language expressions. At lower layers, lack of strict segregation between data and control instructions can manifest itself in memory-corruption vulnerabilities, which in turn may permit attacker-controlled modifications of control flow or direct execution of attacker-controlled data as machine or byte-code instructions. At higher levels, co-mingling of control and data often occurs in the context of runtime interpretation of both domain-specific and general-purpose programming languages. In many languages, control instructions and data are often segregated using in-band syntactic constructs, such as quoting and escaping. If software assembles a string in a parseable language by combining untrusted data with trusted control instructions, injection vulnerabilities arise if the untrusted data are insufficiently validated or escaped. In that situation, an attacker may be able to supply data crafted such that when the resulting expression is processed, parts of the data are parsed and interpreted as control (rather than uninterpreted data, as intended). Experience has shown that use of injection-prone APIs incurs significant risk that injection vulnerabilities will indeed be introduced. Examples of such vulnerabilities include SQL query injection, cross-site JavaScript injection, and shell command injection. At lower levels, software platforms can utilize hardware capabilities to enforce separation of code and data. For example, memory access permissions can be used to mark memory that contains only data as non-executable and to mark memory where code is stored as executable, but immutable, at runtime.
Modern operating systems take advantage of such hardware features to implement security mechanisms that harden the entire software stack against multiple forms of attack. Software designs that ignore the principle of strict separation between data and code, or that blur the line that distinguishes one from the other, are inherently less secure because they undermine or directly invalidate low-level security mechanisms. When designing languages, compilers, virtual machines, parsers and related pieces of infrastructure, consider control-flow integrity and segregation of control and potentially untrusted data as important design goals. When designing APIs (both general-purpose or public interfaces as well as those that are domain- or application-specific), avoid exposing methods or endpoints that consume strings in languages that embed both control and data. Prefer instead to expose, for example, methods or endpoints that consume structured types that impose strict segregation between data and control information. When designing applications that rely on existing APIs, avoid APIs that mingle data and control information in their parameters, especially when those parameters are strings. If there is no choice in underlying APIs (for example, if the use of a relational database requires interfacing through a SQL query API), it is often desirable to encapsulate the injection-prone interface and expose its functionality to application code through a higher-level API that enforces strict segregation between control statements and potentially untrusted data. A design that relies on the ability to transform data into code should take special care to validate the data as fully as possible and to strictly constrain the set of computations that can be performed using data as an input language. Specific areas of concern include the eval function, query languages, and exposed reflection.
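The SQL-encapsulation advice above can be sketched with Python's built-in sqlite3 module: a parameterized query keeps untrusted input in the data channel, where it can never be parsed as SQL control, in contrast to ad-hoc string concatenation.

```python
import sqlite3

# Sketch: structured parameters enforce the data/control split that string
# concatenation destroys. The schema and find_user are illustrative only.
def find_user(conn, username):
    # UNSAFE alternative (injection-prone, shown only as a contrast):
    #   conn.execute("SELECT id, name FROM users WHERE name = '" + username + "'")
    # SAFE: the "?" placeholder passes username strictly as data.
    cur = conn.execute("SELECT id, name FROM users WHERE name = ?", (username,))
    return cur.fetchone()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO users (name) VALUES (?)", ("alice",))
```

With the parameterized form, a classic payload such as `alice' OR '1'='1` is just an unusual (and nonexistent) user name, not a query fragment.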
**Eval.** Many interpreted languages (such as Python, Ruby, and JavaScript) have an eval function that consumes a string consisting of syntax in that language and invokes the language’s interpreter on the string. Use of a language’s eval facility can permit the implementation of very powerful features with little code, and is therefore tempting. It is also very dangerous. If attackers can influence even part of a string that is evaluated and that substring is not appropriately validated or encoded, they can often execute arbitrary code as a result. **Query languages.** Ensuring that appropriate validation or escaping is consistently applied in all code that interfaces with the query API is a difficult and error-prone process; implementing that functionality repeatedly increases the risk of injection vulnerabilities. Use or develop an API that mediates between application code and raw query-language based interfaces (such as SQL, LDAP) and exposes a safer API. Avoid code that constructs queries based on ad-hoc string concatenation of fixed query stanzas with potentially untrusted data. **Exposed reflection.** Many programming languages provide facilities that allow programs to reflectively inspect and manipulate objects, as well as to invoke methods on objects. Use of reflection can be very powerful, and often permits the implementation of complex features using minimal code. For example, implementations of object serializers and deserializers used to marshal and unmarshal in-memory objects into and from a serialized form for persistence or network transfer can often be implemented very effectively using reflection. However, as with eval, use of reflection can be a risky design choice. Unless inputs processed with reflection are very carefully controlled, bugs can arise that may permit the attacker to execute arbitrary code in the receiving process. It is often preferable to consider alternative, safer designs. 
For example, consider a design based on code-generation: a code-generated, reflection-free object serializer/deserializer is restricted to behaviors allowed by the explicitly generated code. This code is in turn generated at build/compile-time, where the code-generation process cannot be influenced by malicious inputs. DEFINE AN APPROACH THAT ENSURES ALL DATA ARE EXPLICITLY VALIDATED Software systems and components commonly make assumptions about data they operate on. It is important to explicitly ensure that such assumptions hold: vulnerabilities frequently arise from implicit assumptions about data, which can be exploited if an attacker can subvert and invalidate these assumptions. As such, it is important to design software systems to ensure that comprehensive data validation actually takes place and that all assumptions about data have been validated when they are used. It is furthermore desirable to design software to make it feasible for a security reviewer to effectively and efficiently reason about and verify the correctness and comprehensiveness of data validation. Designing for verifiability should take into account that code typically evolves over time, resulting in the risk that gaps in data validation are introduced in later stages of the software life-cycle. **Design or use centralized validation mechanisms** to ensure that all data entering a system (from the outside) or major component (from another component of the same system) are appropriately validated. For example: - It is desirable for web applications to utilize a mechanism (such as a request filter or interceptor facility provided by the underlying web application framework) to centrally intercept all incoming requests, and to apply basic input validation to all request parameters. - Implementations of communication protocols might centrally validate all fields of all received protocol messages before any actual processing takes place. 
- Systems consuming complex data formats (such as XML documents, image file formats, or word processing file formats) might perform parsing, syntactic validation, and semantic validation of input files in a dedicated validation module whose output is a validated internal object representation of the input document. Parsers and validators must themselves be designed to robustly cope with potentially malicious or malformed inputs. **Transform data into a canonical form**, before performing actual syntactic or semantic validation. This ensures that validation cannot be bypassed by supplying inputs that are encoded in a transport encoding, or in a possibly invalid non-canonical form. Use common libraries of validation primitives, such as predicates that recognize well-formed email addresses, URLs, and so forth. This ensures that all validation of different instances of the same type of data applies consistent validation semantics. Consistent use of common validation predicates can also increase the fidelity of static analysis. Validation should be based on a whitelisting approach, rather than blacklisting. Input validation requirements are often state-dependent. For instance, in a stateful protocol, the set of valid values of a particular protocol message field (and hence the corresponding validation requirements) may depend on the protocol’s state. In such scenarios, it can be beneficial to design the protocol implementation’s input validation component to be itself state-aware. Explicitly re-validate assumptions “nearby” code that relies on them. For example, the entry points of a web application’s business-logic layer should explicitly re-state, and check as preconditions, all assumptions that it relies on. Liberal use of precondition checks in the entry points of software modules and components is highly recommended. 
Such precondition checks should never fail during execution of the deployed application, assuming the higher layers of the application have correctly validated external inputs. And as such, it is unnecessary for the business-logic layer to produce friendly error messages should such a precondition fail. Nevertheless, re-validation of data supplied to the business-logic layer provides two benefits: - It protects against vulnerabilities that arise from insufficient input validation in a higher layer (since the developer of the higher layer may not have a full understanding of all the requirements and assumptions of the lower layer), or from additional data-flows that were not considered during the initial security design (e.g., a data-load job that calls the business layer with data read from a file format used to exchange information between affiliated organizations, and which does not perform the same level of data validation as the web front end, based on the possibly invalid assumption that such files are “trusted”). - It permits local reasoning about the correctness of a component; since assumptions are explicitly checked and stated, a human reviewer or static analysis tool can truly assume the assumptions actually hold, without having to consider all (possibly very complex) data flows into the component. Use implementation-language-level types to capture assumptions about data validity. For example, an application that receives as an input a date and time in string representation should validate that this input indeed consists of a well-formed string representation of a date and time (for example, in ISO 8601 format). It is desirable to implement validation by parsing the input into a typed representation (such as a “date and time” type provided in many programming language’s standard libraries), and to use that typed representation (and not the original input string) throughout the program. 
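The typed-representation approach just described can be sketched in Python: the untrusted string is parsed into a `datetime` once, at the boundary, and everything after that point handles the typed value, never the raw string. The function name and the "no future timestamps" rule are illustrative assumptions.

```python
from datetime import datetime, timezone

# Sketch: validate-by-parsing at the boundary; downstream code receives a
# typed datetime, not a string. parse_event_time is a hypothetical entry point.
def parse_event_time(raw):
    ts = datetime.fromisoformat(raw)  # raises ValueError on malformed input
    if ts.tzinfo is None:
        raise ValueError("timestamp must carry an explicit UTC offset")
    # An additional semantic precondition beyond well-formedness:
    if ts > datetime.now(timezone.utc):
        raise ValueError("event time must not be in the future")
    return ts
```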
Downstream components are then relieved from having to consider the possibility that a provided value (such as a date) is syntactically invalid, and can focus on only checking additional preconditions that are not supported by the type’s contract (e.g., that a date is not in the future). Various problems arise from failure to address this security design principle. - Injection vulnerabilities can arise if untrusted data are used without validation in certain contexts, such as APIs and platform features that process and interpret strings with certain semantics. For example: - Using an externally controlled string as a component of a file path can lead to path traversal vulnerabilities, unless the application validates that the input represents a single path component (and, in particular, does not contain path separators). - If an externally controlled string is used in a context in an HTML document where it will be interpreted as a URL, a Cross-Site Scripting (XSS) vulnerability can arise unless it has been validated that the string represents a well-formed URL with a benign scheme (such as http: or https:, and, in particular, not javascript:, vbscript:, data:, or others). - It is generally preferable to perform data validation relevant to the prevention of injection vulnerabilities in the implementation of the API that is subject to injection vulnerabilities, or in a wrapper API in case the underlying API cannot be modified. See also the “Strictly Separate Data and Control Instructions, and Never Process Control Instructions Received from Untrusted Sources” section. - Attempting to validate data that are not in canonical form can allow validation to be bypassed.
For example, it is difficult to validate that an input string represents a single path component (free of path separator characters) unless the input has been fully decoded (with respect to transport encodings) and has been validated to be in a canonical character encoding—otherwise, it might be possible for an attacker to sneak a path separator past the input validation by representing it in an encoded form (for example, %-encoding commonly used in web applications), or in the form of a non-canonical character encoding (for example, a non-canonical UTF-8 encoding). • In applications implemented in non-memory safe languages such as C, failing to carefully validate external inputs can result in memory corruption vulnerabilities such as buffer overflows, unbounded memory reads, null-terminated string issues, and so on. • Accepting inputs from untrusted sources without enforcement of an upper bound on data size can result in resource exhaustion. • In general, aside from memory corruption and resource exhaustion issues, data that are not validated cause security issues primarily when they are used in a way that influences control flow. Data that are simply being copied around (e.g., received from an external input, then stored in a database, and later displayed in UI) are generally harmless. Problems arise if the application inspects the data and makes control flow decisions based on the data's value. This most immediately applies to data that are used in contexts where they are interpreted as instructions or control, leading to injection vulnerabilities as discussed earlier. More generally however, control-flow dependencies on untrusted, non-validated data can lead to state corruption vulnerabilities, or execution of state transitions that the programmer did not intend or consider. Typically, security vulnerabilities in this category are highly domain- and application-specific, and hence are difficult to reason about and detect by general-purpose tools. 
Careful, state-dependent validation of inputs can go a long way toward mitigating this risk. USE CRYPTOGRAPHY CORRECTLY Cryptography is one of the most important tools for building secure systems. Through the proper use of cryptography, one can ensure the confidentiality of data, protect data from unauthorized modification, and authenticate the source of data. Cryptography can enable many other security goals as well. Cryptography, however, is not a panacea. Getting cryptography right is extremely hard. We list common pitfalls below. • **Rolling your own cryptographic algorithms or implementations.** Designing a cryptographic algorithm (including protocols and modes) requires significant and rare mathematical skills and training, and even trained mathematicians sometimes produce algorithms that have subtle problems. There are also numerous subtleties with implementing cryptographic algorithms. For example, the order of operations involved when exponentiating a number—something common in cryptographic operations—can leak secret information to attackers. Standard algorithms and libraries are preferable. • **Misuse of libraries and algorithms.** Even when using strong libraries, do not assume that just using the libraries will be sufficient. There have been numerous instances in which standard libraries were used, but the developers using the libraries made incorrect assumptions about how to use the library routines. In other situations, developers don’t choose the right algorithm or use the algorithm incorrectly. For example, an encryption scheme may protect the confidentiality of data, but may not protect against malicious modifications to the data. As another example, if an algorithm requires an initialization vector (IV), then choosing an IV with certain properties may be required for the algorithm to work securely. Understanding the nuances of algorithm and library usage is a core skill for applied cryptographers.
• **Poor key management.** When everything else is done correctly, the security of the cryptographic system still hinges on the protection of the cryptographic keys. Key management mistakes are common, and include hard-coding keys into software (often observed in embedded devices and application software), failure to allow for the revocation and/or rotation of keys, use of cryptographic keys that are weak (such as keys that are too short or that are predictable), and weak key distribution mechanisms.

• **Randomness that is not random.** Confusion between statistical randomness and cryptographic randomness is common. Cryptographic operations require random numbers that have strong security properties. In addition to obtaining numbers with strong cryptographic randomness properties, care must be taken not to re-use the random numbers.

• **Failure to centralize cryptography.** Numerous situations have been observed in which different teams within an organization each implemented their own cryptographic routines. Cryptographic algorithms often don’t interact nicely. Best practices indicate getting it “right” once and reusing the component elsewhere.

• **Failure to allow for algorithm adaptation and evolution.** For more on this, please see “Design for changes in the security properties of components beyond your control” in the “Be Flexible When Considering Future Changes to Objects and Actors” section.

Cryptography is so hard to get right that it **always** makes sense to work with an expert if you can. Note that expertise in applied cryptography is not the same as being a mathematician and having a mathematical understanding of cryptography. At the highest level, make use of proven algorithms and libraries, but realize that just the use of such things does not guarantee security—it is easy to accidentally misuse these things.
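Two of the pitfalls above (confidentiality protection alone does not prevent modification, and randomness must be cryptographic) can be illustrated with a stdlib sketch. This is not a complete scheme: in a real encrypt-then-MAC design the tag is computed over the ciphertext with a key distinct from the encryption key, and key storage is handled elsewhere; the function names are illustrative.

```python
import hashlib
import hmac
import secrets

def new_key() -> bytes:
    # Key material must come from a cryptographically strong source:
    # the `secrets` module (or os.urandom), never the seedable
    # `random` module, which is only statistically random.
    return secrets.token_bytes(32)

def tag(key: bytes, message: bytes) -> bytes:
    """Integrity tag over a message. In encrypt-then-MAC, `message`
    would be the ciphertext."""
    return hmac.new(key, message, hashlib.sha256).digest()

def verify(key: bytes, message: bytes, received: bytes) -> bool:
    # compare_digest avoids leaking the mismatch position via timing.
    return hmac.compare_digest(tag(key, message), received)
```

A tampered message fails verification even though nothing about its confidentiality changed, which is exactly the property plain encryption does not give you.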
Have a cryptography expert work with your designers to provide an API abstraction around a strong library, so that your developers are not making decisions on algorithms and cipher modes, and so that if you need to change algorithms behind that abstraction layer, you can.

IDENTIFY SENSITIVE DATA AND HOW THEY SHOULD BE HANDLED

Data are critical to organizations and to users. One of the first tasks that systems designers must do is identify sensitive data and determine how to protect it appropriately. Many deployed systems over the years have failed to protect data appropriately. This can happen when designers fail to identify data as sensitive, or when designers do not identify all the ways in which data could be manipulated or exposed.

Data sensitivity is context-sensitive. It depends on many factors, including regulation (which is often mandatory), company policy, contractual obligations, and user expectation. Note that sensitive data are not always user-generated input. Rather, they include data computed from scratch, data coming from external sensors (for example, geolocation and accelerometer data on mobile devices), cryptographic material, and Personally Identifiable Information (PII).

Creating a policy that explicitly identifies different levels of classification is the first step in handling data appropriately. It is important to factor all relevant considerations into the design of a data sensitivity policy. For example, there are numerous regulations that system designers must consider, ultimately creating a unified approach that consistently addresses them all. A number of examples may help to flesh this out: various jurisdictions impose regulations on how personal data should be handled (such as medical records); the EU Data Protection Directive differs from the regulations in the United States; and PCI compliance issues, though not regulatory, directly affect data protection requirements. Not all data protection requirements are the same.
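The classification-policy step can be sketched as a small lookup table. Every level name and control listed here is an assumption for illustration, not a compliance checklist; the point is only that classification is explicit and that handling requirements follow mechanically from the level.

```python
from enum import IntEnum

class Sensitivity(IntEnum):
    """Illustrative levels; a real policy is driven by regulation,
    contract, and company policy."""
    PUBLIC = 0
    INTERNAL = 1
    CONFIDENTIAL = 2   # e.g., PII, contractual data
    RESTRICTED = 3     # e.g., medical records, cryptographic material

# Hypothetical handling rules per level.
HANDLING = {
    Sensitivity.PUBLIC:       {"encrypt_at_rest": False, "audit_access": False},
    Sensitivity.INTERNAL:     {"encrypt_at_rest": False, "audit_access": True},
    Sensitivity.CONFIDENTIAL: {"encrypt_at_rest": True,  "audit_access": True},
    Sensitivity.RESTRICTED:   {"encrypt_at_rest": True,  "audit_access": True},
}

def controls_for(level: Sensitivity) -> dict:
    return HANDLING[level]
```

Using an ordered enum also lets code answer questions such as "is this at least CONFIDENTIAL?" when data of mixed levels are combined.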
For some data, confidentiality is critical. Examples include financial records and corporate intellectual property. For data on which business continuity or life depends (for example, medical data), availability is critical. In other cases, integrity is most important. Spoofing or substituting data to cause a system to misbehave intentionally are examples of failures to ensure data integrity. Do not conflate confidentiality alone with data protection.

Technical data sensitivity controls that a designer might consider include access control mechanisms (including file protection mechanisms, memory protection mechanisms, and database protection mechanisms), cryptography to preserve data confidentiality or integrity, and redundancy and backups to preserve data availability.

Data sets do not exist only at rest, but in transit between components within a single system and between organizations. As data sets transit between systems, they may cross multiple trust boundaries. Identifying these boundaries and rectifying them with data protection policies is an essential design activity. Trust is just as tricky as data sensitivity, and the notion of trust enclaves is likely to dominate security conversations in the next decade.

Policy requirements and data sensitivity can change over time as the business climate evolves, as regulatory regimes change, as systems become increasingly interconnected, and as new data sources are incorporated into a system. Regularly revisiting and revising data protection policies and their design implications is essential.

ALWAYS CONSIDER THE USERS

Almost every software system in existence today interacts in one way or another with human beings. The users of a software system range from those in charge of fielding, configuring, and maintaining it operationally to those who actually use it for its intended purpose, the system’s end users. The security stance of a software system is inextricably linked to what its users do with it.
It is therefore very important that all security-related mechanisms are designed in a manner that makes it easy to deploy, configure, use, and update the system securely. Remember, security is not a feature that can simply be added to a software system, but rather a property emerging from how the system was built and is operated. The way each user interacts with software is dictated not only by the design and implementation decisions of its creators but also by the cognitive abilities and cultural background of its users. Consequently, it is important that software designers and architects consider how the physical abilities, cultural biases, habits, and idiosyncrasies of the intended users of the system will impact its overall security stance. It is also a truism that during the life of any moderately useful system, a few users will discover capabilities that are outside the intentions of the system’s designers and builders. Some of those capabilities may very well have significant security implications. Usability and user experience considerations are often the most important factors ensuring that software systems operate in a secure manner. Designing systems that can be configured and used in a secure manner with easy-to-use, intuitive interfaces and sufficiently expressive, but not excessive, security controls is crucial. However, it is dangerous to assume that every intended user of the system will be interested in security—or will even be well-meaning. The challenge to designers and architects lies in creating designs that facilitate secure configuration and use by those interested in doing so, designs that motivate and incentivize secure use among those not particularly interested in software security, and designs that prevent or mitigate abuse from those who intend to weaken or compromise the system. 
Failing to address this design principle can lead to a number of problems:

- Privilege escalation may result from a failure to implement an authorization model that is sufficiently tied to the authenticated entity (user) in all cases. Escalation failures may also occur when higher-privileged functions are not protected by the authorization model and where assumptions about inaccessibility are incorrect.

- A particular failure of appropriate authorization can allow a breach of the intended authorization and isolation between users such that one user may access another user’s data.

- When designers don’t “remember the user” in their software design, inadvertent disclosures by the user may take place. If it is difficult to understand the authorization model, or difficult to understand the configuration for visibility of data, then the user’s data are likely to be unintentionally disclosed.

- Default configurations that are “open” (that is, default configurations that allow access to the system or data while the system is being configured or on the first run) assume that the first user is sophisticated enough to understand that other protections must be in place while the system is configured. Assumptions about the sophistication or security knowledge of users are bound to be incorrect some percentage of the time. This is particularly true at the startup and initialization of the system.

- If the security configuration is difficult or non-intuitive, the result will be an inability to configure the product to conform to the required security policy.

- Designers sometimes fail to account for the fact that authenticated and properly authorized users can also be attackers! This design error is a failure to distrust the user, resulting in authorized users having opportunities to misuse the system.

- When security is too hard to set up for a large population of the system’s users, it will never be configured, or it will not be configured properly.
This is especially dangerous where the system’s defaults are “open” or insecure. For example, if there are too many clicks required for the user to get from the main page or screen to a security control panel, users are unlikely to persist through the labyrinth of clicks.

- Failure to consider the needs of programmers who must code to an API will cause the intended automation patterns to be missed. Programmers are a class of users who also require that the interface they consume be intuitive enough to guide them to correct usage patterns. Because a misunderstanding of an API occurs within the program that uses it, problems may not be readily apparent (appearing perhaps only obliquely, within log files of the ongoing activity), making the problem difficult to debug; this failure can be one of the most difficult to find and fix. Additionally, if the API must be changed, many if not all consumers of the API may be forced into further changes, thus spreading the original failure throughout the ecosystem.

- Failure to consider the possibility of “collateral damage” that can occur from included or embedded software or data in the user interface may cause an inadvertent or unintentional leak of personal data. Consider the capture of a bystander in a personal photo taken in a public place. Even if that passerby is not a user of the software, the bystander’s privacy may be compromised if that image is posted online later.

- Failure to consider the user’s data during setup, use, and revocation/termination may cause unintended data to be gathered and stored against the users’ wishes, or may hold onto data that should have been removed completely after the user has stopped using the service and closed his or her account. For example, when a user decides to stop using the system, is the private data easy for the user to destroy?
- Failure to consider the many different classes of users (blind users, language proficiency, children, people with different mental capabilities, etc.) will exclude those classes of users from the software, or, alternatively, make the software too difficult to use effectively.

Most importantly, when designing the security of the system, failure to consider how security is set up and used from the perspective of users with different capabilities and understandings typically causes those users to set up and make inappropriate use of the software’s security.

Stepping back, our biggest recommendation is the following: Always consider the users, and any other stakeholders, in the design and evaluation of systems. There are numerous factors to consider, and there are often trade-offs; for example, improving the system with respect to one user value (such as privacy or usability) can negatively affect another user value (like ease of accessing the relevant information).

In addition to the general recommendations given above, there are numerous artifacts designers can consider in order to address specific problems mentioned earlier. The decision whether to implement these specific recommendations will, however, depend on the system in question. For example, in some cases we recommend not putting security-relevant decisions in the hands of all users, as they may not possess the knowledge or context to evaluate those decisions. Similarly, because users may not know how to explore or choose between a variety of options, we recommend making the easiest and most common usage scenario also secure—a notion often referred to as “secure by default.” When users do desire to change security settings, we suggest making it as easy as possible for them to find the relevant settings.
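The "secure by default" recommendation can be sketched as a settings object whose zero-configuration state is the restrictive one. Every field name and default below is a hypothetical example for a social application, not a prescribed schema; the point is that a user who never opens the settings screen still gets the private configuration, and sharing more requires an explicit opt-in.

```python
from dataclasses import dataclass

@dataclass
class SharingSettings:
    """Hypothetical privacy settings. Doing nothing is safe:
    every default is the restrictive choice."""
    profile_public: bool = False
    posts_visible_to: str = "friends"     # not "everyone"
    searchable_by_email: bool = False
```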
Often there is value in allowing users to test different security and privacy settings and see the results in order to understand the impact of the changes (for example, on social networks, good interfaces allow users to see their privacy-settings changes from the perspective of other users). On the other hand, it might be preferable not to give the user a choice at all; for example, if a default secure choice does not have any material disadvantage over any other; if the choice is in a domain that the user is unlikely to be able to reason about; or if one user’s choice may significantly affect the system’s or the other user’s state, including security.

Designers must also consider the implications of user fatigue (for example, the implications of having a user click “OK” every time an application needs a specific permission) and try to design a system that avoids user fatigue while also providing the desired level of security and privacy to the user.

The field of user-focused security is rich with tensions. As a trivial example, so-called “secure” password selection strategies are also well known to lead to passwords that are hard for users to remember. A more complex example of these inherent tensions would be the need to make security simple enough for typical users while also giving sophisticated or administrative users the control that they require.

We encourage designers to also consider other resources on designing security systems with stakeholders in mind. By fully considering all the relevant stakeholders, designers have the opportunity to create systems that are both secure and usable, systems that will see adoption, and systems that will be compatible with the values of users and other people impacted by them.

UNDERSTAND HOW INTEGRATING EXTERNAL COMPONENTS CHANGES YOUR ATTACK SURFACE

It is unlikely that you will develop a new system without using external pieces of software.
In fact, when adding functionality to an existing system, developers often make use of existing components to provide some or all of that new functionality. In this context, external components refer to software “not written here,” such as:

- Software procured as off-the-shelf components, platforms, and applications
- Third-party open source or proprietary libraries
- Widgets and gadgets added or loaded at runtime as part of a web project
- Access to some functionality provided by the component that you plan to take advantage of (such as accessing a web service that provides federated authentication)
- Software developed by a different team within your organization
- Software that your team developed at a previous point in time (perhaps at a time when the security stance was not as mature as it is now)

These components may be included as binaries, libraries, and source code, or they may exist simply as APIs. It is a common adage of software security that whenever possible, functionality should be achieved by the reuse of tried-and-true pieces of previously tested and validated software, instead of developing from scratch every time. The important distinction is that the software being newly included has actually been tried as well as tested and found to stand up to your current standards of software security.

The decision to use-rather-than-build means that the software as a whole inherits the security weaknesses, security limitations, maintenance responsibility, and the threat model of whatever you are including. This inheritance can amount to a deficit of security, which must be solved, mitigated, or accounted for when the system is finished. The system’s “threat model” is a representation of the security posture of the system when all possible threats are taken into consideration, their mitigations established, and the vulnerabilities identified.
Make sure you allocate time in your software development methodology to consider the security impact on your system when including an external component:

- How does the external component change the threat model of the entire system? Does it add to the attack surface? Does it modify entry points in the system that had already been considered in its own threat model?
- Were new features, capabilities, or interfaces added even though you are not using them? Can those unused features be disabled?
- Does the external component being included also include other external components with their own security weaknesses?
- Have you obtained the external component from a known, trusted source?
- Does the external component provide security documentation that may help you better understand its threat model and the security implications of its configuration?

You must assume that incoming external components are not to be trusted until appropriate security controls have been applied, in order to align the component’s attack surface and security policy with ones that meet your requirements.

Examples of potential security issues with third-party components include the following:

- Loading a library with known vulnerabilities (CWE, CVE, etc.)
- Including a library with extra features that entail security risks
- Reusing a library—yours or a third party’s—that no longer meets current software security standards
- Using a third-party service and hoping thereby to pass responsibility of security to that service
- Configuration mistakes in the security of a library—e.g., failing to change insecure defaults
- Library making outbound requests to the maker’s site or to some partner of theirs
- Library receiving inbound requests from some external source
- A single external component including other components, causing multiple levels of inclusion (“recursion”)
- Including pieces of functionality that offer unknown interfaces into the system—for example, a CLI for configuration of an included daemon, a panel or admin mode for a Web component, a hardcoded set of credentials for an authentication/authorization module, a debugging interface or backdoor, or the like.

At a minimum, consider the following:

- Isolate external components as much as your required functionality permits; use containers, sandboxes, and drop privileges before entering uncontrolled code.
- When possible, configure external components to enable only the functionality you intend to use.
- If you include functionality that you do not intend to use, you must consider how that included functionality changes your security posture (attack surface, inherited debt, threats, etc.), and therefore increases the security you must implement to account for the change.
- If you cannot configure the security properties of the component to align with your security goals, find another library, or document that you are accepting the risk and inform relevant stakeholders of your decision.
- Likewise, if the element to be included cannot realize your security objectives, find a different element, or document that you are accepting the risk and inform relevant stakeholders of your decision.
- Validate the provenance and integrity of the external component by means of cryptographically trusted hashes and signatures, code signing artifacts, and verification of the downloaded source. If no integrity mechanism is available, consider maintaining a local mirror of the library’s source.
- Understand the risk of dynamically including components such as JavaScript from external sources. If the external host is compromised, you may be including attacker-controlled JavaScript.

• Identify and follow sources that track or publish security-related information regarding the external components you consume: bug repositories, security-focused mailing lists, CVE databases, and so forth.

• Make sure that the development team members charged with responding to security events are aware of all external components used so those can be included in their threat intelligence collection efforts.

• Maintain an up-to-date list of consumed external components and at a pre-established cadence verify that it matches the versions included in your product, as well as that those are the latest known-secure versions available for each external component.

• Maintain a healthy distrust of external components:
  • Whenever possible, authenticate the data-flow between your system and external components.
  • Consider all data coming from an external component to be tainted, until proven valid (see “Define an approach that ensures all data are explicitly validated” for additional information).
  • Be sure to understand and verify the default configuration of the external component. For example, if you are including an external crypto library, understand what values are used by default unless you change them; for example, sources of entropy, algorithms, and key lengths.
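The provenance check described above (verifying a downloaded component against a trusted hash) can be sketched as follows. The pinned digest is assumed to come from an out-of-band, trusted channel such as the project's signed release notes; a real pipeline would also verify a signature, which this sketch omits, and the function name is illustrative.

```python
import hashlib
import hmac

def component_matches(data: bytes, expected_sha256_hex: str) -> bool:
    """Return True if the downloaded bytes hash to the digest
    pinned out of band; compare_digest keeps the comparison
    constant-time."""
    actual = hashlib.sha256(data).hexdigest()
    return hmac.compare_digest(actual, expected_sha256_hex)
```

A build step would call this before unpacking the component, failing the build on mismatch rather than proceeding with an unverified artifact.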
When consuming an external component such as a Web server, understand its defaults concerning admin modes, ports where the processes will be listening, and assumptions concerning how it interfaces with the operating system and with your own software.

• Document everything. If you change a default, make sure that there is documentation as to why the decision was made to change it. If you include an external component, create documentation around the process used to choose the component, the provenance of the component, the verification it went through, and most importantly any security-relevant assumption made about it. This will make it easier to move forward when versions change, or when you consider the use of an alternative external component.

When changing the build defaults of external components, configuration options for deployment, or source code, automate the procedure using your version control system or a patch file (numerous tools, including make, sed, and patch, are available for this task depending on your environment). Then include the automated procedure in your build workflow: bring in the pristine component, apply your modifications, and use it for your build. The automation will help to maintain consistency between builds, and some tools include calling modes or executables that validate their own configurations; leverage those into your process as well to know when your modifications need adjustment due to a version change in the external component or some other similar event.

• Design for flexibility. Sometimes an external component becomes too risky, or its development is abandoned, or the functionality it offers is surpassed by another external component. For those cases, you will want to design your system so that external components can be easily replaced.

BE FLEXIBLE WHEN CONSIDERING FUTURE CHANGES TO OBJECTS AND ACTORS

Software security must be designed for change, rather than being fragile, brittle, and static.
During the design and development processes, the goal is to meet a set of functional and security requirements. However, software, the environments running software, and threats and attacks against software all change over time. Even when security is considered during design, or a framework being used was built correctly to permit runtime changes in a controlled and secure manner, designers still need to consider the security implications of future changes to objects and actors. Designers need to understand how change influences security considerations under many circumstances. There will be changes at runtime, in the form of configuration changes, enabling and disabling of features, and sometimes dynamic loading of objects. The need for security consideration will appear during testing, since all possible variations of states will need to be verified to guarantee that they uphold the security posture of the system (among, of course, other tested behavior). There will be changes at deployment when permissions, access control and other security-related activities and decisions need to take place. The addition of continuous integration processes creates a requirement for security flexibility, as changes to systems are pushed automatically and at ever shorter periodicity. Meanwhile, entropy increases in every way possible. Threats change over time. Embedded components (that is, components that are not easily reachable) will inevitably be found to be vulnerable to attacks, researchers will discover new ways to break into systems, and proprietary code will reveal itself to contain vulnerabilities. Any deployed system can eventually come under attack and potentially be compromised. And, because threats change over time, even a deployed system that has resisted attacks for a long time may eventually succumb to an attack and be compromised. Like threats, the environment and conditions under which the system exists will also change. 
It is a different proposition to maintain security for a system with 10 users than 10 million users—not at all a simple matter of linear scale. A system that works well in a given configuration might find itself exposed to new threats by virtue of changes to that environment; for example, the addition of a mobile interface to a legacy system. For these reasons, secure design keeps flexibility in mind.

**Design for secure updates.** It is easier to upgrade small pieces of a system than huge blobs. Doing so ensures that the security implications of the upgrade are well understood and controlled. For example, a database engine upgrade may involve new access control defaults or rewrites of the controls such that previously tight permissions loosen, or create new default users that need to be disabled. If the update happens with the same change operation performed on the web server, the amount of change and adjustment to a dynamic, already-configured system may be overwhelming to track and assure. Have the system being upgraded verify the integrity and provenance of upgrade packages; make use of code signing and signed manifests to ensure that the system only consumes patches and updates of trusted origin. This is a non-trivial design consideration, as there are many details in process and implementation that may break if poorly thought out beforehand. Finally, consider the maintenance burden placed on administrative personnel. As complexity increases, there is an increasing likelihood of making mistakes.

**Design for security properties changing over time; for example, when code is updated.** If the system ran in a small environment yesterday, and local users and password storage were sufficient, tomorrow the system may be changed to make use of an alternate identity management solution.
In that case, the migration of previous users (and/or the correct coexistence of the local and remote users) would need to happen in a way that does not compromise security; for example, there should be consideration of user ID collisions such as when a remote and a local user have the same username.

**Design with the ability to isolate or toggle functionality.** It should be possible to turn off compromised parts of the system, or to turn on performance-affecting mitigations, should the need arise. Not every vulnerability identified can be readily mitigated within a safe time period, and mission-critical systems cannot simply be taken offline until their vulnerabilities are addressed. For example, in certain environments a stateful firewall may impact performance overall, and so it is turned off – until a vulnerability that may be stopped by turning it on is identified, in which case it becomes worthwhile to bear the performance cost by turning the firewall on until a proper patch can be developed, tested, and applied.

**Design for changes to objects intended to be kept secret.** History has shown us that secrets such as encryption keys and passwords get compromised. Keeping secrets safe is a hard problem, and one should be prepared to have secrets replaced at any time and at all levels of the system. This includes several aspects:

• A secure way for users to change their own passwords, including disallowing the change until the old password has been successfully presented by the user.

• Carefully considering any kind of “password recovery” mechanism. It is better to give the forgetful user a way to reset their password after verification via a parallel mechanism (like email) than to provide the password in clear text, which can be subverted or compromised in any number of ways.
• A secure and efficient way to replace certificates, SSH keys, and other keys or authentication material that systems use, providing clear and explicit logs of those events (without including the secrets in any form!) in a forensically verifiable way (for example, external log servers and checksums).

• Understanding how the key change affects data stored at rest. For example, if data are encrypted on a file system or in a database and an administrator needs to change the encryption key, is it better to decrypt all data using the current key and re-encrypt that data with the new key, or to maintain versions of encrypted data and encryption keys?

**Design for changes in the security properties of components beyond your control.** Tech marches on. A cipher that was considered secure yesterday may be found to be less secure today, either by the discovery of an active attack against it or by improvements in hardware and software able to defeat that security control. In the same way, an external component’s security properties or related characteristics may change over time, as when an Open Source project is abandoned and its code not actively maintained, or when its license changes, forcing users to abandon it. In these cases it is important to design “agility,” the capability to change layers and algorithms as needed, into the system. Good examples include Java’s capability to change crypto providers without recompilation of classes, and Apache’s capability of specifying a list of ciphers it is willing to negotiate with a client. Many hours of development and much grief over security flaws have been avoided due to these capabilities. Good design allows for intermediate layers of abstraction between code and imported external APIs, so that developers can change components providing needed functionality without changing much of the code.
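The "agility" idea above can be sketched as a thin abstraction between call sites and the algorithm choice. The profile names here are illustrative assumptions; a real system would apply the same pattern to ciphers, key lengths, and providers, and would drive the mapping from configuration rather than a hardcoded table.

```python
import hashlib

# Application-facing profiles mapped to concrete algorithms.
# Swapping the algorithm is a one-line change here, not an edit at
# every call site; "legacy" is retained only to verify old artifacts.
_DIGESTS = {
    "default": hashlib.sha256,
    "legacy": hashlib.sha1,
}

def fingerprint(data: bytes, profile: str = "default") -> str:
    return _DIGESTS[profile](data).hexdigest()
```

When the default algorithm must change (say, to SHA-3), only the registry entry changes; callers of `fingerprint` are untouched.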
**Design for changes to entitlements.** Systems are sometimes designed in which support staffers have privileged access to certain parts of the system in order to perform their job. However, the support staff’s access to various system components likely changes over time. Individuals leave the organization, job functions change, they go on extended leaves or sabbaticals, system functionality changes, and so on. The system must have a way to revoke access to areas when a user no longer has a need to access them. This revocation of access should be part of an existing auditing mechanism in which access to critical system components is regularly reviewed to confirm that those individuals with access still require that level of access.

As stated in the mission statement, the IEEE Computer Society Center for Secure Design will provide guidance on:

- Recognizing software system designs that are likely vulnerable to compromise.
- Designing and building software systems with strong, identifiable security properties.

This document is just one of the practical artifacts that the Center for Secure Design will deliver.

Interested in keeping up with Center for Secure Design activities? Follow @ieeeCSD on Twitter, catch up with us via cybersecurity.ieee.org, or contact Kathy Clark-Fisher, Manager, New Initiative Development (kclark-fisher@computer.org).

About IEEE Computer Society

IEEE Computer Society is the world's leading computing membership organization and the trusted information and career-development source for a global workforce of technology leaders. The Computer Society provides a wide range of forums for top minds to come together, including technical conferences, publications, and a comprehensive digital library, unique training webinars, professional training, and the TechLeader Training Partner Program to help organizations increase their staff’s technical knowledge and expertise.
To find out more about the community for technology leaders, visit http://www.computer.org.
Mental imagery and software visualization in high-performance software development teams

Marian Petre
Centre for Research in Computing, Open University, Milton Keynes MK7 6AA, UK
m.petre@open.ac.uk | phone: +44 1908 65 33 73 | fax: +44 1908 65 21 40

To appear in: Journal of Visual Languages and Computing
DOI: doi:10.1016/j.jvlc.2009.11.001 | PII: S1045-926X(09)00075-5 | Reference: YJVLC 465
Received 28 September 2006; revised 16 October 2009; accepted 9 November 2009
Accepted Manuscript © 2010 Elsevier Ltd

Cite this article as: Marian Petre, Mental imagery and software visualization in high-performance software development teams, Journal of Visual Languages and Computing, doi:10.1016/j.jvlc.2009.11.001

ABSTRACT

This paper considers the relationship between mental imagery and software visualization in professional, high performance software development.
It presents overviews of four empirical studies of professional software developers in high-performing teams: (1) expert programmers’ mental imagery, (2) how experts externalize their mental imagery as part of teamwork, (3) experts’ use of commercially available visualization software, and (4) what tools experts build themselves, how they use the tools they build for themselves, and why they build tools for themselves. Through this series of studies, the paper provides insight into a relationship between how experts reason about and imagine solutions, and their use of and requirements for external representations and software visualization. In particular, it provides insight into how experts use visualization in reasoning about software design, and how their requirements for the support of design tasks differ from those for the support of other software development tasks. The paper draws on theory from other disciplines to explicate issues in this area, and it discusses implications for future work in this field.

*Keywords:* Software visualization, empirical studies, high-performance programming, teamwork

### 1. Introduction: the relationship between software visualization and human tasks

Richard Hamming wrote that “*The purpose of computing is insight, not numbers*” [1]. Arguably, the role of software visualization lies in supporting and promoting insight about software – and hence supporting human reasoning, both individual and shared. In order to do so, software visualization must be consistent with human perception and cognition, as well as with the human tasks that frame them. Hence, it is difficult to consider software visualization without also considering how effective software developers reason about software, and how that human capability might be supported and extended using visual cues and devices [2].
This paper considers the relationship between reasoning about software (including mental imagery employed during design and communicating design ideas) and software visualization in professional, high performance software development. Software visualization has its roots in the earliest software development practice, when programmers watched the lights on the computer’s control panel and listened to the sounds of disk access to try to understand what the program was doing in the absence of other perceivable cues. Software lacks tangibility and visibility (e.g., What does a compiler look like? What is the size, weight and shape of an operating system?). Code may be manifest, but how code works must be discovered and understood. Software visualization is concerned with using visual or graphical techniques (sometimes enhanced by audio) to provide perceivable cues to potentially obscure aspects of software systems, in order to reveal patterns and behaviours that inform software comprehension through all stages of software development. This paper examines the relationship between software visualization and effective software developers’ reasoning specifically during design and generation. Goel [3] argues, in the context of external representation, that there is a principled distinction to be made between design and non-design problems. That distinction is pertinent here. There is substantial take-up of software visualization in the comprehension and maintenance of legacy systems, for example. Yet it is not clear that visualizations designed to support comprehension and debugging are effective in supporting design and initial development. It is difficult to consider software visualization without also considering the task it is meant to support, and it is unlikely that any single software visualization tool can address all software development tasks simultaneously [4]. Each task requires a different understanding – and a different view – of the software. 
That view must be expressed through visual cues and devices that bridge effectively between properties of software and human reasoning. The challenge is to identify the most appropriate visualization for a given task. This paper focuses on experts and their high-performing teams, on the basis that they have a track record of effective reasoning and effective communication about software. The paper looks across a series of empirical studies of expert and high-performing team behaviour and reasoning to consider whether there are lessons from the experts’ own imagery and tools that might inform software visualization to support software design and generation. Each study addresses a different facet of the continuum between what experts conceive in their minds and what they and their teams create in code. Hence, in describing each study in turn, the paper steps from expert mental imagery, through externalization of that imagery in communicating design concepts, to visualization tools experts use (or do not use) and what characterizes them. It considers which tasks experts choose to support with visualizations during software design and generation, and the nature of the visualizations they use to address them. The purpose is to articulate what expert software developers visualize in their minds and how they represent their conceptions externally, in order to consider what roles visualization might play in supporting their reasoning and communication during development tasks. The sections are organized as follows: a distillation of relevant findings from the literatures on mental imagery, expert problem-solving, memory and schema; a description of the context common to all of the studies; descriptions of each of the empirical studies; a discussion of the implications of the findings for software visualization; and a conclusion.

### 2. Background: relevant observations from the literature

Software visualization has developed apace with visual techniques, from early ‘pretty printers’, which use typographic enhancements such as indentation and colour coding (e.g., [5]), to 3D ‘landscapes’ representing the structure of large software systems, to multiple, linked, dynamic visualizations of the interaction of system components at runtime. The literature provides a number of good overviews (e.g., [6], [7], [8], [9]) and taxonomies (notably, [10], [4]). Various visualization tools are available on the web, on individual websites, and in compilations such as ‘SCG Smallwiki CodeCrawler: A non-exhaustive list of software visualization tools’ [11]. Yet, despite developments in technology and techniques, questions remain about how well visualizations actually support human reasoning and tasks – and about how well they relate to the mental imagery that proficient software developers use in reasoning and communicating about software. There is widespread anecdotal evidence (e.g., Lammers’s interviews of well-known programmers [12]) that programmers make use of visual mental images and mental simulations when they are designing programs. There are broad, well-established literatures on mental imagery, expert problem-solving, memory and schema that, although they do not specifically address software design, can contribute to our thinking about imagery and representation in this context. Although there is apparently little empirical research on programmers’ mental imagery per se, there is a well-established literature on expert problem solving which, taken as a body, suggests a pattern of mental structure building preliminary to efficient externalisation of solutions.

**Lesson 1:** Visualizations should provide appropriate abstractions, while supporting detailed enquiry and systematic exploration – the implication is that visualizations should be explorable and manipulable at different levels.
**Rationale:** Expert problem solvers (across domains) differ from novices in both their breadth and organisation of knowledge; experts store information in larger chunks organized in terms of underlying abstractions (see [13] and [14] for reviews). When categorizing problems, experts sort in terms of underlying principles or abstract features (whereas novices tend to rely on surface features) (e.g., [15], [16]). Experts form detailed conceptual models incorporating abstract entities rather than concrete objects specific to the problem statement [17]. Their models accommodate multiple levels and are rich enough to support mental simulations [18], [19]. There is a recurrent theme of overview and abstraction in expert reasoning which has relevance for tool-building, for which it raises the issue: how can a tool show the essence rather than the superficial?

**Lesson 2:** Visualizations should offer *relevance*: the visualization should map to appropriate referents and schema.

**Rationale:** It was Bartlett [20] who first suggested that memory takes the form of schema which provide a mental framework for understanding and remembering. In general, the term ‘schema’ is used to indicate a form of mental template which organizes cognitive activity such as memory, reasoning, or behaviour. Key characteristics or elements in the schema have ‘slots’ in the mental template: the slots represent the range of values acceptable for those key characteristics. Schema may be of varying levels of complexity and abstraction; their importance is in providing structure and economy. Chi *et al.* [21] suggest that the nature of expertise is due largely to the possession of schemas that guide perception and problem solving – i.e., experts have more and better schemas than novices. Simon [22] observes that, when a task is ill-defined, users resort to pre-existing concepts: stereotypes, schemata, or other knowledge.
Cole and Kuhlthau [23] see the use of schemata as fundamental to sense-making at the outset of problem solving: the problem-solver invokes a schema or model of the problem in order to create a frame of reference and hence to identify the initial problem state. On one hand, the use of existing schemata enables the user to take some action in unfamiliar or ill-defined tasks. On the other hand, the use of existing schemata can lead to misconception, mis-action, or fixedness [24].

**Lesson 3:** Visualizations should support *selection of focus*: allowing selection of focus of particular subsets of information (while maintaining access to other information), allowing selective highlighting of different aspects of the software, matching representation to focus, supporting different perspectives through different visualizations.

**Rationale:** Experts often engage in systematic exploration, whereas novices are less likely to engage in exploratory interactions [25].

**Lesson 4:** Visualizations should associate perceptual cues with aspects of semantic importance – for example, associating vividness and saliency with key functionality (and avoiding drawing attention to what is less important). Visualizations should provide perceptual cues for deep structures and functionality – in order to assist in the detection of underlying patterns and structures.

**Rationale:** The well-established literature on memory (dating back over a century to pioneers such as Ebbinghaus [26]) presents a number of findings that highlight the role of perceptual cueing in recall. Things which are easily visualized can be held in memory more easily – there is an entire literature on mnemonics, for instance (e.g., [27]). More vivid images tend to be better remembered, although there is little or no correlation between the vividness and the accuracy of a memory [28].
Order of presentation affects recall: the last items in a list tend to be remembered best (‘recency’), as do those which are presented earliest and are therefore most rehearsed or processed (‘primacy’). Recall is weaker than recognition. Memory is an active encoding process, not a passive recording process, and is subject to distortions and bias at the encoding and retrieval stages. The encoding typically involves schemata, and this has many implications for the field of visualization research (Bartlett [20] and others).

**Lesson 5:** Visualizations should use multiple, varied perceptual cues, and they should link different visualizations or representations to promote discovery and insight.

**Rationale:** The psychology literature on mental imagery, informed recently by neurological and experimental evidence that imagery involves different systems (visual, spatial, verbal, temporal, propositional/semantic) which are usually handled in different parts of the brain, gives reason to consider that we maintain multiple mental representations and that imagery is encoded in multiple, different modalities and systems (e.g., [29], [30], [31]). Many of the hypotheses about sources of insight are based on interactions between encodings. For example, Anderson and Helstrup [32] argue that mental imagery is a source of discovery and synthesis. Bartlett [33] wrote that imagery leads into bypaths of discovery. Logie [34] described an economy of images in memory, through which access to previously unrelated bits of information might be achieved: many informative elements are integrated together in a structural whole, increasing the available amount of information in working memory. Lindsay [35] claimed that images allow inferences that are not based on proof procedures. The most pertinent shortcoming of the imagery literature is the tasks involved. Most of the studies deal with particular, usually concrete or real-world images and simple tasks.
Hence their conclusions might not generalize to a realm in which the imagery concerns complex, abstract, imagined images.

**Lesson 6:** Visualizations should provide support for early conceptual activity.

**Rationale:** Experts tend to spend more time than novices planning and evaluating. Experts are better able to form overviews, but thereafter they take longer to develop their understanding and representations, and they consider interactions among functions or components of a system more fully [36]. Similarly, Card, Mackinlay and Shneiderman [37] “propose six major ways in which visualization can amplify cognition…: (1) by increasing the memory and processing resources available to the users, (2) by reducing the search for information, (3) by using visual representations to enhance the detection of patterns, (4) by enabling perceptual inference operations, (5) by using perceptual attention mechanisms for monitoring, and (6) by encoding information in a manipulable medium.” (p. 16) Do we really understand enough about the design of software visualizations to realize these gains? Petre, Blackwell and Green [2] raised a number of cognitive issues facing software visualization, many of them identifying the gaps between the sorts of potential gains enumerated above, and the specific insights and techniques required to provide sufficiently selective and well-designed visualizations which focus appropriately on human cognitive activity. Again, the missing link is between human reasoning about software and what is visualized. The goal of the series of studies presented in this paper was to examine how expert software developers realize the link between their own reasoning and their representations.

### 3. Context: which experts, which software, which overall tasks?

To reiterate: the focus of this paper is on the relationship between expert mental imagery and software visualization in the context of *software design and generation*, rather than legacy software comprehension. The motivation arises from a broader interest in what distinguishes expert reasoning and behaviour in the development of software, and how they might be supported, and the goal of this series of studies was to expose how experts bridge between their internal reasoning and various externalizations, realizations, and implementations of that reasoning. From this, it was hoped that lessons could be identified that would benefit software visualization. Hence the studies examined how experts reasoned about software, elicited their imagery, and examined the sorts of tools they used and built to support their own reasoning. All four studies fit into the broad category of research which Ball and Ormerod [38] characterized as ‘cognitive ethnography’. They view expert activity in context, while contributing to the understanding of cognition ‘in-the-head’, hence attending to “the interplay between people-laden contexts and expert cognition” (p. 148). Ball and Ormerod characterize cognitive ethnography as:

- observationally specific: using small-scale data collection based around representative time slices of situated activity,
- purposive: focusing on selected issues within existing work practices, and
- verifiable: in terms of validating observations across observers, data sets and methodologies.

The studies reported here involved observing and characterizing the nature of practice as it is found. The endeavour concerns identifying key concepts, representations, strategies, and processes in expert practice, rather than profiling cases in terms of existing categories or theories, and therefore it is elicitative, descriptive, and inductive in nature, and a qualitative approach was adopted as appropriate.
It is the existence and nature of the expert practice that is of interest, rather than its frequency. Hence, the emphasis is on qualitative rather than quantitative analysis, although a quantitative consideration of prevalence and impact might well be a matter for further work. The approach provides a means for identifying patterns across individuals: identifying phenomena of interest, cataloguing behaviours and strategies, identifying key factors, and focusing questions for further study. It is informed by (and triangulates among) a variety of inputs, including: direct observation, talk-aloud protocols, interviews, environments and artefacts.

### 3.1 The experts

The experts, from both industry and academia, and from several countries in Europe and North America, share the same general background: all have ten or more years of programming experience; all have experience with large-scale, real-world, real-time, data- and computation-intensive problems; and all are acknowledged by their peers as expert. All are proficient with programming languages in more than one paradigm. The coding language used was not of particular interest in these investigations, but, for the record, a variety of styles was exercised in the examples, using languages including APL, C, C++, Hypercard, Java, Common LISP, macro-assembler, Miranda, Prolog, and SQL. Their preferred language was typically C or C++, because of the control it afforded, but the preference did not exclude routine verbal abuse of the language.

### 3.2 The companies and teams

All were small teams of 3 to 12 members, all included at least one expert software developer of the calibre of ‘super designer’ [39], and all were in companies where the generation of intellectual property and the anticipation of new markets characterized the company’s commercial success. All were high-performance teams: effective intellectual-property-producing teams that tend to produce appropriate products on time, on budget, and running the first time.
The companies were small, not more than 200-300 employees, although some were autonomous subsidiaries of much larger companies.

### 3.3 The domains

Most teams were undertaking large, long-term (1- to 2-year) projects. Often the software was one component of a multi-disciplinary project including computer hardware and other technology. Industries included computer systems, engineering consultancy, professional audio and video, graphics, embedded systems, satellite and aerospace – as well as insurance and telecommunications. Software developers generate between 5 and 10,000 lines of code per compile unit, typically around 200 lines per compile unit, with on the order of 3,000 files per major project. It is important to note that these experts work in relatively small companies or groups that typically produce their own software rather than working with legacy systems. The software they produce is ‘engineering software’ rather than, for example, information systems, although products may include massive data handling and database elements.

### 3.4 Limitations

It is important to note that this work is based on studies in a specific context, one determined pragmatically – by which companies were willing to allow access to their expert software developers. The results presented may not generalize beyond this variety of design and this style of working. Experts are well-known for rationalizing their practice ‘on-the-fly’. As reported by Schooler, Ohlsson and Brooks [40], there is evidence that solving insight problems relies on essentially non-reportable processes, even that verbalisation interferes with some important thought processes. The dangers of elicitation techniques such as interviews are well-known, as reported in [41]. The main problems with data from verbal reports are the likelihood that cognitive processes of interest are not accessible to introspection, and the possibility that the experimenter might bias the response by asking only for certain types of report.
On the other hand, although subjective tests may be suspect, they have in some cases been shown to be reliably consistent, and to produce results just as good as those from more objective tests [42]. There is some evidence that self-ratings do correlate with demonstrated ability [43] and are stable in cases where they do. These studies relied on subjects whose reports of activity in earlier studies corresponded well to other evidence of their activity, such as notes and observed actions, i.e., they relied on subjects who appeared to be ‘good self-reporters’. Similarly, the quality of observation depends on the quality of the observer. Rigour in qualitative analysis derives from systematic practice, including the ‘operationalisation’ of categories as categorisation rules and exemplars that can be employed by other researchers; coding and indexing in a way that ensures an audit trail between data and conclusions; and vigilance in seeking counter-examples to and inconsistencies with apparent patterns. The limitations of verbal reports highlight the need for different forms of input, and for ‘triangulation’ among inputs in order to examine a proposition. Validation in these studies is based on critical scrutiny of the findings by the informants, and on subsequent expert review.

### 4. Study 1: Software developers’ mental imagery

So what evidence is there about the nature of software developers’ mental imagery? A previous paper [44] describes a study into the mental imagery of ten individual expert software developers, who were questioned directly regarding the nature of their mental representations while they were engaged in a design task.

Study description – observation and interview of 10 software developers: Ten expert software developers were asked to design solutions to one of four problems (an interactive noughts and crosses player, an academic timetable maker, a sub-anagram solver, or a pinball path predictor) or to a problem of their choice.
The experts were asked to imagine themselves free of coding restrictions, and they were not asked to implement the solutions as code. The small but non-trivial problems were chosen to evoke rich discussions by addressing classic issues in data representation and by admitting both standard and innovative treatments (the effective task is therefore the generation of potential solutions within this framework). Transcripts taken throughout the task provided a record of software developer remarks and the contexts in which they were made; i.e., the transcripts provided a record of which remarks were spontaneous and which were prompted by questions keyed to the moments when (or the moments just after) the experts showed signs of internally-focussed thinking, such as pauses in writing activity, closing eyes, staring at blank paper (fixedly or with eye movement), or gesturing in the air. Prompting questions were general, e.g., “What do you see?”, “What can you hear?”, “What’s going on?”, “Where are you now?”. After the tasks were completed, the experts were interviewed about their previous responses and about their imagery in general. All notes and other products were collected, and the sessions were recorded. The analysis was inductive and iterative. Notes and transcripts were examined for the imagery descriptions they contained, and those imagery accounts were grouped in terms of patterns and common elements. The groups were data-driven, not shaped by any pre-existing framework. Attention was given to the context in which the account was given. Particular attention was given to the spontaneity of the accounts (as reflected by the fluency and immediacy of description and by the amount of prompting required), to the programmer’s satisfaction with the account, and to any discrepancies. Patterns (and observations about them) were articulated explicitly. The data were re-examined, seeking contradictions or inconsistencies that might challenge the induced patterns. 
Hence, questions or conjectures that arose during examination of the data became the focus of another systematic iteration through the data. This is consistent with the ‘constant comparison’ method suggested by Glaser and Strauss [45], although this study did not include the iterations of data collection suggested by grounded theory. This study consisted of structured observations and interviews attempting to elicit introspective reports of mental imagery, not a controlled laboratory experiment. The experts, all familiar informants whose reports of activity in earlier studies corresponded well to other evidence of their activity, demonstrated a readiness to describe the form and content of their thinking. The main images are as follows, each with an example from one of the informants in italics (see [44] for a more complete summary):

- **dancing symbols** (“text with animation”): “… it moves in my head … like dancing symbols … I can see the strings [of symbols]… assemble and transform, like luminous characters suspended behind my eyelids …”
- **mental descriptions or discussion** (mental verbalisations): “I’m just talking to myself …”
- **auditory images** (auditory presentations of solution characteristics, with auditory qualities like loudness or tone reflecting some aspect of the solution): “It buzzes … there are things I know by the sounds, by the textures of sound or the loudness … it’s like I hear the glitches, or I hear the bits that aren’t worked out yet …”
- **visual imagery**, e.g.: “values as graphs in the head … flip into a different domain … transform into a combined graph … (value against time; amplitude against frequency; amplitude against time) …”
- **machines in their minds** (dynamic mental simulations, of three sorts: abstract machines, pictures of implementations, and mechanical analogies), e.g.: “… nets of stuff … poking inputs and see what filters through … sucking: I’m ready, send me another one …”
- **surfaces** (a strongly spatial, mathematically-oriented imagery of ‘solution surfaces’), e.g.: “It’s like driving across a desert looking for a well. What you actually have is good solutions distributed across this desert like small deep wells and your optimizer trundling along looking for them…”
- **landscapes** (a strongly spatial imagery, a surface or landscape of solution components over which they could ‘fly’), e.g.: “… it’s like flying over landscape of stuff I’m thinking about, parts of the solution … I can see what the terrain is like, where I am and keep an eye on stuff on the horizon, or I can close in on something…”
- **presences** (a sort of imagery that was not verbal, visual, or physical; an imagery of presence (or knowledge) and relationship), e.g.: “… no place holders, no pictures, no representation … just the notion, the symbol entities, semantic entities and the linguistic token … atomic notions. They just ‘are’.”

There were some common characteristics of the imagery, *viz*:

- **stoppably dynamic:** All of the images were described as dynamic, but subject to control, so that the rate could be varied, or the image could be frozen.
- **variable selection:** The ‘resolution’ of the imagery was not uniform; the experts chose what to bring into and out of focus.
- **provisionality:** All of the imagery could accommodate incompleteness and provisionality, which were usually signalled in the imagery in some way, e.g., absence, fuzziness, shading, distance, change of tone.
- **many dimensions:** All of the experts reported using more than four dimensions. The extra dimensions were usually associated with additional information, different views, or strategic alternatives.
- **multiplicity:** All of the experts described simultaneous, multiple imagery. Some alternatives existed as different regions, some as overlaid or superimposed images, some as different, unconnected mental planes.
**naming:** Although some of the imagery was largely non-verbal, the experts all talked about the ready ability to label entities in the imagery. The findings were presented to and discussed with the informants, who were asked to identify any inaccuracies. Although not all of the informants experienced all the forms of imagery collected, all were satisfied with the capture and categorization of their own. 5. Study 2: Externalisation of mental imagery A key question in this area is whether personal mental imagery ever becomes public or externalized. A follow-on question is whether personal mental imagery would be of any use if it does become public. Some images and imagery, for instance, may be extremely useful to the individual, but by their nature may be very difficult to describe verbally and to use as a shared metaphor, because they are not well suited to reification and shared physical representations (such as diagrams, gestures, physical analogies, etc). It seems intuitively obvious that there are times when imagery does become externalized and when the externalisation is useful. Yet there is little published evidence of effective, direct externalisation of personal mental imagery in software development, apart from introspective justifications for software tool design. This section reports a form of externalisation which has been observed to occur naturally in high-performance development teams: when an individual’s mental image becomes focal to team design activity and reasoning [46]. Study description – observation over time, focused on 5 teams in 3 companies, 10 projects overall: The evidence discussed here is a ‘by-product’ of other studies: initially, of the mental imagery study summarized above; subsequently, of a number of other in situ observational studies of early design activity. 
Those studies had other issues as their focus, for example design representations and processes used by multi-disciplinary concurrent engineering teams, representations (including ephemeral ones) used in early ideas capture, and group discussions and processes in very early conceptual design. Thus, the core evidence was accumulated from five different software development teams and ten different projects in three different companies over a period of some five years. The data collection was opportunistic, recording relevant examples as they arose naturally. The records collected for this study included audio recordings, detailed contemporaneous field notes, photographs of whiteboards and other artefacts, and photographs or photocopies of documents, including ephemera. Again, the analysis was inductive, identifying patterns and common elements among the examples, following the iterative, data-driven process described for the previous study. Again, the captured accounts were presented to the informants, who were asked to scrutinize critically the accuracy of the records and their analysis. Only one example of a focal image is given below, but each group manifested at least one example, and the observations reported are representative of all examples. One typical example arose in the context of the mental imagery study described above. The expert was thinking about a problem from his own work and articulated an image: “...the way I’ve organized the fields, the data forms a barrier between two sets of functions...It’s kind of like the data forming a wall between them. The concept that I’m visualizing is you buy special things that go through a wall, little ways of conducting electrical signals from one side of a wall to another, and you put all your dirty equipment on one side of a wall full of these connectors, and on the other side you have your potentially explosive atmosphere. You can sort of colour these areas...there’s a natural progression of the colours. 
This reinforces the position cues … There’s all sorts of other really complex data interlinkings that stop awful things happening, but they’re just infinitely complex knitting in the data. (Of course it’s not pure data … most of the stuff called data is functions that access that data.) The other key thing … is this temporal business we’re relying on … the program is a single-threaded program that we restrict to only operate on the left or on the right … a hah! … the point is that the connections to the data are only on one side or the other. The way I organize the data is … a vertical structure, and the interlinkings between data are vertical things … vertical interlinkings between the data tell me the consistency between the data, so I might end up, say, drawing between the vertically stacked data little operator diagrams …” After he described the image fully, he excused himself and went down the corridor to another team member, to whom he repeated the description, finishing “And that’s how we solve it.” “The Wall”, as it became known, became a focal image for the group.

5.1 How they occurred

In the observed examples, the mental imagery used by a key team member in constructing an abstract solution to a design problem was externalized and adopted by the rest of the team as a focal image. The images were used both to convey the proposed solution and to co-ordinate subsequent design discussions. The examples all occurred in the context of design, and the images concerned all or a substantial part of the proposed abstract solution.

5.2 The nature of the images

The images tend to be some form of analogy or metaphor, depicting key structural abstractions. But they can also be ‘perspective’ images: ‘if we look at it like this, from this angle, it fits together like this’ — a visualization of priorities, of key information flows or of key entities in relationship. The image is a conceptual configuration which may or may not have any direct correlation to the eventual system configuration.

5.3 The process of assimilation

In all of the examples observed, the image was initially described to other members of the team by the originator. Members of the team discussed the image, with rounds of ‘is it like this?’ in order to establish and check their understanding. Although initial questions about the image were inevitably answered by the originator, the locus did shift, with later questions being answered by various members of the team as they assimilated the image. The image was ‘interrogated’: for example, establishing its boundaries with questions about ‘how is it different from this?’; considering consequences with questions like ‘if it’s like this, does it mean it also does that?’; assessing its adequacy with questions about how it solved key problems; and seeking its power with questions about what insights it could offer about particular issues. In the course of the discussion and interrogation, the image might be embellished – or abandoned.

5.4 They are sketched

Sketching is a typical part of the process of assimilation, embodying the transition from ‘mental image’ to ‘external representation’. The sketches may be various, with more than one sketch per image, but a characteristic of a successful focal image is that the ‘mature’ sketches of it are useful and meaningful to all members of the group. This fits well with the literature about the importance of good external representations in design reasoning (e.g., [47], [48], and others). Ko, DeLine and Venolia [49], in a study of the information needs of software developers, found that design questions about intent and rationale were among the most difficult to satisfy. These ‘mature’ sketches, with their shared interpretation, provide a means of preserving intent and rationale within the team.
5.5 Continuing role reflected in team language

If the image is adopted by the team, it becomes a focal point of design discussions, and key terms or phrases relating to it become common. Short-hand references to the image are incorporated into the team’s jargon to stand for the whole concept. But the image is ‘team-private’; it typically does not get passed outside the team and typically does not reach the documentation. Although the ‘mature’ sketches noted above might be strong candidates for documentation, as they embody tested collective understanding, whether that understanding would persist over time and outside the coordination process is an open question. Ko, DeLine and Venolia [49] highlight that “Even when developers found a person to ask, identifying the information they sought was hard to express…” The short-hand references may limit the utility for persistent documentation, creating issues of interpretation for downstream developers who were not part of the coordinated design and development.

5.6 Imagery as a coordination mechanism

The images discussed and interrogated by the team provide a coordination mechanism. Effective coordination will by definition require the use of images which are meaningful to the members of the group. The literature on schema provides explanation here (e.g., [20]). Coordination – meaningful discourse – requires shared referents. If there is a shared, socially-agreed schema or entity, this can be named and called into play. But what happens when the discourse concerns an invention, an innovation, something for which there is no existing terminology, no pre-existing schema? A preverbal image in the mind of the originator, if it cannot be articulated or named, is not available to the group for inspection and discussion. The use of extended metaphor, with properties in several different facets, provides a way of establishing a new schema. In describing the image, the originator is establishing common reference points.
The recipient chooses what is salient in the properties of interest. By interrogating and discussing the image and its implications with the originator, the recipients are establishing a shared semantics, and the originator is co-ordinating with the rest of the team (cf. Shadbolt’s research [50] on people’s use of maps and the establishment of a common semantics). The discussion of the metaphor allows the team to establish whether they understand the same thing as each other. The establishment of a richly visualized, shared image (and the adoption of economical short-hand references) facilitates keeping the solution in working memory (e.g., [34]).

It is interesting to note that this co-ordination issue has been taken on board by recent software development methodologies, which often try to address it by creating an immersive environment of discourse and artefacts intended to promote regular re-calibration with the other team members and with artefacts of the project. For example, ‘contextual design’ [51] describes ‘living inside’ displays of the external representations in order to internalize the model (to take it into one’s thinking), referring to the displayed artefacts as “public memory and conscience”. In another example, ‘extreme programming’ [52] emphasizes the importance of metaphor, requiring the whole team to subscribe to a metaphor in order to know that they are all working on the same thing. In that case, the metaphor is carried into the code, for example through naming.

So, individual imagery does sometimes enter external interaction. The mental imagery used by a key team member in constructing an abstract solution to a design problem can in some cases be externalized and adopted by the rest of the team as a focal image. Discussing, sketching and ‘interrogating’ the image helps the team to co-ordinate their design models so that they are all working on the same problem – which is fundamental to the effective operation of the team.

6. Study 3: To what extent do these software developers use available visualization tools?

Given the naturally-occurring use of image, abstraction, and sketches and other external design representations, to what extent do these software developers use available visualization tools to help them? This section reports on opportunistic interviews with proficient software developers about their use of (or reluctance to use) available software visualization tools.

*Study design – interviews of 12 software developers in 3 companies, and review of practice:* Semi-structured interviews were conducted with 12 software developers in high-performing teams, from 3 different companies. This study was conducted in the shadows of other studies into other aspects of software development (as described in Section 5 above). Key team members were interviewed – those likely to make decisions on models and solutions as well as decisions on tools. The interviews included a ‘guided tour’ of their libraries of applications that included software visualization. The ‘tours’ were led by the informants. Questions were asked only when the protocol was not covered spontaneously. For each relevant application, information was collected regarding:

- a general description, including what sorts of visualizations it afforded;
- how it had been used, for which tasks, with examples if available;
- to what extent it had been used;
- if it was still in use, what it was considered to be ‘good for’;
- if it had been abandoned, why it had been set aside.

Detailed notes were taken (including verbatim quotation), and audio recordings were made where permitted. The face-to-face semi-structured interviews were augmented with telephone and email queries to additional informants. Again, the analysis was inductive, identifying patterns and common elements among the accounts.
Material from additional informants was not used in the initial analysis, but was used as a validation sample for the observations drawn from the principal data. The software developers talk about software visualization with respect to three major activities: comprehension (particularly comprehension of inherited code), debugging, and design reasoning. These software developers showed no reluctance to investigate potential tools, and they described trials of tools as diverse as the Burr-Brown DSP development package, Cantata, MatLab, the Metrowerks CodeWarrior IDE, Mind Manager, MS Project, MS-Select, Rational Rose, Software through Pictures (STP), and Visio (among others). The tools they did take up usually related to software comprehension and debugging. In that context, the tools that persisted were those which were robust, worked at scale, and associated well with their preferred software development environment. But even in debugging and comprehension, the experts relied more on their own systematic practices than on visualizations – and their use of available visualizations related to how directly the visualizations supported their practices. Tools which simply represented available information, which failed to provide forms of abstraction or selection, or which embodied assumptions at odds with the experts’ practices were discarded. Take-up was extremely low in the context of design and generation. The exception was for general tools such as MatLab which allowed software developers to realize their own visualizations (as discussed in the next section) – in effect for visualization-builders rather than visualizations per se.

So what makes other tools into shelf-ware? (Please note that ‘not invented here’ was never offered as a reason not to use a tool.)

**reliability:** Packages that crashed or misbehaved on first exposure did not usually get a second chance.

**overheads:** The cost of take-up was perceived as too high. Usually the cost was associated with taking on the philosophies, models or methodologies embodied in and enveloping the visualization elements. Where there were major discrepancies of process between the package and existing practice, or where there was incompatibility between the package and other tools currently in use, the package was typically discarded as likely to fail. It must be remembered that these are high-performance teams, with well-established methodologies and work practices. They continually seek tools and methods that augment or extend their practice, but they are reluctant to change work practices (particularly work style) without necessity.

**lack of insight:** Tools that simply re-present available information (e.g., simplistic diagram generation from program text) do not provide any insight. Experts seek facilities that contribute to insight, e.g., useful abstractions, ready juxtapositions, information about otherwise obscure transformations, informed selection of key information, etc.

**lack of selectivity:** Many packages produce output that is too big, too complicated, or undifferentiated. For example, all processes or all data are handled in the same way, and everything is included. Experts want ways of reasoning about artefacts that are ‘too big to fit in one head’; simply repackaging massive textual information into a massive graphical representation is not helpful. They seek tools that can provide a useful focus on things that are ‘too big’, that can make appropriate and meaningful selections for visualization.

**lack of domain knowledge:** Most tools are generic and hence are too low-level. Tools that work from the code, or from the code and some data it operates on, are unlikely to provide selection, abstraction, or insight useful at a design level, because the information most crucial to the software developer – what the program represents, rather than the computer representation of it – is not in the code.
At best, the software developer’s intentions might be captured in the comments. As the level of abstraction rises, the tools needed become more specific: they must contain more knowledge of the application domain. Experts want to see software visualized in context – not just what the code does, but what it means.

**assimilation:** Sometimes tools that do provide useful insights are set aside after a period of use, because the need for the tool becomes ‘extinct’ when the function (or model or method) the tool embodies is internalized by the software developer.

What the software developers seek in visualization tools (selectivity, meaningful chunking or abstractions) – especially in tools for design reasoning – appears to relate directly to their own mental imagery, which emphasizes selectivity, focus, chunking and abstraction. It relates as well to what the literature has to tell us about the ways in which human beings deal with massive amounts of information, e.g., chunking, selection, schema – and to the nature of the differences between experts and novices. They want (and, as the next section will indicate, they build) tools that have domain knowledge and can organize visualizations according to *conceptual structure*, rather than physical or program structure – but that can maintain access to the mapping between the two. For example, the software developers talk about tracking variables, but at a *conceptual* rather than a code level. A ‘conceptual variable’ might encompass many elements in the software, perhaps a number of data streams or a number of buffers composing one thing, with the potential for megabytes of data being referred to as one object. The software developers distinguish between ‘debugging the software’ (i.e., debugging what is written) and ‘debugging the application’ (i.e., debugging the design, what is intended).

7. Study 4: The visualizations experts build for themselves

So, what sorts of visualization tools do experts build for themselves, and what relationship do they have to experts’ mental imagery?

*Study design – opportunistic observation over many years: multiple companies, multiple projects:* Over a number of years, in association with other studies of software development, we asked experts and high-performing teams to show us the visualization tools they built for themselves. In some cases, the demonstrations were volunteered (for example, in the mental imagery study, when one expert in describing a form of imagery said, ‘here I can show you’; or in the survey of application use, when some software developers included their own tools in the review of tools they’d tried). Examples were collected to the extent permitted by the informants; access was sensitive because of the proprietary nature of the material examined, and the examples offered below are used with permission. Each example was documented in field notes, accompanied when possible by screen dumps annotated by the developers. The visualization developers were asked to demonstrate the visualization, and they were interviewed about their intentions for and use of the visualization. Again, the analysis was inductive, seeking patterns and lessons across the examples. And, again, the findings were presented to and discussed with the informants, who were asked to scrutinize the accounts and their analysis critically, identifying any inaccuracies.

The experts’ own visualizations tended to be designed for a specific context, rather than generic. In one expert’s characterisation of what distinguished his team’s own tool from other packages they had tried: “the home-built tool is closer to the domain and contains domain knowledge”. The tools appeared to fall into two categories, corresponding to the distinction the experts made between ‘debugging the software’ and ‘debugging the application’.
Each category is discussed in turn.

7.1 Low-level aspect visualization

A typical visualization tool in this class is one team’s ‘schematic browser’ (Figure 1). This program highlighted objects and signal flows with colour. It allowed the user to trace signals across the whole design (i.e., across multiple pages), to move up and down the hierarchy of objects, and to find and relate connections. It moved through the levels of abstraction, relating connections at higher levels to connections lower down through any number of levels and through any number of name changes, for example following one conceptual variable from the top to the bottom of a dozen-deep hierarchy and automatically tracking the name changes at different levels. It allowed the user to examine the value of something on a connection, and to manipulate values, for example altering a value on one connection while monitoring others and hence identifying the connective relationship between different parts. The program embodied domain information, for example having cognizance of the use of given structures – in effect, of what they represented, of the conceptual objects into which schematic elements were composed. It appears that these tools reflect some aspects of what the imagery presents, but they do not ‘look like’ what the engineers ‘see’ in their minds. There are a number of such tools, especially ones that highlight aspects of circuits or code (e.g., signal flows, variables) or tools for data visualization, as well as tools that represent aspects of complexity or usage patterns. In effect, they visualize things engineers need to take into account in their reasoning, or things they need in order to form correct mental models, rather than depicting particular mental images.

7.2 Conceptual visualization

A typical visualization tool in this class is one team’s ‘rubber sheet’ (Figure 2).
This program is a visualization of a function used to build digital electronic filters for signals; such filters normally have a very complex and non-intuitive relationship between the values in the equations and the effect they have on the frequency response of the resulting filter. The tool allows the user to design the frequency response visually by determining the profile of a single line across the ‘rubber sheet’. The user moves peaks and dips in a conforming surface, rather than changing otherwise random-looking values in equations. The altitude one encounters on a walk through the resulting terrain along a particular path determines the final frequency response. The insight comes from the ‘terrain’ context. If one is just moving the points where the peaks and dips are, and can only see the values along the line, it is hard to see how the values along the line are varied by moving the peaks and dips. However, if one can see the whole terrain, it is easy to comprehend why the peaks and dips have moved the fixed path up and down. Designers do not need to know all this terrain information in order to know what the filter does, but it provides the link between what the filter does and what the designer can control. It appears that these tools can be close to what engineers ‘see’ in their minds. (Indeed, this example was demonstrated as a depiction of one software developer’s personal mental imagery.) As in the example, they often bear a strong resemblance to mathematical visualizations or illustrations.

The two categories of tool differ not just in their relationship to experts’ mental imagery, but also in how they are used. The low-level aspect visualizations tend to be used to debug the artefact. They presuppose that the expert’s understanding of the artefact is correct, and examine the artefact in order to investigate its behaviour. The conceptual visualizations tend to be used to debug the concept or process – to reason about the design.

8. Implications and discussion

It has been suggested that the big issues that face software visualization – particularly with respect to design – relate to matching visualizations to human needs. What can we and can we not visualize? Are we visualizing the right things? Currently, it is still arguable that what is visualized is what can be visualized, not necessarily what needs to be visualized. The big technical challenges lie in developing the analysis and selection techniques needed to tailor visualizations to support human cognition. Tools that simply re-present available information (e.g., simplistic diagram generation from program text) do not provide insight. Software developers seek facilities that contribute to insight, e.g., useful abstractions, ready juxtapositions, information about otherwise obscure transformations, informed selection of key information, etc. Visualization developers are still struggling with what Gómez-Henríquez [53] calls the ‘probe effect’: ensuring that the visualization is reliable enough that what the user sees is what is really happening – whether or not it is what the user needs to see. Underlying the challenge of identifying the most appropriate visualization for a given task is the need to reach a well-founded understanding of what makes visualizations appropriate for given tasks. The features observed in expert practitioner behaviour in this domain are consistent with findings in a range of related literatures. What are the implications, and what practical advice can be offered as a result of these literatures?

8.1 Visualizing concepts and intentions vs. re-presenting implementations

There are as yet few visualizations that support conceptual design. There is a need to provide conceptual visualizations, rather than just re-presentations of the code, performance or data flow.
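To make the ‘rubber sheet’ from Study 4 concrete: for a digital filter, the ‘terrain’ can be read as the magnitude of the transfer function |H(z)| over the z-plane, where poles raise peaks and zeros press dips, and the fixed ‘path’ as the unit circle, along which the altitude is the frequency response. The following is a minimal numerical sketch under that interpretation only; the pole/zero values and function names are illustrative assumptions, not details of the tool described in Study 4.

```python
import cmath

def H(z, poles, zeros):
    """Transfer function of a pole-zero filter, evaluated at a point z."""
    num = den = 1.0 + 0j
    for q in zeros:
        num *= z - q
    for p in poles:
        den *= z - p
    return num / den

def terrain(poles, zeros, n=20, extent=1.5):
    """The 'rubber sheet': |H| sampled on a grid over the z-plane."""
    step = 2 * extent / n
    return [[abs(H(complex(-extent + i * step, -extent + j * step), poles, zeros))
             for i in range(n + 1)] for j in range(n + 1)]

def frequency_response(poles, zeros, n=128):
    """The fixed 'path': the altitude of the terrain along the unit circle."""
    return [abs(H(cmath.exp(1j * cmath.pi * k / n), poles, zeros))
            for k in range(n + 1)]

# Illustrative filter: a conjugate pole pair near angle pi/4 (a 'peak'
# pulling the path upward there) and zeros at z = +/-1 ('dips' at DC
# and at the Nyquist frequency).
poles = [0.9 * cmath.exp(1j * cmath.pi / 4), 0.9 * cmath.exp(-1j * cmath.pi / 4)]
zeros = [1.0 + 0j, -1.0 + 0j]

resp = frequency_response(poles, zeros)
peak_k = max(range(len(resp)), key=lambda k: resp[k])
print(peak_k / 128)  # ~0.25: the resonance sits near the pole angle pi/4
```

Moving a pole or zero reshapes the whole surface, and the designer can then see why the profile along the fixed path rises or falls – which is the insight the ‘terrain’ context provides.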
This highlights the need to make available information that is not typically contained in the source code: information about the originators’ intentions and models of the software. This implies that the visualizations (and the tools that drive them) must embody more knowledge of the application domain. As discussed in Section 6, experts want to see software visualized in context – not just what the code does, but what it means.

8.2 Intelligent selection requires domain knowledge

The utility of visualization lies not in mere re-presentation of data, but in an appropriate and meaningful distillation and abstraction of the data in order to provide access to desired information about the software. That is, it is no good translating massive source code into an equally massive visualization; what is required is views on the artefact that disclose significant patterns within it. Tudoreanu [54] discusses software visualization in terms of “cognitive economy”: minimizing cognitive load by reducing the amount of information handled by the user and maximizing the information pertinent to the user’s problem (which is different from reducing complexity related to visual displays). Cognitive economy requires that the visualization be customized for the problem at hand, tailored to the user’s task and goals. This, in turn, requires some knowledge of the domain. According to Chi, “…good visualizations are coupled with good analysis algorithms. We can get the most power out of visualization if we use a sophisticated analysis computation that distils the data further from the raw data.” [55] One implication is that generic tools are not selective. Because they do not contain domain knowledge, they cannot depict what the software developers actually reason about when they reason about design. Automatic generation from code is inherently unlikely to produce conceptual visualizations, because the code does not contain information about intentions and principles.
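Chi’s point – that the power comes from the analysis computation that distils the data – can be illustrated with a toy sketch. Everything below is hypothetical; the point is only that the task-specific relevance function, not the rendering, is what achieves cognitive economy, and it is exactly where domain knowledge enters.

```python
def distil(events, relevance, k=3):
    """Reduce raw events to the k items most pertinent to the user's task.

    `relevance` is a task- and domain-specific scoring function: the part
    of the pipeline that cannot be generic.
    """
    return sorted(events, key=relevance, reverse=True)[:k]

# Hypothetical raw trace: (component, milliseconds spent).
trace = [("parser", 3), ("optimizer", 120), ("codegen", 15),
         ("io", 95), ("logging", 2), ("gc", 40)]

# For a 'where is the time going?' task, relevance is simply time spent;
# a different task would plug in a different scoring function.
hotspots = distil(trace, relevance=lambda e: e[1])
print(hotspots)  # [('optimizer', 120), ('io', 95), ('gc', 40)]
```

A visualization built on `hotspots` shows three bars the user can reason about; one built on `trace` – or on the megabytes of raw events behind it – merely re-presents the data.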
The extent to which domain knowledge can be encoded is a suitable topic for further research.

8.3 Design visualization versus program visualization

The distinction between low-level aspect visualization and conceptual-level visualization (in the self-built tools) is also important. At feature level, the visualization contributes to the mental imagery rather than reflecting it. At conceptual level, by contrast, it appears that there can be a more direct relationship between the mental imagery and the software visualization. More work is also needed on design visualizations (as opposed to software visualizations) and on the interaction between the two – to what extent does understanding design visualization contribute to solving problems in the domain of program visualization? It is important to remember that there are differences between design visualization and program visualization. Design visualization is a divergent thinking problem, in the early stages at least, which requires creativity and a readiness to think ‘outside the box’. Schön [48] talks about a design as a ‘holding environment’ for a set of ideas. The importance of fluidity, selectivity, and abstract structure is emphasized both by the experts’ own mental imagery and by their stated requirements for visualization tools. It is little surprise that, in this context, experts conclude: “NOBO [whiteboard]... The best of all tools until the pens dry out. No question.” and “Nothing is as good – and as quick – as pencil and paper.” Program visualization, in contrast, often involves dealing with existing legacy systems, where an important part of the task is reconstructing the design reasoning of previous software developers which led to the system under investigation – this paper contributes little to legacy system comprehension.

9. Conclusion

It appears that, in the context of the design and generation of ‘engineering software’, there is sometimes a fairly direct relationship between mental imagery and software visualization – but it is more often the case that visualizations contribute to, rather than reflect, mental imagery. It also appears that the externalisation of expert mental imagery can play an important role in the design reasoning of high-performance teams, both through the co-ordination of team design discussions and through embodiment in custom visualization tools. Experts tend not to use available visualization tools, because the tools do not contribute sufficiently to design reasoning. The experts’ custom visualization tools differ from others in their embodiment of domain knowledge, facilitating investigation at a conceptual level.

10. Acknowledgements

This paper was first presented as a keynote address at the Second Program Visualization Workshop, Hornstrup Centret, Denmark, in June 2002. It is reproduced here with the kind permission of the workshop chair, Prof. Mordechai Ben-Ari. The author is profoundly grateful to the expert software developers, without whom the paper would not be possible, and to their companies, which permitted access. Thanks are due to colleagues who provided critical commentary, including Alan Blackwell, Peter Eastty, Marc Eisenstadt, Henrik Gedenryd, Simon Holland, William Kentish, Clayton Lewis, Hugh Robinson, Jennifer Rode, and Helen Sharp – as well as the anonymous referees. Special thanks are due to Gordon Rugg, who was instrumental in writing the paper. Some of the observations were conducted under EPSRC grant GR/J48689 (Facilitating Communication across Domains of Engineering). Others were conducted under an EPSRC Advanced Research Fellowship AF/98/0597. The author is a Royal Society Wolfson Research Merit Award Holder.

11. References

Figure 1: A ‘schematic browser’ – an example of low-level aspect visualization.
Figure 2: The ‘rubber sheet’ – an example of concept visualization.
Master's Thesis Nr. 78
Systems Group, Department of Computer Science, ETH Zurich

AIM: A System for Handling Enormous Workloads under Strict Latency and Scalability Regulations

by Georgios Gasparis

Supervised by Prof. Donald Kossmann, Lucas Braun, Thomas Etter, Martin Kaufmann

April 1, 2013

Abstract

In recent years, a need has emerged for executing online processing on huge amounts of data. Companies and organizations keep aggregating information and attempt to shape their policies according to the statistics they collect. System specifications become more and more demanding, and time becomes precious. This Master's Thesis proposes a novel idea for developing a system that can efficiently run real-time processing on streams of incoming events. In addition to events, a set of rules, called campaigns, exists in the system. These rules consist of a number of predicates and should be evaluated against the incoming events in minimal time. We propose a system that is composed of different building blocks, each of which is responsible for a specific operation. Different approaches for these components will be examined in order to achieve maximal performance. We implement such a system on top of key-value stores that reside in main memory, because we want to take advantage of the low latency of main-memory access. The proposed system aims at managing workloads characterized by extremely high arrival rates while ensuring low latencies and high event rates.
Contents

1 Introduction
  1.1 Motivation
  1.2 System Overview
  1.3 Requirements
  1.4 System Components and Design Space
  1.5 Related Work
  1.6 Thesis Structure
2 The Analytics Matrix (AM)
  2.1 AM Attributes
    2.1.1 Value Type
    2.1.2 Window Type
    2.1.3 Example
  2.2 Implementing the AM
    2.2.1 Separated Storage
    2.2.2 Integrated Storage
  2.3 Updating the AM
3 Campaigns
  3.1 Properties
    3.1.1 Validity Period
    3.1.2 Firing Policy
    3.1.3 Condition
    3.1.4 Other
  3.2 Implementation
    3.2.1 Representation
    3.2.2 Data Structure
4 Campaign Index
  4.1 Motivation
  4.2 Building the Index
    4.2.1 Indexing a Predicate
    4.2.2 Data Structures
  4.3 Probing the Index
5 Performance Evaluation
  5.1 Benchmark and Workload
  5.2 Experimental Setup
  5.3 Results and Analysis
6 Conclusion and Outlook

1 Introduction

1.1 Motivation

With more than 6 billion active cell phone subscriptions worldwide, an abundance of information is generated day by day. Subscriber-generated events, such as telephone calls and network activity, build an ongoing huge stream of data from which valuable information can be extracted. This information can be used, for example, to determine premium customers or to provide optimal contracts to subscribers. More and more telecommunication companies are eager to design and run their service policies on top of the data they acquire from their own customers. Our work was motivated by a company that is interested in associating subscribers' activities to prizes by means of a rule-based evaluation system. The rules of that system involve campaigns and subscribers' statistics, and include time as an additional dimension. For instance, the company wants to reward people who spend a certain amount of money on phone calls every week by giving them a discount on their next few calls. The system implementing this functionality must be able to carry out its tasks efficiently, and it must be scalable.
Moreover, the system must handle high event rates (several thousand events per second), and subscribers should be notified after only a short delay (a few milliseconds). The rigid time constraints on execution and the large time windows for which the data needs to be maintained make the problem even more intriguing.

1.2 System Overview

The system we propose is called Analytics in Motion (AIM). AIM is a complex system responsible for carrying out a number of computational tasks. The AIM system consists of the following subsystems:

- Stream and Event Processing (SEP)
- Real Time Analytics (RTA)

The main goal of the SEP system is to match campaigns to subscribers in real time. To do that, it stores information related to campaigns, subscribers and events. A subscriber is a customer of a telecommunication company who incurs a number of events per day. For the sake of simplicity, we assume that an event corresponds to a telephone call, although other types of events, like network activity events, exist. A campaign consists of a rule, which involves a condition, and a reward. If the condition is fulfilled, the subscriber earns the reward associated with the campaign. The inputs of the SEP system are the user-generated events. A number of campaigns (typically several hundred) exist in our system at each point in time. Technically, there are a number of tasks that the SEP system must accomplish:

1. It must compute and maintain statistics, such as "number of calls this week" for each subscriber, based on events.
2. It must trigger a campaign whenever its condition is met.

The RTA system processes simple, ad-hoc real-time analytical queries, based on the statistics maintained by the SEP system. In addition to these statistics, RTA considers dimension data that is loaded via ETL\(^1\) processes (e.g., region information, etc.). The RTA queries are expressed in a SQL-like manner. As RTA will be built upon the SEP system, it is natural to start with the latter.
This is why this thesis focuses on the design and implementation of the SEP system.

1.3 Requirements

The key requirements that the SEP system must satisfy are the following:

1. **Latency**: Every event must be processed within 10 milliseconds. That is, subscribers must be notified in real time whether a campaign matches their activities.
2. **Cost (Machine / Subscriber)**: Each subscriber incurs an average number of events per second (e.g., three telephone calls per day). We would like to minimize the number of machines needed for a given subscriber population. Correspondingly, we would like to maximize the number of subscribers and events that can be handled by a single machine.
3. **Scalability / Elasticity**: It should be possible to add (or remove) machines dynamically to support a growing (or shrinking) number of subscribers.

1.4 System Components and Design Space

There are many alternative ways and technology building blocks to carry out the tasks mentioned before. One approach, for instance, is to implement each campaign as a continuous query and the incoming events as a data stream, and then use a data stream management system, such as Esper\(^1\) or Storm\(^2\), as a basis for the SEP system. Our approach is based on a combination of different building blocks. We refer to it as the "Materialized View"\(^2\) approach, and we will study it in the remainder of this thesis. Figure 1 gives an overview of this approach. Here are its most important design points:

1. There is an "Analytics Matrix" (AM) that records the current value of all statistics for every subscriber. This table changes with every event. We label the set of statistics kept for a specific subscriber as his/her "Profile".

---

\(^1\)Extract, transform and load: a process that is mainly used in data warehousing.
\(^2\)We borrow this terminology from the relational database world, as the AM component of our approach imitates the behaviour of a view\(^3\), in the sense that it precalculates operations.

2. There is a "Campaign Index" that keeps track of all the active campaigns. This component is updated for each new campaign and for each existing campaign that has expired. The campaign index is probed for each event using the updated subscriber profile, in order to find out which campaigns are relevant for the subscriber.

Figure 1: The Materialized View approach

1.5 Related Work

There has been a significant amount of work on use cases very similar to the one presented in this Master's Thesis. This work is broadly known as either rule condition testing or subscription matching. The main idea involves a system that must test each newly arrived piece of information against a collection of rules to find those that match. A large number of proposals tailored to different principles try to address the problem of evaluating conditions efficiently. These solutions range from triggers on relational databases to continuous query systems and main-memory matching algorithms. Databases offer trigger functionality for checking a condition when an event comes in. Scalability is an issue, as there are upper bounds on the number of triggers. General-purpose databases do not account for efficient execution of triggers. Much effort has been put into overcoming these barriers and transforming databases from simple storage layers into active systems able to capture changes [4, 5]. In the use case we consider, conditions are evaluated against aggregated information computed over events. This necessitates a flexibility in storing and updating information that relational database systems do not provide us with.
Further aggravating the problem are the time requirements specified in section 1.3, which eventually impose the use of in-memory storage, avoiding I/O bottlenecks. Data Stream Management Systems have been around for many years. They follow a more active approach than relational databases, monitoring data feeds and reacting to conditions of interest. Some of them [6, 7] are still at an experimental level and therefore cannot be employed, but there are others, like Esper [1] or the recently released Storm [2], that seem promising. We consider the evaluation of such systems as future work. Although their performance might indeed be comparable to what we achieved, the fact that no access to internal data structures or data is permitted is a limiting factor for our use case. Specifically, in section 1.2 we stated that the RTA system depends on the tasks that the SEP system carries out. As a consequence, having no data at hand poses an obstacle in the path of the RTA system's execution. Our proposed system employs memory-based algorithms for manipulating the incoming events and guaranteeing efficient rule checking. A lot of main-memory algorithms have been published [8, 9]. The vast majority of them emphasize tree data structures to index rule predicates [10, 11]. The key idea is that traversing the tree will lead us to potentially matching rules. This is advantageous, since not all predicates need to be checked. The "Materialized View" approach belongs to the main-memory algorithms category, and helps us to be both fast and general.

1.6 Thesis Structure

The remainder of this thesis is structured as follows. Section 2 describes the Analytics Matrix component mentioned in 1.4. Section 3 details the properties and the implementation of campaigns. Section 4 presents our motivation for the campaign index component, the index-building procedure, and finally the way we probe the index. In section 5 we illustrate the experimental evaluation of the system.
Finally, section 6 concludes the thesis and gives an outlook on future work.

---

\(^3\)Real-time analytical queries run on the statistics we maintain in the AM.
\(^4\)Accessing the AM table is feasible in that case. Therefore, no problems for the RTA system arise.

2 The Analytics Matrix (AM)

In this section we thoroughly examine the Analytics Matrix (AM) component of the "Materialized View" approach. It is a table containing one record per subscriber that captures his/her most recent information. This table allows us to precompute aggregations and to alter their values incrementally, minimizing expensive computations during execution. An AM record is composed of a large number of attributes, which are statistics computed over a fixed set of metrics described below.

2.1 AM Attributes

The described stream and event processing system deals with numerous user-generated events. Each event is composed of a number of attributes such as: caller id, cost and is-long-distance. Based on these attributes, we deduce a meaningful set of metrics on top of which aggregations are computed. The three basic metrics are:

- Call
- Cost
- Duration

The first one signifies that a call took place and always has value 1, as every event corresponds to one call. "Cost" refers to the amount of money the caller spent on a call, and "Duration" refers to how long the underlying event lasted. Besides these plain metrics, some more specialized metrics that take location information (the is-long-distance attribute) into account are present. Specifically, in addition to the "Call" metric, we also consider "local Call" and "non local Call" metrics. Similarly, we keep track of money spent on local and non-local calls ("local Cost" and "non local Cost" metrics respectively). The same pattern holds for Duration as well ("local Duration" and "non local Duration").
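The mapping from an event to this metric set can be sketched as follows. This is an illustrative sketch, not the thesis implementation; the event field names and the helper function are assumptions:

```python
# Sketch (illustrative, not thesis code): deriving the metric set of
# section 2.1 from a single call event. Field names are assumptions.

def metrics_from_event(event):
    """Map one call event to the nine metrics: Call/Cost/Duration plus
    their local and non-local variants."""
    local = not event["is_long_distance"]
    m = {"Call": 1,                      # every event is exactly one call
         "Cost": event["cost"],
         "Duration": event["duration"]}
    for name in ("Call", "Cost", "Duration"):
        m["local " + name] = m[name] if local else 0
        m["non local " + name] = 0 if local else m[name]
    return m

# A long-distance call of 20 minutes costing $1.4:
m = metrics_from_event({"cost": 1.4, "duration": 20, "is_long_distance": True})
```

Each metric thus always has a value for every event; the local/non-local split simply zeroes out the variant that does not apply.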
In section 1.2 we noted that the SEP system retains information regarding subscribers' event history, with the aim of matching campaigns to them. This information is a set of statistics consisting of aggregations over time regarding the metrics we defined before. For instance, total money spent (Cost) this week, or the longest long-distance call (maximum non local Duration) today, belong to this set. They are important because they reflect the up-to-date subscriber profile. These statistics are the attributes that form the Analytics Matrix (AM) component. Each of these attributes has the following properties:

- Value type
- Window type

The combination of these properties determines the size and the computational complexity of an AM attribute.

2.1.1 Value Type

With respect to their value type, AM attributes can be divided into two categories:

- Selective (longest call, highest cost, minimum duration, etc.)
- Accumulative (number of calls, average cost, total duration, etc.)

This distinction is based on the way the value of an AM attribute is calculated. In other words, it originates from the aggregation function applied to the metrics. The aggregation functions min and max belong to the Selective value type, whereas the sum and average functions are part of the Accumulative value type. Note that for the average aggregation, in contrast to the other functions, we maintain one more value: we need one value recording the current sum and a counter of the observations that occurred. We opt not to use another slot for storing the actual average value, but to compute it on demand, with the aim of keeping space consumption as low as possible.

2.1.2 Window Type

AM attributes are also characterized by a window type. That is, we pair an AM attribute value with time information showing the time interval within which the value is valid.
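The value-type distinction of section 2.1.1 can be sketched as follows. This is a minimal illustration under assumed class names, not the thesis code; note how the average keeps only a running sum plus a counter, as described above:

```python
# Illustrative sketch of the two value types from section 2.1.1.
# Selective attributes keep one extremum; the average keeps a running
# sum plus an observation counter and is computed on demand.

class SelectiveAttr:
    def __init__(self, op):            # op is min or max
        self.op, self.value = op, None

    def update(self, metric):
        self.value = metric if self.value is None else self.op(self.value, metric)

class AvgAttr:
    def __init__(self):
        self.total, self.count = 0.0, 0   # two stored values, no avg slot

    def update(self, metric):
        self.total += metric
        self.count += 1

    def value(self):                      # average computed on demand
        return self.total / self.count

longest_call = SelectiveAttr(max)
avg_cost = AvgAttr()
for dur, cost in [(20, 1.4), (53, 9.1), (5, 0.5)]:
    longest_call.update(dur)
    avg_cost.update(cost)
```

Computing the average lazily trades a division per read for one fewer stored value per attribute, which matters when the AM holds one record per subscriber.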
For this purpose we use the timestamp attribute of the subscriber's last event, which we store in the AM record. This timestamp indicates when a call event starts. Time windows slide as new events arrive at the system; old values are squeezed out and fresh values replace them. We consider three possible window types:

- Tumbling (e.g. last week)
- Stepwise (e.g. last four days with a step of two days)
- Continuously moving (e.g. last 24 hours)

As far as the Tumbling window is concerned, we point out that this type is a special case of the stepwise type in which the window moves by a step equal to its size. The Tumbling window is the simplest type in terms of space consumption and computational effort. Regarding the former, one value is sufficient for all the aggregation types except the average (which needs two values). Given a newly arrived event, we must decide whether it belongs to the same time window as the previous event of this subscriber. If so, we update the corresponding value; otherwise, we have to reset the value. Stepwise windows are defined by their size and step, that is, the slide size. We assume that the window size is divisible by the window step. For example, we cannot think of any meaningful use case for a window having a size of 6 days and a step of 4 days. This assumption saves space, as we have to store size/step values instead of size values. Again, the average aggregation type needs more space, because size/step counters must be kept in addition to the values. Similarly to the previous situation, when an event arrives at the system, we first check whether the previously active window is still active, or whether a new window has displaced it. In the first case, we simply modify the value accordingly, whereas in the latter, we have to compute how many steps the currently active window has slid from the previous one.
This number indicates how many of the values we maintain are stale and should hence be reset. Recording the fresh value is the last step of the update procedure. The Continuously moving window is the most complicated window type. Moreover, it is not resettable, as it is always active. This window moves constantly: information is added incrementally and removed continuously. The space needed for the Continuous window cannot be computed in advance, something that is possible for the other two cases mentioned above. Each of the three window types is associated with size information denoting how long the window lasts. Initialization information specifying the start boundary of the very first window to become active is also needed. Based on this fundamental information we are able to calculate the start, and as a result the end, of any later window. Moreover, the moment for resetting an attribute's value is needed for the first two window types. For the sake of simplicity, we assume that a window always starts at 00:00 and ends at 23:59. Consider a tumbling window with a size of one week: the window starts every Monday at 00:00 and ends the following Sunday at 23:59.

2.1.3 Example

As of now, our system supports all the different value types for the tumbling window type. Table 1 illustrates an instance of the Analytics Matrix in the system. One can see that the table hosts one record per subscriber. Also, the timestamp we use for calculating the active window for each AM attribute is depicted. A number of attributes having tumbling window type and different value types are presented. The "Calls this week" attribute has accumulative value type (sum), whereas the second statistic we maintain (Max local Duration today) has selective value type (max). Finally, "Avg Cost today" has accumulative value type (average).
It is clearly shown that for the average aggregation we store two values: a running sum and a counter of the observations.

<table>
<thead>
<tr>
<th>Subscriber</th>
<th>Timestamp</th>
<th>Calls this week</th>
<th>Max Duration of local calls today</th>
<th>Avg Cost today</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>1358938791</td>
<td>3</td>
<td>20 mins</td>
<td>$1.4</td>
</tr>
<tr>
<td>2</td>
<td>1362948694</td>
<td>7</td>
<td>53 mins</td>
<td>$9.1</td>
</tr>
</tbody>
</table>

Table 1: An instance of the Analytics Matrix

2.2 Implementing the AM

Two dimensions have been considered with regard to the implementation of the AM. The first dimension refers to the separation or integration of the storage layer and the event processing logic. To be more specific, we can either make the entire AM table visible to all the application servers, or split the AM table into smaller parts and assign an app server to each of these parts. Both approaches have advantages and disadvantages that are studied below. The second dimension is related to the technology we employ for the physical storage of the AM table. Two alternatives exist: Relational Database Management Systems (RDBMS) and key-value stores.

2.2.1 Separated Storage

In the Separated Storage approach (figure 2) we opt to detach the storage layer from the event processing cycle. One can consider this approach as a shared-disk architecture [12], where the AM table lies in a shared resource. Here, an application server is responsible for storing campaigns along with the campaign index. Additionally, an event may be routed to any available app server. All machines are able to process events of all subscribers, as they see the entire AM table. The fact that a new layer (the storage layer) is introduced leads to an increased latency. The ability to scale out/down and to ensure fault tolerance compensates for this overhead.
Since all app servers are granted access to the AM, starting a new app server only entails reading the campaigns and building the campaign index. Moreover, no pivotal problem arises in case of an app server failure, since the index is fully reconstructible and the subscribers' history is kept on an external machine. The technology we have opted for to store the AM is RamCloud [13]. RamCloud is a scalable, high-performance key-value store featuring appreciably low latencies. It is optimized to run in an InfiniBand network [17]. Additionally, it allows for conditional writes, which are of utmost importance for this project. In lay terms, a write is applied if and only if a specified condition is fulfilled (the value we retrieved has not been modified in the meantime). We have left the examination of fault tolerance out of the picture for now; we will study it in depth later.

2.2.2 Integrated Storage

In the Integrated Storage approach (figure 3), on the other hand, every application server acts individually (as in a shared-nothing architecture [12]), since it not only stores campaigns and the campaign index, but also maintains its own part of the AM\(^5\). Moreover, each application server processes events belonging to a fixed set of subscribers. In particular, all the events of a specific subscriber are always forwarded to the same server, which is responsible for maintaining his/her profile.

For this approach we have decided again to experiment with in-memory hash tables: a local hash table\(^7\) as well as Memcached [14]. After running some microbenchmarks we decided to exclude Memcached, as the latencies we obtained were far beyond the specified limits. Given that tuning Memcached's performance is not part of this thesis, we stick to the local hash table.

---

\(^5\)Horizontal partitioning based on the subscribers' identifiers.
\(^6\)Network latency dominates.
\(^7\)A lock-free, thread-safe hash table implemented in C++.
As a matter of fact, scaling the system and implementing fault tolerance cannot be done off the shelf. Scalability requires re-organizing the subscriber group of each server, while fault tolerance requires some form of backup. Modelling and implementing the system's fault tolerance is not in the scope of this thesis.

2.3 Updating the AM

In this section we illustrate the procedure we follow for updating the AM when an event arrives at the SEP system. Retrieving the record of the subscriber that generated the event is the very first step of algorithm 1. Then, we update all the attributes forming the AM record one after the other. For doing so, we make use of the current event's attribute values as well as the information linked to every AM attribute. The properties (value and window type) characterizing an attribute are essential for its proper update. At that point, the updated AM record of the underlying subscriber is held. Following algorithm 1, we try to write the newly updated record back to the storage layer\(^8\). We use a conditional write for this operation, which returns 'false' if the record was modified by another thread/process in the meantime. This way we ensure that updates are performed only on the most up-to-date subscriber record. If a write failure occurs, we repeat the entire procedure. Otherwise, the algorithm terminates, returning the record.

\textbf{Algorithm 1} Update the Analytics Matrix
\begin{verbatim}
function UpdateAM(event)
    repeat
        record <- storage.read(event.callerId)
        for each attribute in record do
            attribute.update(event)
        end for
        ok <- storage.conditionalWrite(event.callerId, record)
    until ok = true
    return record
end function
\end{verbatim}

\(^8\)RamCloud and the local hash table can be employed interchangeably for maintaining the AM. Both engines implement the same API.

3 Campaigns

In this section the reader finds a detailed description of campaigns.
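Returning briefly to the read-update-conditional-write loop of Algorithm 1 above, it can be sketched as follows. This is an illustrative sketch, not the thesis code: the versioned toy store (standing in for RamCloud or the local hash table) and the single tumbling-window sum attribute are assumptions:

```python
# Toy versioned store with a conditional (compare-and-swap) write,
# standing in for RamCloud / the local hash table of section 2.2.
class Store:
    def __init__(self):
        self.data = {}                      # key -> (version, record)

    def read(self, key):
        return self.data.get(key, (0, {"ts": 0, "calls_this_week": 0}))

    def cond_write(self, key, version, record):
        cur = self.data.get(key, (0, None))[0]
        if cur != version:                  # modified in the meantime
            return False
        self.data[key] = (version + 1, record)
        return True

WEEK = 7 * 24 * 3600                        # window size in seconds

def update_am(store, event):
    """Algorithm 1: read record, update attributes, conditional write, retry."""
    while True:
        version, record = store.read(event["caller_id"])
        record = dict(record)
        # Tumbling-window update for one sum attribute. Windows are
        # aligned to epoch weeks here for brevity, not Monday 00:00.
        if event["ts"] // WEEK != record["ts"] // WEEK:
            record["calls_this_week"] = 0   # new window: reset the value
        record["calls_this_week"] += 1
        record["ts"] = event["ts"]
        if store.cond_write(event["caller_id"], version, record):
            return record                   # write succeeded, no retry

store = Store()
for ts in (100, 200, 100 + WEEK):           # third call falls in the next week
    rec = update_am(store, {"caller_id": 7, "ts": ts})
```

The retry-on-failed-write loop makes the update lock-free: concurrent updaters never block each other, and a loser simply re-reads the fresher record and reapplies its event.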
As mentioned before, a campaign is defined by a pair (rule, reward), and it only fires when a subscriber profile meets the condition implementing the rule.

3.1 Properties

Every campaign is characterised by a set of properties, depicted in figure 4. A comprehensive description of these properties follows.

Figure 4: Campaign properties

3.1.1 Validity Period

The Validity Period corresponds to the time interval within which the campaign is active. A range [valid from, valid to]\(^9\) suffices for declaring this kind of information.

3.1.2 Firing Policy

This property indicates whether a campaign can fire multiple times (no firing restrictions), or whether there are constraints enforcing it to fire only once within a time period. Two sub-properties compose the Firing Policy property. The first one is called Firing Interval and defines the interval in which a campaign can fire (e.g. 0 = always, 1 day, 1 week, etc.), whereas the second (named Firing Start Condition) declares whether this interval moves in a fixed or a sliding manner. The following examples demonstrate how these sub-properties affect a campaign's behavior:

- **If** Firing Interval: 1 day and Firing Start Condition: fixed **then** the campaign can fire each day only once, between 00:00 and 23:59
- **If** Firing Interval: 1 day and Firing Start Condition: sliding **then** the campaign can fire if it did not fire within the past 24 hours

The Firing Policy property has only been modelled, not yet implemented. For the time being, we assume that all campaigns have no firing restrictions, meaning that they fire as long as their condition is fulfilled.

\(^9\)Both bounds have Timestamp (combined Date and Time information) data type.

3.1.3 Condition

This property denotes what has to be fulfilled for a campaign to fire. The Condition can be considered as a boolean formula consisting of predicates on AM attributes, event attributes and static subscriber data.
The following examples give the reader an idea of different Conditions:

- $[\text{number of calls today} \geq 2]$ OR $[\text{longest call this week} > 100 \text{ minutes}$ AND $\text{event.place-of-caller LIKE "ZUR\%"}]$
- $[\text{number of long-distance calls this week} > 5$ AND $\text{total cost today} > 4]$ OR $[\text{number of local calls this week} > 20]$

The Condition property will be thoroughly explored in Section 3.2. There, the reader can find examples illustrating the way we have modelled this property, as well as the functionality we support in the current version of the system.

3.1.4 Other

Other corresponds to a category rather than a single property. It wraps properties of campaigns that are not taken into consideration in this thesis. We have come across two of these properties:

- Action
- AM attributes initialization

The former refers to the consequence of a campaign that fired; for instance, the subscriber can get a discount of 10% for the next call. Regarding the latter, two options exist: "historic" or "fixed" initialization. With "historic" initialization, every time a new campaign is registered to the system the values of the AM attributes are derived from the subscriber's history. With "fixed" initialization, default values are assigned to the attributes, e.g. we set "Cost this week" to 0.

3.2 Implementation

In this section, the Condition property, which constitutes the most fundamental part of a campaign, is examined in detail. As explained in 3.1.3, this property determines under which circumstances a campaign can fire. The way we model the Condition and the data structures we make use of are studied in 3.2.1 and 3.2.2 respectively.

3.2.1 Representation

Having in mind that the evaluation of a campaign condition should be immediate, we choose to model it as a tree-like structure in Disjunctive Normal Form\(^{10}\) (DNF) [15].
What is more, at the root of this tree we place the OR clause, which can have one or more children. At depth one we have AND clauses, each with one or more children too. Finally, leaf nodes are designated to store predicates. As a result, a campaign condition is formed of a number of Conjuncts (i.e. AND clauses) joined with OR clauses, and each Conjunct accommodates a number of predicates.

At this point let us explain what a predicate looks like. A Predicate is defined by its three children and denotes a comparison between the Attribute and the Constant:

- Attribute: AM attribute or current event attribute
- Operator: $<, \leq, =, \geq, >, \text{LIKE}$\(^{11}\)
- Constant

The following campaign condition example, along with Figure 5, provides a better understanding of our condition modelling.

Condition: $[\text{number of calls per week} > 1 \text{ AND total money spent per day} \geq \$30] \text{ OR } [\text{number of non local calls per month} > 2]$

Figure 5: Tree representation of a Campaign Condition

Considering the Condition as a statement in DNF helps us speed up the evaluation procedure. To be more precise, given the fact that this statement is a disjunction of conjunctions, the first time a Conjunct evaluates to true\(^{12}\) we can exit evaluation, as the campaign condition has already been met. This way we need not examine all the other Conjuncts. Additionally, as we mentioned before, a Conjunct is a sequence of Predicates combined with AND clauses, and thus Conjunct evaluation aborts as soon as a Predicate's comparison results in false.

In addition to Predicates, a Conjunct also stores the identifier of the campaign it belongs to. This way, when a Conjunct evaluates to true, we know which campaign has been triggered.

\(^{10}\)We do not allow for negations in our DNF modelling.

\(^{11}\)Currently, we only support the operators $\{<, \leq, =, \geq, >\}$.
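The DNF tree and its short-circuit evaluation can be sketched as below. This is a minimal sketch with illustrative names, restricted to the currently supported comparison operators (no LIKE, no negation); `all()`/`any()` provide exactly the early-exit behaviour described above.

```python
import operator

# Comparison operators supported by the condition model.
OPS = {"<": operator.lt, "<=": operator.le, "=": operator.eq,
       ">=": operator.ge, ">": operator.gt}


class Predicate:
    """Leaf node: comparison between an attribute value and a constant."""
    def __init__(self, attribute, op, constant):
        self.attribute, self.op, self.constant = attribute, OPS[op], constant

    def evaluate(self, profile):
        return self.op(profile[self.attribute], self.constant)


class Conjunct:
    """AND clause; also stores the id of the campaign it belongs to."""
    def __init__(self, campaign_id, predicates):
        self.campaign_id, self.predicates = campaign_id, predicates

    def evaluate(self, profile):
        # AND semantics: all() aborts at the first false predicate.
        return all(p.evaluate(profile) for p in self.predicates)


def condition_holds(conjuncts, profile):
    # OR semantics at the root: any() exits at the first satisfied conjunct.
    return any(c.evaluate(profile) for c in conjuncts)
```

For instance, the condition of Figure 5 would be built as two `Conjunct` objects, one holding the two predicates on calls per week and money spent per day, the other holding the single predicate on non-local calls per month.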
3.2.2 Data Structure

In section 3.2.1 we elaborated on the reason we follow the DNF representation for the campaign condition. According to this modelling, a condition can be considered as a list of arrays, where every array stands for a conjunct and contains an arbitrary number of predicates.

We assume that a predicate may belong to more than one conjunct, and as a consequence to multiple campaigns. Because of this, predicates are represented globally in the system. In other words, we maintain an array containing all the predicates existing in a deployment of the system. Therefore, every conjunct holds references to predicates, rather than the actual predicates. Figure 6 illustrates this global representation for two conditions that have a number of predicates in common. There, one can easily notice that conditions have the shape of lists, and that they share predicates (i.e. the predicates with identifiers 3 and 4).

\(^{12}\)A Conjunct evaluates to true when all its Predicates result in true.

4 Campaign Index

The second crucial component of the "Materialized View" approach is the so-called Campaign Index. This index is campaign-driven, meaning that it does not exist prior to campaigns and is built with the aim to facilitate their evaluation. Ideally, the Campaign Index will assist us in finding matching campaigns in $O(\log(n))$, rather than $O(n)$, where $n$ is the number of campaigns.

4.1 Motivation

The Stream and Event Processing system this thesis describes resembles a publish/subscribe system, as it manages a stream of incoming user-generated events and a set of rules that must be evaluated against this stream. As [9] proposes, grouping subscription predicates based on their common attributes is advantageous, since many subscription predicates can be evaluated at the same time. We assume that there exist AM attributes (e.g. number of calls this week, total cost today, etc.)
that appear in conjuncts, and thus in campaigns, very frequently. Moreover, we assume that only a few of these attributes exist in the system. We name these few attributes Entry Attributes and the predicates involving them Entry Predicates. Following the idea of [9], we believe that building indexes (called Entry Indexes) on Entry Predicates is beneficial for the campaign evaluation procedure, since these indexes function as a filter pruning the campaign space.

Regarding the selection of Entry Predicates, we would like to point out that finding a set that is minimal and at the same time covers as many conjuncts as possible is not trivial. One option is to fix the number of Entry Attributes and then try to find the best coverage, that is, to locate the desired number of AM attributes participating in the most conjuncts. A different way to come up with a set of Entry Attributes is to specify the percentage of conjuncts that must be covered (i.e. contain an Entry Attribute) and afterwards find a minimal set that reaches the specified coverage. For the time being, we simply define this set in advance and we build our campaigns in such a way that most of their conjuncts contain Entry Attributes.

4.2 Building the Index

The following sections clarify the notion of indexing a predicate and present the data structures we have considered, respectively.

4.2.1 Indexing a Predicate

In section 3.2.1 we mentioned that a predicate is a triple consisting of an AM attribute, a constant and an operator $\{<, \leq, =, \geq, >, \text{LIKE}\}$. Consequently, different kinds of predicates (range, equality and LIKE predicates) exist in the system. Nevertheless, as of this moment, we only consider range predicates\(^{13}\). Dealing with range predicates enables us to use an ordered tree data structure able to hold intervals.

\(^{13}\)We will study the other types of predicates in a future phase.
For this reason we decide to use 1-dimensional R-trees (interval trees) [10] for indexing the entry predicates appearing in the system. This data structure allows one to efficiently find all intervals that overlap with any given value. As far as the R-tree's implementation is concerned, we use a cache-conscious, extremely space-efficient, packed, static, 1-dimensional R-tree provided by the Crescando [16] team.

Indexing a predicate boils down to finding the interval for which the range predicate results in true, and to inserting it, along with the respective value, into the entry index. More precisely, we use the interval's bounds as the key of the record we index. Furthermore, the value associated with this compound key ([lower bound, upper bound]) is the conjunct containing this predicate (recall that every predicate is part of a conjunct). Bearing in mind that a predicate may belong to multiple conjuncts, we opt to store a list of conjuncts instead of a single conjunct per key. So, given a value, an entry index reports all the lists of conjuncts whose keys overlap with this value.

4.2.2 Data Structures

The proposed Campaign Index is composed of two parts:

- Entry Indexes
- Unindexed Conjuncts

Regarding the former, we maintain an array of entry indexes with as many places as the number of entry attributes in the system\textsuperscript{14}. Each entry index is dedicated to one entry attribute and is responsible for indexing (as described in section 4.2.1) all predicates containing this attribute. The Unindexed Conjuncts part refers to an array of conjuncts, where we store conjuncts involving non-entry predicates. This means that a conjunct is either placed at a leaf of an entry index, or it resides in the Unindexed Conjuncts. During campaign index building, a campaign is decomposed into its conjuncts and every conjunct is in turn broken down into its predicates.
In the case of an entry predicate, we extract the matching interval (the interval for which the predicate's comparison is true), and after pairing it with the conjunct this predicate is part of, we insert them into the entry index. Otherwise, the conjunct is pushed back to the Unindexed Conjuncts.

For the scope of this thesis we consider the Campaign Index component as immutable, in the sense that it is built offline (before the system starts serving events) and no campaigns can be added or removed afterwards\textsuperscript{15}.

A complete example showing a snapshot of the Campaign Index for a set of campaigns is presented in Figure 7. With the aim to keep the example simple, we choose to have only one entry attribute (i.e. attribute x), and hence one entry index (built on top of attribute x). The reader can observe how the contents of this entry index are structured. To be more specific, there are three entry predicates (P1, P2 and P3) participating in two, one and one conjuncts, respectively. Furthermore, in that set-up we have two unindexed conjuncts with one predicate each.

\textsuperscript{14}We do not build indexes for entry attributes that are involved in no predicates.

\textsuperscript{15}The possibility of adding campaigns dynamically will be introduced at a later stage.

4.3 Probing the Index

This section presents the algorithm (i.e. Algorithm 2) that is used for finding relevant campaigns that potentially fire as a result of an event. The illustrated function is invoked for every incoming event and takes the updated subscriber profile (AM record) as an argument. Additionally, the function returns the identifiers of the activated campaigns. One can easily observe that Algorithm 2 comprises the following two steps:

1. Find Candidate Conjuncts
2. Evaluate Candidate Conjuncts

Firstly, all entry indexes are probed using the AM record (profile) of the subscriber that incurred the event.
Specifically, every single index is queried with the value of the AM attribute this index is built upon. This way, conjuncts including entry predicates that do not match are ruled out without further examination\(^{16}\). By the end of Step 1 a list containing the Candidate Conjuncts is obtained. A conjunct holding an entry predicate is labeled as candidate when its entry predicate matches. Moreover, since we have no information with regard to unindexed conjuncts, they are always part of the Candidate Conjuncts.

In the second step of the algorithm we examine all the Candidate Conjuncts consecutively to find out which of them evaluate to true. The outcome of this procedure is the set of matching campaigns.

\(^{16}\)In case of an unsatisfied predicate, the conjunct holding it is not satisfied either.

Algorithm 2 Find Matching Campaigns
```
1: function MatchCampaigns(Profile)
2:     CandidateConjuncts ← Unindexed Conjuncts
3:     for each index in Entry Indexes do
4:         CandidateConjuncts ← CandidateConjuncts ∪ index.getCandConjs(Profile)
5:     end for
6:     Matching ← ∅
7:     for each conjunct in Candidate Conjuncts do
8:         result ← conjunct.evaluate(Profile)
9:         if result = true then
10:            Matching.append(conjunct.CampaignId)
11:        end if
12:    end for
13:    return Matching
14: end function
```

Figures 8 and 9 visualize the first and second part of the matching algorithm. In the former the set of Candidate Conjuncts is acquired (i.e. the dotted rectangles), whereas in the latter their evaluation takes place. In Figure 8 we probe the entry index with the value $x = 3$. Two entry predicates are satisfied by this input value, resulting in three conjuncts that must be further examined. Clearly, the unindexed conjuncts must also be evaluated. Therefore, we have to check five conjuncts in total. Given the values of the other AM attributes and the rules the predicates specify, we end up with two conjuncts evaluating to true. In the end, we report the campaigns with identifiers 2 and 3.
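The two steps of Algorithm 2 can also be sketched as a self-contained, runnable function. The data shapes are illustrative assumptions: an entry index is modelled as a flat list of (lower, upper, conjunct id) intervals per attribute (a stand-in for the 1-D R-tree stabbing query), and a conjunct is a (campaign id, predicate list) pair.

```python
import operator

# Comparison operators supported by the predicates.
OPS = {"<": operator.lt, "<=": operator.le, "=": operator.eq,
       ">=": operator.ge, ">": operator.gt}


def match_campaigns(profile, entry_indexes, unindexed, conjuncts):
    # Step 1: collect candidate conjuncts. Unindexed conjuncts are always
    # candidates; indexed ones qualify only if their entry interval contains
    # the profile's value for the indexed attribute.
    candidates = set(unindexed)
    for attribute, intervals in entry_indexes.items():
        value = profile[attribute]
        candidates |= {cid for lo, hi, cid in intervals if lo <= value <= hi}

    # Step 2: evaluate every candidate; a satisfied conjunct triggers
    # the campaign it belongs to.
    matching = set()
    for cid in candidates:
        campaign_id, predicates = conjuncts[cid]
        if all(OPS[op](profile[attr], const) for attr, op, const in predicates):
            matching.add(campaign_id)
    return matching
```

Note that the entry-index probe is only a filter: a candidate conjunct is still fully re-evaluated in Step 2, so an interval that slightly over-approximates its predicate cannot produce a false match.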
\(^{17}\)Recall that if a conjunct is true, the campaign including it triggers.

Figure 8: Find Candidate Conjuncts

Figure 9: Evaluate Candidate Conjuncts

5 Performance Evaluation

In this section we evaluate the performance of our Stream and Event Processing system and compare the alternative architectures (Separated Storage, Integrated Storage). Section 5.1 describes the benchmark and the workload the system is subject to. Section 5.2 details the system's setup. Finally, Section 5.3 presents the experiments we conducted along with their analysis.

5.1 Benchmark and Workload

We have implemented a workload generator that creates a number of Analytics Matrix attributes and a set of campaigns according to a workload specification. We store both the attributes and the campaigns in a MySQL [19] database (named meta-database) which is loaded at start time; based on this information we set up the data structures used during the execution of the program.

All attributes in the AM have a tumbling window type with a size of either one day or one week. Moreover, AM attributes can take all the possible value types (accumulative, selective). Applying all different combinations of window size, metric and aggregation function results in a set of 54 distinct attributes\textsuperscript{18}. Attributes such as:

- Number of local calls today
- Cost of long distance calls this week

exist in a deployment of the system. In addition to the AM attributes, the benchmark generates a number of campaigns\textsuperscript{19}. The campaigns we are experimenting with are constructed using multiple conjunctions/disjunctions on AM attributes. An example of a campaign condition that could exist in the system is:

\[
[\text{avg(local cost)} > 27.0 \land \text{sum(local duration)} < 62] \lor [\text{avg(total cost)} > 71.0 \land \text{max(long distance duration)} > 40 \land \text{number of calls} > 10]
\]

Conditions are generated in such a way that each conjunct most probably (i.e.
with probability 90%) contains a pivot attribute. A popular AM attribute occurring in many campaigns is called a Pivot. Our proposed campaign index is constructed from scratch by reading the meta-database. This operation is not performance-critical, because it takes place before we start serving events. The number of entry indexes equals the number of Pivot attributes, while the number of unindexed conjuncts depends exclusively on the number of campaigns.

As far as the workload is concerned, we assume a uniform distribution of events over subscribers. In addition, every subscriber incurs 28.8 billing events per day. An event is composed of a set of attributes such as caller-id, is-long-distance, duration, cost, etc. Due to the huge number of events, we choose to create and emit events on the fly, rather than reading and storing them in main memory\textsuperscript{20}. Every thread running on the application server is not only responsible for handling an event, but also for generating it. As a result, no network is involved for an event to enter the system.

\textsuperscript{18}Not all possible combinations make sense, e.g. max(number of calls).

\textsuperscript{19}A campaign comprises a number of properties as described in 3.1, but for now we only consider its condition.

5.2 Experimental Setup

We ran all experiments on workstations having two CPU sockets with 4 cores at 2.40GHz per socket and 128GB RAM, operating under Linux and communicating via InfiniBand. The number of AM attributes remains constant (i.e. 486 attributes) throughout the experimental session. This number corresponds to AM records with a size of approximately 3KB. To simulate this large number of attributes, we replicate each of the 54 unique attributes mentioned before several times (i.e. 9 times). Moreover, 20 pivot AM attributes exist in the system, entailing the existence of 20 entry indexes in the campaign index. Unless stated otherwise, 300 campaigns are active in the system.
Each one of these campaigns has at most five conjuncts, and every conjunct in turn consists of at most five predicates.

In this series of experiments we measure the throughput (events/s) and the average response time of the system. We also measure the quality of service, that is, the percentage of events that fulfil the specified time requirements (i.e. latency \(< 10\text{ms}\)). Timings are taken in microseconds and presented in milliseconds. Response time captures the whole event cycle (i.e. event generation, AM record update and campaign evaluation). Regarding the experiments' duration, we want to point out that all of them ran for 1 hour. During this time period, we first populate the database (RamCloud or the local hash table) with a default record\textsuperscript{21} per subscriber, and then the system starts serving events.

For all experiments but the scalability one, a single application server is used. As far as the storage layer is concerned, the separated storage approach occupies 3 RamCloud servers. The integrated storage, on the contrary, needs no RamCloud server, as it uses a local hash table. In the beginning of this section we noted that our workstations have 8 CPU cores. With the aim to avoid contention between threads and to keep latency low, we choose to bind each app server thread to a CPU core. Additionally, for the separated storage, 7 threads\textsuperscript{22} in a closed system are running simultaneously, while for the integrated storage all 8 threads can be exploited.

\textsuperscript{20}Otherwise, memory consumption would become an important issue.

\textsuperscript{21}This record consists of the default value for every AM attribute (i.e. Cost = 0).

\textsuperscript{22}One additional thread handles communication with RamCloud.

5.3 Results and Analysis

The first experiment evaluates the performance of the system with regard to the number of subscribers. As mentioned in 1.3, one of our goals is to maximize the
number of subscribers a machine can handle. The following two graphs illustrate the numbers we attained for throughput and response time, respectively. As far as Figure 10 is concerned, throughput remains essentially the same as the number of subscribers increases. Analogously, response time (Figure 11) does not change significantly with the number of subscribers. The reason for this roughly constant behaviour is that the number of subscribers does not alter the load that is put on the application server. Specifically, the event arrival rate depends only on the number of threads issuing events. Since this number does not change during execution, the load arriving at the system does not change either.

One can notice that the integrated storage approach outperforms the separated storage one. The fact that a machine demands more CPU cycles to issue an external request to RamCloud than to the local hash table explains this difference in performance. In conclusion, both architectures comply with the response time and throughput requirements specified in 1.3, and therefore both can be employed.

Figure 10: Throughput Comparison

Figure 11: Response Time Comparison

Figure 12 gives the reader an idea of our service quality subject to the number of subscribers. In other words, it demonstrates the percentage of requests that were completed within the specified time limits (i.e. 10ms). More than 99.9% of the events issued were answered in time, verifying that adopting either of the two architectures is feasible.

The next two experiments (Figures 13 and 14) depict how the system scales with the number of application servers, across all alternatives. Given that for the integrated storage approach each app server acts individually, adding a new machine does not impact the performance of the existing machines. This explains why we get linear scale-out for this architecture.
In the separated storage approach, on the other hand, a performance loss appears, due to the fact that every additional machine consumes shared resources, and hence affects the other machines' performance\textsuperscript{23}. The performance gain of the integrated storage architecture derives again from the little computational effort a request to the local hash table needs.

Figures 15 and 16 show that both architectures are sensitive to the number of campaigns. Clearly, these results follow our natural intuition: the more computational work the system must carry out, the longer an event service lasts. Consequently, throughput decreases with the number of campaigns, and correspondingly response time increases.

Being interested in comparing our alternatives to a baseline, we consider a system configuration in which no entry indexes exist. In that case, our campaign index comprises only the unindexed conjuncts component. Regarding the probing procedure, all conjuncts must be evaluated one after the other to find the matching campaigns. This brute force approach eliminates the advantage that entry indexes give to the system, as no conjunct can be excluded from evaluation\textsuperscript{24}. Tables 2 and 3 compare the throughput we measured for the different configurations subject to the number of campaigns. The last column (i.e. Diff) of both tables shows how much better the proposed campaign index performs than having no index.

\textsuperscript{23}In the integrated storage, an app server maintains part of the AM. Given that a machine has 128GB RAM and an AM record has a size of 3KB, it becomes clear why we cannot handle more than 30 million subscribers, and that is why we have a missing value.
Figure 15: Throughput Comparison

Figure 16: Response Time Comparison

Table 2: Throughput (events/s) Comparison for the Separated Storage

<table> <thead> <tr> <th>Campaigns</th> <th>Separated with Index</th> <th>Separated no Index</th> <th>Diff [%]</th> </tr> </thead> <tbody> <tr> <td>100</td> <td>60,392</td> <td>61,387</td> <td>-1.62</td> </tr> <tr> <td>200</td> <td>56,747</td> <td>57,496</td> <td>-1.30</td> </tr> <tr> <td>300</td> <td>51,710</td> <td>51,438</td> <td>0.53</td> </tr> <tr> <td>1,000</td> <td>36,473</td> <td>37,108</td> <td>-1.71</td> </tr> <tr> <td>10,000</td> <td>8,050</td> <td>5,244</td> <td>53.49</td> </tr> </tbody> </table>

Table 3: Throughput (events/s) Comparison for the Integrated Storage

<table> <thead> <tr> <th>Campaigns</th> <th>Integrated with Index</th> <th>Integrated no Index</th> <th>Diff [%]</th> </tr> </thead> <tbody> <tr> <td>100</td> <td>258,268</td> <td>269,479</td> <td>-4.16</td> </tr> <tr> <td>200</td> <td>183,829</td> <td>195,160</td> <td>-5.81</td> </tr> <tr> <td>300</td> <td>147,353</td> <td>152,105</td> <td>-3.12</td> </tr> <tr> <td>1,000</td> <td>58,988</td> <td>57,889</td> <td>1.89</td> </tr> <tr> <td>10,000</td> <td>9,418</td> <td>5,802</td> <td>62.31</td> </tr> </tbody> </table>

\textsuperscript{24}Conversely, in case of entry indexes, we only evaluate conjuncts whose entry predicate matches.

Furthermore, Figure 17 compares average response times for the various configurations. It turns out that for a small number of campaigns (up to 1,000) the brute force approach performs slightly better than the one with the entry indexes, but from a certain point onwards the campaign index pays off (up to 62.31% better for the Integrated Storage architecture).
This performance difference (for a large number of campaigns) is explained by the fact that the system is obliged to evaluate all the existing conjuncts in the brute force configuration, while, in the case of the campaign index, a significant number of them is not examined.

For the use case\textsuperscript{25} that inspired this thesis, it seems that evaluating all campaigns sequentially is sufficient (since we deal with a relatively small number of campaigns) and no extra space consumption (for building the entry indexes) is necessary. Nevertheless, other use cases, where the number of campaigns/rules/subscriptions is considerably larger (on the order of tens or hundreds of thousands) than in the underlying use case, lead us towards the campaign index direction.

Figure 17: Response Time Comparison

\textsuperscript{25}A system encompassing a relatively small number (for instance 300) of campaigns that should be evaluated against a stream of user-generated events.

6 Conclusion and Outlook

In this document we proposed a system that can efficiently execute real-time processing tasks on a stream of incoming events. The idea arose from a telecommunication company interested in managing subscriber-generated events, with the aim to extract valuable information out of them. Our system, named SEP, is responsible for maintaining aggregated information (statistics) deduced from subscriber billing events, and for matching this information against a number of rules residing in it. We have optimized the mechanism for maintaining the statistics in such a way that information is added incrementally, without the need to revise a subscriber's history (past events). As far as the matching task is concerned, SEP implements a memory-based algorithm relying on tree data structures that allow for quick examination of rules. In addition to the theoretical analysis, a set of experiments depicting the performance of the SEP system was presented.
Moreover, the experimental results act as a proof of concept for the feasibility of our architecture. Specifically, the "Materialized View" approach that SEP follows ensures a high throughput (tens of thousands of events per second) and an extremely low response time.

While working on this project, a lot of ideas have arisen with respect to future developments. Some of them go in the direction of introducing new functionality to the existing system (for example, other window types), while others propose alternative ways to address the entire problem. Clearly, the very first thought coming to our minds involves replacing the "Materialized View" approach studied in this thesis with a data stream management system. This decision entails implementing every campaign as a continuous query and modelling events as a data stream. It is still debatable whether the ease of using a stream engine compensates for the lack of memory management control. Stream engines, such as Esper or Storm, provide users with out-of-the-box solutions, but their support comes at a price: no access to internal state and data is allowed.

Migrating the event generation task from the application server to a client layer would be an important improvement to the current system. This amendment would give us the opportunity to measure the real time\textsuperscript{26} SEP needs for serving an incoming event, and it would also aid us towards a separation of concerns.

Introducing more features to the current SEP system is something we are also aiming at; in this direction there is still a lot of room for improvement. In our view, implementing the campaign properties (section 3.1) is a high-priority goal, since it would bring us a step closer to realistic use cases. Adding constraints would undoubtedly prolong the campaign evaluation time, and would also require auxiliary data structures (e.g. maintaining one more table in RamCloud). Other complications should also be taken into account.
Matching a campaign condition will not necessarily indicate that a campaign has been triggered; firing restrictions should also be examined before we report that a campaign matches.

Further features we could consider are:

- implementing more complicated window types (i.e. stepwise windows)
- making the system more dynamic by allowing new campaigns to be registered, or existing campaigns to be deleted, during execution
- enhancing the dynamic nature of the system by modifying (inserting or deleting) the set of AM attributes while the system is up and running
- maintaining subscriber information and using it in campaigns (e.g. LIKE operations)

Going one step further, from our point of view, the ultimate goal is the combination of the SEP and RTA\(^{27}\) (section 1.2) systems with the aim to build the full-fledged AIM system. This system should comply with the AIM specification and be able to deal with the workloads of both sub-systems.

We believe that the system we came up with lays the groundwork for more complex systems sharing the same characteristics as the ones presented in this thesis; namely, systems that are fed with intense workloads and must apply simple, yet not trivial, computations on extremely large amounts of data while ensuring high event rates and low latencies. As data has grown in size and complexity, and time restrictions have become stricter, new solutions must be employed for dealing with these conditions. What we consider the most important contributions of this thesis are the way we treat aggregations of information, and the abstractions we follow for modelling the rules. These general principles can be used for sifting through volumes of data to produce knowledge and detect patterns.

\(^{26}\)Measurements will include receiving a request and sending back the response via the network (TCP/IP).

\(^{27}\)Upon completion of the Real-Time Analytics system.

References
Assignment statements in procedural languages generally include the assignment of values to a limited class of expressions (such as subscripted arrays). It is the purpose of this paper to generalize the notion of assignment by proceeding along the lines of Schwartz's "Sinister Calls" [Schwartz 71]. The topics set forth below are motivation, technical details, and useful examples. The technical details include several abstract definitions. The useful examples include some surprises (like \texttt{let} \((1 \leq \forall j < n \mid A(j) \leq A(j+1))\) expanding into a bubble sort).

In most languages certain constructs select parts of a data structure, but values cannot be assigned to these parts. For example, the APL assignment '(1 1 ⍉ M) ← E' should clearly mean 'assign the vector E to the diagonal of the matrix M'. Unfortunately APL permits only names and subscripted arrays to be assigned values. Any selection expression should be permitted because otherwise a design rule called 'programming generality' is violated: a construct should be permitted whenever it makes sense.

The designers of Algol 60 defined the for-statement in the following way: "Step-until-element. A for element of the form A step B until C, where A, B, and C are arithmetic expressions, gives rise to an execution which may be described most concisely in terms of additional Algol statements as follows:

```plaintext
V := A;
L1: if (V − C) × sign(B) > 0 then go to Element exhausted;
    Statement S;
    V := V + B;
    go to L1;
```

where V is the controlled variable of the for clause and Element exhausted points to the evaluation according to the next element in the for list, or if the step-until-element is the last of the list, to the next statement in the program." [Naur et al. 60: 308]

They had no idea at all that this definition had any flaws (so says Perlis). (What does 'for A[NEXT] := 1 step INC until 100 do;' mean where NEXT and INC are parameterless integer procedures? How many times are these procedures to be called?
The intuitively consistent definition would call the procedures once each time the loop is to be executed.) Unfortunately, implementers took the definition literally. A verbatim implementation of the definition would require that the label 'L1' not be used in certain places because the definition uses it. The macro expansion defined in this paper would treat such definitions in an intuitively consistent fashion.

HOW?

The technical details of various macro schemes are presented below. There will be many neologisms defined in this paper. As Dodgson once remarked, "... any writer of a book is fully authorized in attaching any meaning he likes to any word or phrase he intends to use". [Dodgson 97: 165] The meanings of these neologisms will appear below in logical order and in the glossary in alphabetic order.

Blocks, statements, and expressions are all syntactic classes. A block is a sequence of statements, each followed by a semicolon. In some of its uses it may be followed by an ending comprised of zero or more appropriate visual cues. Since the null string is not a statement, two semicolons in succession mark the end of a block. Statements include assignment statements, go to statements, macro definitions, conditional statements, and generally anything which is defined by a macro definition. Expressions are constants having a priori values, program variables which hold them, and anything which is defined by some definition of assignment. The meanings of VAR=CONSTANT and VAR=VARIABLE are given a priori.

The naive macro scheme is defined in terms of the notions of similarity and simultaneous substitution. A parameter is a name which, if related to the name of a syntactic class, is assumed to be a member of that class (BLOCK, STMT, ...); otherwise, it is assumed to be an expression. A correspondence is a mapping from parameters to expressions. Such a mapping can be applied to a phrase by replacing each parameter in the phrase by its corresponding expression.
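A token-level sketch of these two notions may help fix the ideas. This is not the paper's processor; it assumes (purely for illustration) that parameters are the uppercase tokens of a form, that the phrase being matched contains no parameters, and that each parameter occurs once in the form.

```python
def substitute(phrase, corr):
    """Simultaneous substitution: replace every parameter of `phrase` by
    its corresponding token sequence, with no re-scanning of the result."""
    out = []
    for tok in phrase:
        if tok in corr:
            out.extend(corr[tok])
        else:
            out.append(tok)
    return out

def match(form, phrase):
    """Find a correspondence making `phrase` similar to `form`, or None.
    A parameter greedily absorbs tokens up to the next literal of the form."""
    corr = {}
    i = 0
    for j, tok in enumerate(form):
        if tok.isupper():                      # a parameter
            stop = form[j + 1] if j + 1 < len(form) else None
            start = i
            while i < len(phrase) and phrase[i] != stop:
                i += 1
            corr[tok] = phrase[start:i]
        else:                                  # a literal token must match
            if i >= len(phrase) or phrase[i] != tok:
                return None
            i += 1
    return corr if i == len(phrase) else None

# Expanding `do x = x + 1;` against the form `do S;` with body `L0: S; go to L0;`
corr = match(["do", "S", ";"], ["do", "x", "=", "x", "+", "1", ";"])
assert substitute(["L0", ":", "S", ";", "go", "to", "L0", ";"], corr) == \
       ["L0", ":", "x", "=", "x", "+", "1", ";", "go", "to", "L0", ";"]
```

The correspondence recovered by `match` is exactly what the naive processor applies to the stored BLOCK before generating statements.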
This is called simultaneous substitution. Two phrases are similar if, for some correspondence, they are identical under simultaneous substitution, applying the correspondence to each of them. This meaning is perhaps too general -- successive restrictions which are acceptable are: 1) no parameter occurs in both phrases, 2) one of the phrases has no parameters, 3) each parameter occurs exactly once in the other phrase. The uses of similarity in the various macro schemes below all comply with these restrictions.

The processor for the naive scheme must recognize statements of the form 'macro BFORM; BLOCK ENDING;', remember them, and process other statements by finding definitions for them in which BFORM and 'STATEMENT;' are similar blocks. The correspondence which makes them similar is applied to BLOCK and each statement of the resulting block is generated. Statements for which the macro processor has no definition are not processed by it any further. Each generated statement is processed in turn. The following statement should make naive a synonym of macro:

```plaintext
macro naive BFORM; BLOCK ENDING;;
macro BFORM; BLOCK ENDING;
end naive
```

It should be included in a test deck for any implementation.

A statement is expandable if it has an a priori meaning or if each statement generated by its macro expansion is expandable. Let X be a program variable. Then an expression E is trievable if the statement X=E is expandable. It is storable if E=X is expandable. A register is any expression which is both trievable and storable.

The intent of a program is a collection of intended assertions about the program and assumptions about the data. An assertion is valid if it can be derived from these assumptions. A program is valid if each intended assertion is valid. The data is valid if it complies with the assumptions. A statement in a program is valid if it is expandable and the program is valid. The assertions posed but not intended may be called the extent of the program. (E.g.
"this program requires at most..."). An optimizer will modify a program in such a fashion as to influence its extent without damaging its validity. Such a modification has no net effect if the original program is valid exactly when the modified version is valid. A statement in a valid program is superfluous if removing it has no net effect. Another statement is equivalent to it if replacing it with the other statement has no net effect. Two adjacent statements in a valid program commute if reversing their order has no net effect.

Every programming language has a domain of values $V$, which are indestructible, a family of mappings from $V^n$ into $V$, and a countable set of program variables, each of which may assume any value in the domain. Given program variables $X_1, ..., X_n$ and a mapping $f: V^n \rightarrow V$, then $f(X_1, ..., X_n)$ is a function which may or may not have a simple representation in the programming language. A function $f(X_1, ..., X_n)$ is safe between points $L_1$ and $L_2$ in a program if the assertion is valid that \((\forall t)\,((f(X_1, ..., X_n) = t \text{ at } L_1) \supset (f(X_1, ..., X_n) = t \text{ at } L_2))\). It is safe over any phrase which has one entry point and one exit point if it is safe between those points. A constant is a function which is safe between any two points in the program. A function is a subfunction of another if it is safe whenever the other is safe.

A trievable expression $E$ is conformable to a storable expression $S$ whenever $S=E$ is a valid statement. Let $X$ and $Y$ be program variables, and let $Q$ be a register. $X$ conforms to $Q$ whenever $X=Q$ is superfluous following a valid statement $Q=X$. Whenever $Y=Q$ is superfluous following \( Q = X \) and, for some function \( f(X_1, \ldots, X_n) \), the assertion that \( Y = f(X_1, \ldots, X_n) \) is valid following \( (Q = X; Y = Q;) \), then \( Q \) is retrievable, and \( f(X_1, \ldots, X_n) \) is its transfer function.
(\( X \) may or may not be among \( X_1, \ldots, X_n \).) A register \( Q \) is restorable whenever the statement \( Q = X \) is superfluous following a valid statement \( X = Q \). Whenever \( Q \) is also retrievable, it is a field. Consider the following trieval and storage operations:

```plaintext
macro X = value(Y); X = Y; end =value;
macro value(Y) = X; Y = X; end value=;
```

A trievable expression \( E \) is a defined function whenever \( \text{value}(E) \) is a field. In assertions, references to the transfer function \( f(X_1, \ldots, X_n) \) of \( \text{value}(E) \) may be abbreviated as \( E \), or as \( X_{i_1} \ldots X_{i_k} \leftarrow E \) should \( X_{i_1} \ldots X_{i_k} \) not be explicitly named in \( E \). If \( \text{value}(E) \) is not a field, then \( E \) is said to have side effects.

Two functions are isomorphic if each is a subfunction of the other. Theorem: Isomorphism is an equivalence relation. The data space of a function is its equivalence class under isomorphism. Any property of functions which is preserved under isomorphism applies equally well to data spaces. Data spaces may be safe, constant, equal (isomorphic), subspaces (subfunctions), etc. If every common subspace of two data spaces is a subspace of some particular common subspace, then this particular subspace is their intersection (which is uniquely determined). The union of two data spaces has a similar meaning.

Theorem: If \( \langle X, Y \rangle \) is the pairing function (CONS in LISP) and \( F, G \) are functions, then the union of their data spaces is the data space of \( \langle F, G \rangle \).

Theorem: The closure of the set of all defined functions of a valid program under union and intersection is a lattice under \( \subseteq \), the subspace relation. The superior node of this lattice is the data base of the program and the inferior node is its constant space.
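The register laws can be checked mechanically. The sketch below simplifies the transfer function to the identity and models an expression as a hypothetical (trieve, store) pair of callables; the example field is the matrix diagonal from the APL motivation.

```python
def is_field(trieve, store, probe_values, snapshot):
    """Check the field laws on a (trieve, store) pair over some probes:
    retrievable: after storing v, trieving yields v (identity transfer);
    restorable: storing what was just trieved has no net effect on state."""
    for v in probe_values:
        store(v)
        if trieve() != v:              # Q = X; Y = Q should give Y = X
            return False
        before = snapshot()
        store(trieve())                # X = Q; Q = X should be superfluous
        if snapshot() != before:
            return False
    return True

# The diagonal of a matrix, as in '(1 1 ⍉ M) ← E', behaves as a field:
M = [[0, 1], [2, 3]]

def trieve_diag():
    return [M[i][i] for i in range(len(M))]

def store_diag(E):
    for i, e in enumerate(E):
        M[i][i] = e

assert is_field(trieve_diag, store_diag,
                [[9, 8], [7, 6]], lambda: [row[:] for row in M])
```

An expression whose storage operation loses or scrambles information would fail the restorable check, which is one way to detect side effects in the paper's sense.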
Two data spaces are independent if their intersection is the constant space. Otherwise they overlap. We write \( D_1 \mid \ldots \mid D_n \) to mean that \( D_1, \ldots, D_n \) are mutually independent data spaces. A block is restricted to a data space if every data space independent of it is safe over the block. A data space is live at some point in a program if putting some block restricted to that data space at that point would have a net effect. It is dead otherwise. A data space is marred by a block if it is not safe over the block. If every subspace of a data space is either marred by a block or dead on entry to the block, then that data space is mashed by the block. Point L2 in a program cannot be reached from point L1 if (True at L1 ⊃ False at L2) is valid.

Theorem: A data base is dead at some point in a program if it is mashed before any defined function is retrieved for which the data base is a subspace of the given data space.

These observations may be amusing:

(1) A field F may be made safe over a block by using the macro

```plaintext
macro ⊢ save F across BLOCK; ⊣ TEMP;
TEMP=F; BLOCK F=TEMP; end save;
```

(2) Functions inherit properties of their data spaces; fields are functions.
(3) The constant space is forever dead.
(4) The program variables are mutually independent fields.
(5) A field may be used as a temp in a block if it is dead before and dead after the block.
(6) An optimizer or a garbage collector may deallocate a dead field unless it is a constant (never throw nil away!). Every time a garbage collector is invoked, it must be restricted to some data space which is dead at the point of invocation.

neomacro definitions

Naive macro definitions have a very simple interpretation but a very complicated, unintuitive behavior. Not only are some expressions evaluated many times, but local names may interfere with the interpretation in some common cases.
We will first modify the naive expansion scheme to include provision for local variables; then we will invent a new scheme of protected definitions which provides for global variables, frozen variables, and shorter expansions as well. These definitions can be compiled like procedures or expanded in line. Later we will define two more schemes, the symmetric definition (which is just an abbreviation) and the expression definition (which connects function definition to statement definition). All these inventions will be defined in terms of naive macros.

In order to provide for the introduction of unique names to an expansion, we make a block similar to a form (FORM) also similar to \( \vdash \text{FORM} \dashv \text{VARS} \), where VARS is a list of names not occurring in FORM. We make these names correspond to variable names unique to that expansion: names guaranteed not to occur elsewhere in the program. Consider this definition, call, and possible expansion:

```plaintext
macro ⊢ do S; ⊣ L0;
L0: S; go to L0;;

do do L0 + 1;

L0607: L0608: L0 + 1; go to L0608; go to L0607;
```

Indeed the names correspond in an intuitive way. This device is analogous to the local statement of IMP [Irons 70: 33]. In fact, the local statement could be defined by:

```plaintext
macro ⊢ local VARS in BLOCK; ⊣ DoIt;
macro ⊢ DoIt; ⊣ VARS; BLOCK; end DoIt;
DoIt; end local;
```

An expression \( f(e_1, \ldots, e_n) \) is similar to a form \( f(x_1, \ldots, x_n) \) when each subexpression \( e_j \) corresponds to a variable \( x_j \) of the form.
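The unique-name device is what later traditions call a gensym. A sketch of the `do` expansion with fresh labels (the counter's starting point merely echoes the L0607 of the example above; the token representation is illustrative):

```python
import itertools

_counter = itertools.count(607)

def fresh(name):
    """Return a name guaranteed not to occur elsewhere in the program:
    one fresh instance per expansion of each local name."""
    return f"{name}{next(_counter):04d}"

def expand_do(stmt):
    """Expand `do S;` into `L0: S; go to L0;` with the local L0 renamed."""
    label = fresh("L")
    return [label, ":"] + stmt + [";", "go", "to", label, ";"]

first = expand_do(["x", "=", "x", "+", "1"])
second = expand_do(first)        # nesting: each expansion gets its own label
assert first[0] != second[0]     # no capture between the two expansions
```

Because the user's own tokens are never touched, a program variable literally named `L0` passes through unharmed, which is the whole point of the device.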
Local variables \( y_1, \ldots, y_m \) and global variables \( z_1, \ldots, z_k \) are augmented to the form by writing \( z_1 \ldots z_k \vdash f(x_1, \ldots, x_n) \dashv y_1 \ldots y_m \) and making these automatic correspondences:

- \( x_j : e_j \) as it was, \( j = 1, \ldots, n \)
- \( z_j : z_j \) global parameters made explicit, \( j = 1, \ldots, k \)
- \( y_j : y_j^{\text{exp}} \) local names become unique to an expansion.

The naive expansion of the definition

```plaintext
macro z_1 ... z_k ⊢ f(x_1, ..., x_n) ⊣ y_1 ... y_m; body;
```

may take place as though all the parameters were stated explicitly. The global parameters \( z_1, \ldots, z_k \) are global in the dynamic sense: they are those variables in use at the call (as in APL). Any variables of the body which are not in the form are global in the static sense: those in use at the time of compilation of the definition (as in Algol 60). These globals will have somewhat more use in terms of protected definitions.

A macro definition of the new kind is called a protected definition. It has two forms, implicit and explicit (the former being more common), both defined as follows:

```plaintext
macro def STMT; BLOCK end CUES;;
def STMT; Xi_1 ... Xi_k ← BLOCK → Xf_1 ... Xf_m end CUES;
    (where Xi_1, ..., Xi_k are live before BLOCK
     and Xf_1, ..., Xf_m are marred by BLOCK)
end def STMT;

macro ← STMT; Xi_1 ... Xi_k ← BLOCK → Xf_1 ... Xf_m; ⊣ T_1 ... T_n;
macro → STMT; ⊣ T_1 ... T_n;;
(∀ Xi_j unless suppressed) Ti_j = Xi_j;
call or expand TBLOCK;
(∀ Xf_j unless suppressed) Xi_j = Ti_j;
end STMT;
```

(where BLOCK is similar to TBLOCK wherein the variables X_1, ..., X_n of STMT correspond to the distinct variables T_1, ..., T_n.
If both \( Ti_j = Xi_j \) and \( Xi_j = Ti_j \) are generated, then those assignments are suppressed which would cause subexpressions of \( Xi_j \) to be retrieved twice or assigned twice.)

Certain initializations and finalizations are suppressed whether or not they have a net effect on the program. These suppressions are necessary in order that no definition be expanded more often than is intuitively reasonable. Perhaps this definition can be worked out in a cleaner fashion so that no statements need be suppressed. At any rate, the action is: initialize some parameters, call the routine or expand the definition, and finalize some parameters. This works for definitions of \( = \) as well as for many other statement forms. I assume that the explicit definition initializes and finalizes in the orders given explicitly, but that the implicit definition carefully defaults these orders to preserve their order in STMT (e.g. \texttt{def A(J)=X; X J A ← ... → A;}).

The following definitions may clarify the use of the protection scheme. Then we will prove some theorems about it.

```plaintext
def R0 = <R1,R2>; R1 R2 ← call LISP.CONS; → R0;
def R1 = hd R0;   R0 ← call LISP.CAR;  → R1;
def R2 = tl R0;   R0 ← call LISP.CDR;  → R2;
```

which are necessarily primitive. And the storage definitions:

```plaintext
def <A,B> = C;  A = hd C;  B = tl C;;
def hd C = A;   C = <A, tl C>;;
def tl C = B;   C = <hd C, B>;;
```

are not. In the primitive case the variables are the names of fixed locations (like machine registers) and the system is expected to use them. Of course, such a variable can be made into a temporary by saving it in another temporary and restoring it later. Good optimization can make statements like \( <X,Y> = <Y,X> \) boil down to \( T=X;\ X=Y;\ Y=T; \) but the optimizer must know the trivial identities involving CONS, CDR, and CAR (following CONS, both CAR and CDR are superfluous).
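The protected expansion — copy trievable arguments into fresh temporaries, run the body on the temporaries only, then write the temporaries back into storable arguments — is essentially value-result parameter passing. A Python sketch, modeling each argument expression as a hypothetical (getter, setter) pair:

```python
def protected_call(body, args, initial, final):
    """Copy-in/copy-out expansion: `initial` names are trieved into temps,
    `body` runs on the temps alone, `final` names are stored back."""
    temps = {}
    for name, (get, _set) in args.items():
        if name in initial:                # T_j = x_j
            temps[name] = get()
    body(temps)                            # the body sees only T_1, ..., T_n
    for name, (_get, set_) in args.items():
        if name in final:                  # x_j = T_j
            set_(temps[name])

# A swap statement: both arguments are initialized and finalized,
# and the body exchanges the temporaries.
env = {"A": 1, "B": 2}

def var(n):
    return (lambda: env[n], lambda v: env.__setitem__(n, v))

def swap_body(t):
    t["A"], t["B"] = t["B"], t["A"]

protected_call(swap_body, {"A": var("A"), "B": var("B")},
               initial={"A", "B"}, final={"A", "B"})
assert env == {"A": 2, "B": 1}
```

Passing a (getter, setter) pair for an arbitrary selection expression — a diagonal, a head, a membership test — is what lets the same machinery assign through any field, not just a variable.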
Assignment itself can be defined:

```plaintext
def A ← B; A = B;;
def A → B; B = A;;
def A ↔ B; <A,B> = <B,A>;;
```

each of which does the intuitively correct action.

**Composability theorem:** A statement \( f(e_1, \ldots, e_n) \) involving subexpressions \( e_1, \ldots, e_n \) is expandable using the protected definition

```plaintext
def f(x_1, ..., x_n); INITIAL ← g(x_1, ..., x_n) → FINAL;
```

provided that:

1. \( \text{INITIAL} \subseteq \{ x_j \mid e_j \text{ is trievable} \} \)
2. \( \text{FINAL} \subseteq \{ x_j \mid e_j \text{ is storable} \} \)
3. \( g(T_1, \ldots, T_n) \) is expandable with \( T_1, \ldots, T_n \) being program variables.

**Proof:** the generated naive macro definition is:

```plaintext
macro f(x_1, ..., x_n); ⊣ T_1 ... T_n;
(∀ x_j ∈ INITIAL unless suppressed) T_j = x_j;
g(T_1, ..., T_n);
(∀ x_j ∈ FINAL unless suppressed) x_j = T_j;
end f;
```

If \( T_j = e_j \) is generated in the second line, then \( x_j \in \text{INITIAL} \), and \( e_j \) is trievable (by hypothesis 1), hence \( T_j = e_j \) is expandable. Likewise, if \( e_j = T_j \) is generated in the fourth line, then \( x_j \in \text{FINAL} \), and \( e_j \) is storable (by hypothesis 2), hence \( e_j = T_j \) is expandable. The third line is expandable by hypothesis, hence \( f(e_1, \ldots, e_n) \) is expandable according to the definition of the term.

In point of fact, the definition of assignment is ambiguous. Consider the two definitions and assignment:

```plaintext
def y = f(x_1, ..., x_n); call f;;
def g(z_1, ..., z_m) = w; call g;;
g(d_1, ..., d_m) = f(e_1, ..., e_n);
```

which we assume to be expandable. Which definition is expanded first?
Both orders are shown below:

```plaintext
/* g first */                        /* f first */
(∀i ...) z_i = d_i;                  (∀j ...) x_j = e_j;
/* w = f(e_1, ..., e_n) */           call f;
(∀j ...) x_j = e_j;                  /* g(d_1, ..., d_m) = y */
call f;                              (∀i ...) z_i = d_i;
w = y;                               w = y;
(∀j ...) e_j = x_j;                  call g;
call g;                              (∀i ...) d_i = z_i;
(∀i ...) d_i = z_i;                  (∀j ...) e_j = x_j;
```

assuming that \( y \) is not initialized by \( f \) and \( w \) is not finalized by \( g \). The expansions are almost alike: only initializations of left-hand variables and finalizations of right-hand variables are out of place. Stated as a theorem, this observation becomes:

Theorem: If no protected definition used in the expansion of a statement initializes left-hand variables nor finalizes right-hand variables, then the order of expanding them is immaterial. [Proof omitted.]

But the hypotheses of the theorem are rather commonly violated (cf. the definition of \( \text{hd } C = A \)). Which order is to be preferred? Initializations are frequently commutable because they rarely involve side effects. The finalizations might be made in left-to-right order (so \( f \) would be expanded first). If the choice depends only upon the sequential order of the definitions, then further analysis becomes cumbersome: what does \( <A,B> = (A+<3,4>) \) mean when \( + \) is defined by

```plaintext
def X = (Y+Z); X = Z; Y = Z;;
```

Will \( A \) be 3 or \( <3,4> \)? If \( g \) is expanded first, then initializations are made from left to right and finalizations from right to left, completing the evaluation of \( (A+<3,4>) \) before assigning \( A=3 \). This might prove to have more intuitive appeal.
If, within a definition, the finalizations are made from left to right, then \( <X,X,X> = <1,2,3> \) is equivalent to \( X=3 \). This achieves the intuitive rule of evaluating subexpressions completely before finalizing cognate expressions. Then \( J=J+(J*3)+J \) is equivalent to \( J=J+6 \). Furthermore, \( <\text{sign}(X), \text{abs}(X)> = <-1,12> \) is equivalent to \( X=-12 \) unless \( X=0 \). Many wonderful theorems about the preservation of properties when fields are independent lurk in dark corners waiting to be discovered. (1:30 A.M.)

In many cases the storage and trieval definitions of an expression are remarkably alike. Two abbreviations which exploit this symmetry are:

(1)

```plaintext
macro sym def FORM_1 = FORM_2; STMT_1;;
def FORM_1 = FORM_2; STMT_1; end trieval;
def FORM_2 = FORM_1; rev STMT_1; end storage; end sym def;
```

where

```plaintext
macro rev EXPR_1 = EXPR_2;; EXPR_2 = EXPR_1; end rev;
macro rev if COND then STMT_1 else STMT_2;;
    if COND then rev STMT_1 else rev STMT_2; end rev if;
macro rev (∀ COND) STMT_1;; (∀ COND) rev STMT_1; end rev ∀;
```

(2)

```plaintext
macro ⊢ defx FORM = EXPR; ⊣ VAR;
    sym def VAR = FORM; VAR = EXPR; end defx;
macro ⊢ defv FORM = EXPR; ⊣ VAR;
    def VAR = FORM; VAR = EXPR; end defv;
```

Now, conditional expressions are defined by:

```plaintext
macro VAR = (if COND then EXPR_1 else EXPR_2);
    sym def if COND then VAR = EXPR_1 else VAR = EXPR_2; end (if);
```

And these examples bear some interest:

```plaintext
sym def stk X = Y; X = <Y,X>;
sym def X nee Y = Z; <X,Y> = <Z,X>;
defx A max B = (if A < B then B else A);
defx parts = <RANK,RHO,DEL,ABASE,VBASE>;
defx tasks = pq JOBFILE;
defx A[J] = (if pair(J) then <A[hd J], A[tl J]> else A(J));
```

The sym def statement form is just an abbreviation, but the defx statement form is the key definition which permits expression macros. Of course, it depends heavily on the notion of assignment.

A register is a file if every value stored in it may be retrieved once. If all values stored in a file have been retrieved, the file is empty and it should not be retrieved until more values are entered. For each file type, there should be a field which defines emptiness of the file. For stk we might write:

```plaintext
defv stk X be empty = ¬ pair(X);
def stk X be empty = b; if b then X=0 else if ¬ pair(X) then error;;
```

Then stk X may be cleared by writing 'let stk X be empty'. If the file is not supposed to be empty at some point, then 'let ¬(stk X be empty)' generates an error if it is. This is equivalent to 'if stk X be empty then error', which verifies that the file is not empty. Remark: the occurrence of 'stk X' in 'stk X be empty' generates neither storage nor retrieval expansions for 'stk X'.

The census conditions on a file are easily described by defining a file operator (∨) which counts all values entered and retrieved.
The second definition is the well-formedness condition for a census (the first gives meaning to assertions):

```setl
def assert b; if ¬ b then error;;
def ∨ C ≥ 0 = (∀ y ∈ range(C) | y ≥ 0);;
def X = F ∨ C; X=F; C(X)=C(X)+1; assert ∨ C ≥ 0;;
def F ∨ C = X; F=X; C(X)=C(X)-1; assert ∨ C ≥ 0;;
def b = F ∨ C be empty; b = F be empty;
    assert b = (∀ y ∈ range(C) | y=0);;
def F ∨ C be empty = b; F be empty = b;
    if b then C = nl else assert (∃ y ∈ range(C) | y>0);;
```

With these definitions, if FILE is a file and CENSUS is a temp, then FILE ∨ CENSUS is a file which may be used in place of FILE and which incorporates the requirements of files. A quantity F is a file if and only if F ∨ C may replace every occurrence of F with no net effect on the program, C being a temp.

A priority queue is a file which always yields its smallest entry. The following definitions make pq A a priority queue.

```setl
def pq A be empty = b; if b then A=nl else if A=nl then error;;
defv pq A be empty = (A=nl);;
def pq A = X; ⊣ b,j,k; b=true; j=#A+1; A(j)=X;
    (while b ∧ (j>1) doing j=k) k = j/2;
    let (A(k) ≥ A(j)) nee ¬b;; end def;
def X = pq A; ⊣ b,j,k; A(#A) nee (A(1) nee X) = Ω; j=1; b=true;
    (while b ∧ j<#A/2 doing j=k) k=2*j;
    if k<#A then if A(k+1) ≤ A(k) then k=k+1;
    let (A(j) ≤ A(k)) nee ¬b;; end def;
```

The **nee** operator was defined earlier. It permits a register to be saved before it is assigned (X **nee** OLDX=NEWX;). A generalized deque can be built rather easily if the symmetric sum operator \( \oplus \) is defined on atoms (or any other associative and commutative operator for which \( A \oplus A = \Omega \) and \( A \oplus \Omega = A \); \( \Omega \) is most convenient but any particular value can be substituted; if \( A \neq B \) then the exact value \( A \oplus B \) is not important.)
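Read as a data structure, the pq store and trieval above are the sift-up and sift-down passes of a binary heap kept in A(1..#A). A Python sketch of that reading (1-based array, min-heap orientation so the smallest entry is yielded; names and comparisons are illustrative, not the paper's exact code):

```python
def pq_store(A, x):
    """Append x at position #A+1, then sift it up toward the root."""
    A.append(x)
    j = len(A) - 1
    while j > 1 and A[j // 2] > A[j]:
        A[j // 2], A[j] = A[j], A[j // 2]   # swap with parent, k = j/2
        j //= 2

def pq_trieve(A):
    """Remove and return A(1), refill the root from the last entry,
    then sift the root down along the smaller child."""
    if len(A) <= 1:
        raise IndexError("pq is empty")
    x = A[1]
    A[1] = A[-1]
    A.pop()
    j = 1
    while 2 * j < len(A):
        k = 2 * j
        if k + 1 < len(A) and A[k + 1] < A[k]:
            k += 1                           # take the smaller child
        if A[j] <= A[k]:
            break
        A[j], A[k] = A[k], A[j]
        j = k
    return x

A = [None]                                   # index 0 unused: 1-based heap
for v in [5, 1, 4, 2, 3]:
    pq_store(A, v)
assert [pq_trieve(A) for _ in range(5)] == [1, 2, 3, 4, 5]
```

The file laws hold here too: every value stored is retrieved exactly once, and trieving an empty pq is an error.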
Genuinely symmetric lists will be defined, and stack operations will be permitted on either end. **LINK** and **VAL** are two SETL functions.

```plaintext
def A to B; LINK(A) = LINK(A) ⊕ B; LINK(B) = LINK(B) ⊕ A; end merge/break;
defv dq A be empty = (LINK(A)=Ω);
def dq A be empty = b; if b then A=newat else if LINK(A)=Ω then error;;
def X = dq A; ⊣ T; X=VAL(A); A nee T=LINK(A); A to T; end pop;
def dq A = X; ⊣ T; A nee T=newat; VAL(A)=X; A to T; end push;
```

Then a function walker can look like:

```plaintext
let (dq OUT be empty) ∧ (dq NEXT be empty);
dq NEXT=TOP;
(while ¬ dq NEXT be empty) begin
    TEMP=dq NEXT; dq OUT=TEMP;
    if atom(TEMP) then continue while;
    let dq GEN be empty; LEFT = GEN;
    (∀ <x,y> ∈ TEMP) dq GEN = y;
    GEN to NEXT; TEMP=dq LEFT; NEXT=LEFT;
end while;
```

The deque in GEN was built up and flipped around with no effort.

Several surprises have turned up. It was thought that only a few expression types could reasonably be defined as fields. This is not the case. Storage definitions have been found which make fields out of many constructs in SETL and APL. The first and foremost is a boolean operation, membership:

```plaintext
def x ∈ S = b; if b then S = S with x
               else S = S less x;
```

Assigning a truth value to a predicate in this case causes the predicate to assume that truth value. (A bit is any predicate which is a field to which true and false both conform.) We can write 'let ¬(3 ∈ A)' by declaring:

```plaintext
def let b; b=true;;
def ¬b = z; b = ¬z;;
```

In principle, A can be viewed as a bit vector and the statement becomes \( A_3 = \text{false} \). In practice, however, such viewpoints are ignored. A statement like 'let PRED' means "make PRED become true -- I care not how".
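The membership field can be sketched directly on Python sets, where `with` and `less` become `add` and `discard`:

```python
def store_membership(S, x, b):
    """Storage definition of (x ∈ S) = b: make the predicate assume b."""
    if b:
        S.add(x)          # S = S with x
    else:
        S.discard(x)      # S = S less x

A = {1, 2, 3}
store_membership(A, 3, False)   # let ¬(3 ∈ A)
store_membership(A, 7, True)    # let 7 ∈ A
assert A == {1, 2, 7}
```

Trieving the same expression afterward (`x in A`) returns exactly the value stored, which is what makes the predicate a bit in the sense just defined.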
An alternative definition would have changed \( x \) instead:

```plaintext
def x ∈ S = b; if b then x = min(S)
               else x = max(S)+1;
```

which is certainly a field when \( S \) is a set of integers. The more general definition is usually to be preferred. Yet another definition makes \( x \in S \) a field (because \( x = \uparrow S \) is superfluous after \( S = \uparrow S \)):

```plaintext
def x ∈ S = b; if b then x = ↑S else x = newat;
```

where \( \uparrow S \) is a random element of \( S \) and newat is always a value distinct from all values previously generated. Whenever the value of a field must be changed, the storage operation may make random changes. The definition of flip aids in describing this phenomenon:

```plaintext
def b = flip; b = even(SEED); SEED = MODULUS | RC+RA*SEED;;
```

which is a random condition. Assume that \( T = \text{flip} \) is superfluous whenever T is dead (i.e. SEED is not a variable of contention in the intent of the program). The definition of either permits a random choice between two variables:

```plaintext
defx (either A or B) = (if flip then A else B);
```

Then the logical connectives become:

```plaintext
def a ∧ b = c; if c then <a,b> = <true,true>
              else if a ∧ b then (either a or b) = false;;
defx a ∨ b = ¬((¬a) ∧ (¬b));
defx a ⊃ b = (¬a) ∨ b;
defx a ≠ b = (a ∧ ¬b) ∨ (b ∧ ¬a);
```

(proof, anyone?)
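A sketch of the random devices: `flip` as a parity test driven by a linear congruential SEED (the constants here are illustrative, not the paper's), `either A or B` as a flip-guarded choice of variable, and the storage definition of the conjunction built on them:

```python
SEED, RA, RC, MODULUS = 1, 1103515245, 12345, 2**31

def flip():
    """b = even(SEED); SEED = MODULUS | RC + RA*SEED  -- a random condition."""
    global SEED
    b = SEED % 2 == 0
    SEED = (RC + RA * SEED) % MODULUS
    return b

def either(a, b):                          # (either a or b)
    return a if flip() else b

env = {"a": True, "b": True}

def store_and(a, b, c):                    # storage definition of (a & b) = c
    if c:
        env[a] = env[b] = True             # <a,b> = <true,true>
    elif env[a] and env[b]:
        env[either(a, b)] = False          # (either a or b) = false

store_and("a", "b", False)
assert not (env["a"] and env["b"])         # the conjunction is now false
store_and("a", "b", True)
assert env["a"] and env["b"]               # and now true again
```

Whichever way the coin lands, the stored truth value is subsequently trieved back, so the connective behaves as a field even though the storage operation makes random changes.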
The set theoretic operations can be defined as fields: \[ \text{def } A \cap B = Z; \ (\forall x \in A \mathbin{\&} x \in B) \ x \in B = x \in Z; \] \[ \text{def } A \cup B = Z; \ (\forall x \in A \mathbin{\&} x \in B) \ x \in B = x \in Z; \] \[ \text{def } A - B = Z; \ (\forall x \in A \mathbin{\&} x \in B) \ x \in B = \neg x \in Z; \] \[ \text{def } A \subseteq B = Z; \ (\forall x \in A \mathbin{\&} x \in B) \ x \in A \mathbin{\&} x \in B = x \in Z; \] \[ \text{def } A \oplus B = Z; \ (\forall x \in A \mathbin{\&} x \in B) \ x \in A \mathbin{\&} x \in B \neq x \in Z; \ \text{end sym diff;} \] And they might be used in: \[ \text{let } (x \in A \cap B) \mathbin{\&} \neg((y \in A \cup C) \lor b); \quad \text{(deterministic)} \] \[ \text{let } (x \in A \cap B) \supset ((y \in A \cup C) \lor b); \quad \text{(random changes)} \] The storage definition of quantified expressions leads to some intriguing results. Let \( \forall\text{COND} \) be any phrase like \( \forall x \in S \) or \( 1 \leq \forall j < \#A \); and let \( \exists\text{COND} \) correspondingly be like \( \exists x \in S \) or \( 1 \leq \exists j < \#A \). The storage definition for universal quantification may then be: \[ \text{macro } (\forall\text{COND} \mid \text{PRED}) = b;; \] \[ \quad \text{if } b \text{ then (while } \exists\text{COND} \mid \neg\text{PRED}) \text{ let PRED} \] \[ \quad \text{else error; end } \forall; \] \[ \text{macro } (\exists\text{COND} \mid \text{PRED}) = b;; \ \neg(\forall\text{COND} \mid \neg\text{PRED}) = b;; \] Naively, a violation of PRED is found and corrected; then the search starts over.
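The universal-quantifier storage macro amounts to a repair loop. A hedged Python sketch (the names `let_forall` and `repair` are mine; `repair` plays the role of "let PRED" on a counterexample): while some element violates PRED, store true into the predicate for it, then search again.

```python
def let_forall(domain, pred, repair):
    """'(forall COND | PRED) = true': while a counterexample exists,
    correct it, then restart the search -- the naive macro expansion."""
    while True:
        violations = [x for x in domain() if not pred(x)]
        if not violations:
            return
        repair(violations[0])

# Example: make every element of S even by bumping each odd element by one.
S = {1, 2, 3, 8}
let_forall(lambda: sorted(S),
           lambda x: x % 2 == 0,
           lambda x: (S.discard(x), S.add(x + 1)))
```

After the call, `S == {2, 4, 8}`; the loop terminates because each repair removes one violation without creating new ones here, though in general termination depends on the repair chosen.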
The transitive closure of a set S under a function f can be defined: \[ \text{macro } C = f \text{ closure } S;; \ C = S; \ \text{let } (\forall x \in C \mid f(x) \in C); \ \text{end closure;} \] A sequence A can be sorted by simply demanding: \[ \text{let } (1 \leq \forall j < \#A \mid A(j) \leq A(j+1)); \] if the earlier definition of \( \leq \) is accepted: \[ \text{def } X \leq Y = b; \ \text{if } b \text{ then } X \leq Y; \ \text{end } \leq; \] Setting this "sorted" bit generates the bubble sort, but a clever enough optimizer would convert that to a radix sort. The tree sort (or heap sort) can be given in a few lines: \[ (1 < \forall m \leq n) \ \text{let } (1 < \forall j \leq m \mid A(j) < A(j \div 2)); \] \[ (n > \forall m > 1) \ \text{let } A(1) < A(m) \ \mathbin{\&} \ (1 < \forall j \leq m-2 \mid (A(2j) \lceil A(2j+1)) < A(j)); \] but many superfluous tests are made. Maximum and minimum are defined by: \[ \text{def } X \lceil Y = (\text{if } X < Y \text{ then } Y \text{ else } X); \] \[ \text{def } X \lfloor Y = (\text{if } X < Y \text{ then } X \text{ else } Y); \] Remark: \( X \lceil Y = Z \) has transfer function \( Z \lceil (X \lfloor Y) \). Various arithmetic expressions can be fields.
They are tabulated: <table> <thead> <tr> <th>definition</th> <th>transfer function (on reals)</th> </tr> </thead> <tbody> <tr> <td>def √R = Z; R = Z↑2;</td> <td>abs(Z)</td> </tr> <tr> <td>def sign(R) = Z; R = sign(Z) × abs(R);</td> <td>if R = 0 then 0 else sign(Z)</td> </tr> <tr> <td>def abs(R) = Z; R = sign(R) × abs(Z);</td> <td>if R = 0 then 0 else abs(Z)</td> </tr> <tr> <td>def floor(R) = Z; R = floor(Z) + fract(R);</td> <td>floor(Z)</td> </tr> <tr> <td>def fract(R) = Z; R = floor(R) + fract(Z);</td> <td>fract(Z)</td> </tr> <tr> <td>def M mod N = Z; M = M − (M mod N) + (Z mod N);</td> <td>Z mod N</td> </tr> <tr> <td>def M mod N = Z; M = M − (M mod N) + Z;</td> <td>Z mod N, but M ÷ N may change</td> </tr> <tr> <td>def even(M) = b; M mod 2 = (if b then 0 else 1);</td> <td>b restricted to true, false</td> </tr> <tr> <td>grade up and grade down (⍋ and ⍒ of APL)</td> <td></td> </tr> <tr> <td>encode and decode (⊤ and ⊥ of APL)</td> <td></td> </tr> <tr> <td>def v(i,j) = j + (i×(i+1) ÷ 2);</td> <td>i, j restricted to nonnegative integers</td> </tr> </tbody> </table>

Dualities

Some operations which are complementary can be defined as fields.
The similarity-substitution package is a good example: \[ \text{def } D = \text{P1} \sim \text{P2}; \ \text{if P1 and P2 are similar then } D = \text{their correspondence} \\ \quad \text{else } D = \text{false}; \ \text{end } \sim; \\ \text{def } \text{P1} \sim \text{P2} = D; \ \text{if } D \neq \text{false then P1} = \text{application of } D \text{ to P2} \\ \quad \text{else } \text{P2} = \Omega; \ \text{end } \sim; \\ \] Given the meanings of similarity, correspondence, and substitution defined on page 3, then \( \text{P1} \sim \text{P2} \) is a field. If \( \text{P1} \Rightarrow \text{P2} \) is a rule in some transformation (like the macro processor), and a third pattern \( F \) is similar to \( \text{P1} \), then \( F \sim \text{P2} = F \sim \text{P1} \) will cause \( F \) to assume its transformed value. Try, for example: \( F = (x(y-q)+xq)-xz \), \( \text{P1} = (a-b)+b \), and \( \text{P2} = a \). After \( F \sim \text{P2} = F \sim \text{P1} \), then \( F = x(y-z) \). Other complementary operations which demand scrutiny are:

1. parse-print, really just another similarity-substitution scheme;
2. request-return, for various allocation schemes;
3. suspend-resume, the primitives of control;
4. swap in-swap out (page in-page out), for use in operating systems;
5. input-output, especially using coroutine control;
6. yin-yang, consider all opposing actions.

Flaws with this approach (to give fair warning) include the limitations on macros (no decisions during expansion) and the copying of too many marred data spaces (explicitly if not implicitly). One may want to test whether a parameter is storable before initializing it, and one should not have to state explicitly what happens to fields which are not to be changed (see hd, tl, floor, fract, sign, ... and try defx last X → (if pair(X) then tl X else X), or try defining ⍉ in APL so (1 1 ⍉ M) = E works).
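A hedged Python sketch of similarity and substitution on terms written as nested tuples (the representation and the names `match`/`substitute` are mine, not the newsletter's): `match` computes a correspondence from pattern variables to subterms, and `substitute` applies that correspondence.

```python
def match(pattern, term, subst=None):
    """Similarity test: return a correspondence (dict) from pattern
    variables (lowercase strings) to subterms, or None if dissimilar."""
    if subst is None:
        subst = {}
    if isinstance(pattern, str) and pattern.islower():
        if pattern in subst:                      # repeated variable must agree
            return subst if subst[pattern] == term else None
        subst[pattern] = term
        return subst
    if isinstance(pattern, tuple) and isinstance(term, tuple) \
            and len(pattern) == len(term) and pattern[0] == term[0]:
        for p, t in zip(pattern[1:], term[1:]):
            if match(p, t, subst) is None:
                return None
        return subst
    return subst if pattern == term else None

def substitute(pattern, subst):
    """Application of a correspondence: simultaneous substitution."""
    if isinstance(pattern, str) and pattern in subst:
        return subst[pattern]
    if isinstance(pattern, tuple):
        return (pattern[0],) + tuple(substitute(p, subst) for p in pattern[1:])
    return pattern

# Rewrite rule P1 => P2:  (a - b) + b  =>  a, applied to (x - y) + y
P1 = ("+", ("-", "a", "b"), "b")
P2 = "a"
F  = ("+", ("-", "x", "y"), "y")
m = match(P1, F)
result = substitute(P2, m)
```

Here the correspondence is `{a: x, b: y}` and `result` is `"x"`, a purely syntactic version of the field `F ~ P2 = F ~ P1`; the newsletter's algebraic example additionally relies on arithmetic identities that this sketch does not attempt.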
Final disclaimer: I make no claim that any of the SETL-like statements are legal SETL statements. I have assumed that the reader is familiar with SETL, APL, Algol, PL/1, LISP and Algebra.

GLOSSARY

assertion - a property of a program which is in question
assumption - properties which the data for a program is assumed to have
bit - any predicate which is a field. True and false conform to it.
block - a sequence of statements each followed by a semicolon
commute - two adjacent valid statements commute if reversing their order has no net effect
conformable - a trievable expression E is conformable to a storable expression S whenever S=E is a valid statement
conforms - a value which may be stored into a register and retrieved intact conforms to it
constant - an expression which represents a particular value
constant space - the data space of all constant functions; it is a subspace of any other data space
contains - every function contains its subfunctions
correspondence - a mapping from parameters to phrases
data base - that (smallest) data space of which each data space of a defined function is a subspace
data space - the equivalence class of a function under isomorphism
dead - a data space which is not live
defined function - a function for which value(function) is a field, where value() is defined by: macro X=value(Y);; X=Y;; macro value(X)=Y;;;
equivalent - two statements are equivalent if replacing a valid occurrence of one by the other has no net effect
expandable - a statement is expandable if either it is primitive or every statement generated in processing it is expandable
expression - any phrase at a level lower than statements
extent of a program - assertions posed but not intended
field - a retrievable register which is restorable
defile - a register which can be assigned a series of conformable values and later spew them out (subject to a transfer function)
finalization - assigning parameters their computed values after a definition
function - a mapping in
terms of program variables
generated statement - each statement produced by the expansion of a macro definition
independent - data spaces $D_1, \ldots, D_n$ are mutually independent $(D_1 \mid \ldots \mid D_n)$ if $D_i$ overlaps $D_j$ implies $i = j$
initialization - evaluation of parameters on entry to a protected definition
intent of a program - some arbitrary collection of assertions about the program and assumptions about the data
isomorphic functions - functions which are subfunctions of each other
live - a data space is live if marring it would have a net effect
macro - any scheme which permits the definition of abbreviated statement forms; the naive (or holy) macro scheme in particular
marred - not safe
mashed by a block - a data space for which every subspace is either marred by the block or dead on entry to it
naive macro definition - a macro scheme which depends only on similarity and substitution with very few bells and whistles
net effect - a property of modifications to a valid program. The modified version is valid if and only if the modification has no net effect.
overlap - two data spaces overlap if some common subspace is live
parameter - a quantified name in a naive macro definition which is used as a substitution point
pentachotomy law - For any two data spaces A and B, exactly one of the following relations properly holds:
1. $A = B$, the same data space
2. $A \subseteq B$, A is a subspace of B, properly if $A \neq B$
3. $A \supseteq B$, A contains subspace B, properly if $A \neq B$
4. A and B are independent, properly if neither $A \subseteq B$ nor $B \subseteq A$
5. A and B overlap, properly if neither $A \subseteq B$ nor $B \subseteq A$
phrase - any syntactically well formed sequence of names and symbols in a program
program variables - a countable set of names each of which has an associated value at any particular time.
The primitive statements VAR=CONSTANT and VAR=VARIABLE are assumed to replace this value with another.
predicate - a boolean function
protected definition - a macro scheme in which the parameters are treated as program variables which may be initialized before entering the definition, and finalized afterward
reached - point L2 can be reached from point L1 if (true at L1 ⇒ false at L2) is not valid
register - an expression which is both retrievable and storable. It may have strings attached.
restorable expression - a register Q for which Q=X is superfluous following a valid statement X=Q
restricted - a block is restricted to a data space if every independent data space is safe over that block
retrievable expression - a register Q for which a superfluous trieval Y=Q may follow Q=X, in which case there must be some function such that (Y=function after Q=X; Y=Q;) is a valid assertion
safe - a function is safe between two points L1 and L2 whenever (∀t)((function = t at L1) ⇒ (function = t at L2)) is a valid assertion
side effects - if the trieval of an expression cannot be superfluous, then the expression has side effects; i.e. value(expression) is not a field
similar - two phrases are similar if some correspondence can be applied to both to make them equal
simultaneous substitution (application of a correspondence) - the scheme of replacing each occurrence of a parameter in a phrase with its corresponding phrase
statement - a primitive form with an a priori definition, or all but the final semicolon of a block which is similar to the first block form of some macro definition. Three examples:
1) X=3
2) macro rev (∀COND) STMT;; (∀COND) rev STMT; end rev
3) rev (∀x ∈ Dom A W(x)) | A(x) = B(x)
storable expression - an expression E for which the statement E=X is expandable (X a program variable)
subfield - the subfunction relation applied to fields. All fields are functions.
subspace - the subfunction relation extended to data spaces
subfunction of a function - any function which is safe whenever the given function is safe
superfluous - a valid statement is superfluous if removing it has no net effect. A statement is superfluous at a given point in a program if inserting it at that point has no net effect.
temp - a program variable (or field) which is dead before and dead after a given block
transfer function of a retrievable expression - that function determined by the meaning of 'retrievable'
trievable expression - an expression E for which the statement X=E is expandable
vacuous - a field is vacuous whenever a store into it is superfluous. E.g. value(field) is always vacuous.
valid assertion - an assertion which can be derived from assumptions in the intent of a program
" data - data which complies with the intended assumptions
" program - a program for which all intended assumptions are valid
" statement - an occurrence of a statement in a valid program
value - any member of the domain of indestructible manipulable objects of a program; the 'value' operator is defined by: macro value(E) = X;; end no-op; macro X = value(E); X=E;; end identity;

Approximate syntax (simple repetition denoted by (...)*)
\[
\begin{align*}
\text{BLOCK} & ::= (\text{STATEMENT};)^* \\
\text{STATEMENT} & ::= \text{macro BFORM; BLOCK ENDING} \\
& \quad \mid \text{EXPR=EXPR} \\
\text{BFORM} & ::= \text{BLOCK} \\
& \quad \mid (\text{NAME})^* \mid \text{BFORM} \mid (\text{NAME})^* \\
\text{ENDING} & ::= \\
& \quad \mid \text{end (SYMBOL)}^* \\
\text{STATEMENT} & ::= \text{def SFORM; BFORM ENDING} \\
& \quad \mid \text{sym def VAR=EXPR; EXPR=EXPR ENDING} \\
& \quad \mid \text{defx EXPR} \rightarrow \text{EXPR} \\
\text{SFORM} & ::= \text{STATEMENT} \\
& \quad \mid (\text{NAME})^* \mid \text{SFORM} \mid (\text{NAME})^* \\
\end{align*}
\]
and so on. The alternatives for STATEMENT would best be generated directly from the macro definitions.

References

[Dodgson 97] Charles Lutwidge Dodgson.
[Naur et al. 60] Peter Naur (editor), "Report on the Algorithmic Language ALGOL 60".
[Irons 70] Edgar T. Irons, "Experience with an Extensible Language".

The relationships among some of the properties discussed are shown in the following Venn diagram: * indicates subclasses for which my contrived examples appear to be contrived (3=X, meaning output X to device 3; and any field after its trivial definition has been deleted). An amusing account of Venn's Method of Diagrams may be found in [Dodgson 97: 174-176] with some historical perspective.

Oriental animal cycle of years, adapted from I Ching, the Book of Changes. Yang and yin symbol at center represents duality in much of Chinese tradition and philosophy.
Answer-Set Programming with Bounded Treewidth Michael Jakl, Reinhard Pichler and Stefan Woltran Institute of Information Systems, Vienna University of Technology Favoritenstrasse 9–11; A-1040 Wien; Austria {jakl, pichler, woltran}@dbai.tuwien.ac.at Abstract In this paper, we present a novel approach to the evaluation of propositional answer-set programs. In particular, for programs with bounded treewidth, our algorithm is capable of (i) computing the number of answer sets in linear time and (ii) enumerating all answer sets with linear delay. Our algorithm relies on dynamic programming. Therefore, our approach significantly differs from standard ASP systems which implement techniques stemming from SAT or CSP, and thus usually do not exploit fixed parameter properties of the programs. We provide first experimental results which underline that, for programs with low treewidth, even a prototypical implementation is competitive compared to state-of-the-art systems. 1 Introduction Over the past decade, Answer-Set Programming (ASP, for short) [Marek and Truszczyński, 1999; Niemelä, 1999], also known as A-Prolog [Baral, 2002], has become an increasingly acknowledged paradigm for declarative programming. The basic idea of ASP is to encode solutions to a problem into the models of a program in such a way that the solutions are described in terms of rules and constraints. ASP enjoys a large collection of successful applications in the areas of AI and KR showing the potential of this paradigm. However, the underlying complexity of evaluating propositional disjunctive programs (which are the objects we deal with here) shows that the problems ASP has to deal with are highly intractable: the decision problems are located on the second level of the polynomial hierarchy (see [Eiter and Gottlob, 1995]), and the problem of counting all answer sets can analogously be shown to be #NP-complete. 
An interesting approach to dealing with such intractable problems is parameterized complexity. In fact, hard problems can become tractable if some parameter of the problem is bounded by a fixed constant. Such problems are also called fixed-parameter tractable (FPT). One important parameter is treewidth, which measures the “tree-likeness” of a graph. By using a seminal result due to Courcelle [1990], several problems in the area of AI and KR have recently been shown to be fixed-parameter tractable by Gottlob et al. [2006], among them also the problem of deciding ASP consistency (i.e. whether a disjunctive logic program has at least one answer set). Treewidth hereby has to be adapted suitably, for instance, by using the incidence graph of the program. However, an FPT result itself does not immediately lead to an efficient algorithm. Indeed, quite some work has been done within the last years to overcome this obstacle. We mention here only some recent results for counting problems: Samer and Szeider [2007] have proposed algorithms for #SAT (counting the number of models of a CNF formula) which follow the principle of dynamic programming; Jakl et al. [2008], on the other hand, map different counting problems to a certain (tractable) datalog fragment. Both approaches have in common that they use the concept of tree-decompositions and proceed by a bottom-up traversal of the tree, such that at each node $n$, certain information about the subproblem (represented by the subtree rooted at $n$) is available. Consequently, results for the entire problem can be read off the root of the tree decomposition. Algorithms for counting problems are of particular interest here, since they are closely related to the problem of enumerating solutions, which is, of course, a central requirement in ASP. In this work, we generalize the dynamic programming approach for #SAT due to Samer and Szeider [2007] to the world of ASP in order to count and enumerate all answer sets of a given program.
We thus provide a novel approach for computing answer sets, which significantly differs from standard ASP systems (see [Gebser et al., 2007] for an overview) that usually do not exploit fixed parameter properties. We implemented the proposed method in a prototype system. Our system should not be seen as a competitor to general-purpose ASP solvers, but as an alternative for application scenarios where the problems possess low treewidth; usually the ASP encodings then have similarly low treewidth. Thorup [1998], for instance, shows that the treewidth of the control-flow graph of structured programs (more precisely, goto-free C programs) is at most six. A detailed discussion of applications in which treewidth has been successfully used is given by Bodlaender [1993].

Results. Our main contributions are as follows.

• An FPT algorithm for deciding ASP consistency in linear time w.r.t. the size of the program.
• An FPT algorithm for counting the number of answer sets in linear time w.r.t. the size of the program (assuming unit cost for arithmetic operations).
• A novel method for enumerating all answer sets with linear delay.
• Presentation of a first prototype implementation and some preliminary experimental results.

2 Preliminaries

Throughout the paper, we assume a universe $U$ of propositional atoms. A literal is either an atom $a$ or a negated atom $\bar{a}$. For a set $A$ of atoms, $\overline{A}$ denotes $\{\bar{a} \mid a \in A\}$. Clauses are sets of literals. An interpretation $I$ is a set of atoms and we define, for a clause $c$ and $O \subseteq U$, $I \models_O c$ iff $((I \cap O) \cup \overline{O \setminus I}) \cap c \neq \emptyset$. For a set $C$ of clauses, $I \models_O C$ holds iff $I \models_O c$, for each $c \in C$. For $O = U$, we usually write $\models$ instead of $\models_O$. Answer-Set Semantics for Logic Programs.
A propositional disjunctive logic program (or simply, a program) is a set of rules $a_1 \lor \cdots \lor a_l \leftarrow a_{l+1}, \ldots, a_m, \neg a_{m+1}, \ldots, \neg a_n$, $(n \geq m \geq l \geq 1)$, where all $a_i$ are from $U$. A rule $r$ of this form consists of a head $H(r) = \{a_1, \ldots, a_l\}$ and a body, given by $B^+(r) = \{a_{l+1}, \ldots, a_m\}$ and $B^-(r) = \{a_{m+1}, \ldots, a_n\}$. By $At(R)$ we denote the set of atoms occurring in program $R$. We often identify a program $R$ with the clause set $\{H(r) \cup \overline{B^+(r)} \cup B^-(r) \mid r \in R\}$, and likewise, define the reduct $R^I$ of a program $R$ wrt. an interpretation $I$ as $\{H(r) \cup \overline{B^+(r)} \mid r \in R,\ B^-(r) \cap I = \emptyset\}$. Following Gelfond and Lifschitz [1991], an interpretation $I$ is an answer set of a program $R$ iff $I \models R$ and for no $J \subset I$, $J \models R^I$. The set of all answer sets of a program $R$ is denoted by $\text{AS}(R)$. Example 2.1 We will use as a running example throughout the paper the program $P$ which consists of the following rules: \[ \begin{align*} r_1 &= u \leftarrow v, y; & r_2 &= z \leftarrow w; & r_3 &= v \leftarrow w; \\ r_4 &= w \leftarrow x; & r_5 &= x \leftarrow \neg y, \neg z. \end{align*} \] $P$ has a unique answer set $\{v, w, x\}$. Tree Decomposition and Treewidth. A tree decomposition of a graph $G = (V, E)$ is a pair $(T, \beta)$, where $T$ is a tree and $\beta$ maps each node $n$ of $T$ (we use $n \in T$ as a shorthand below) to a bag $\beta(n) \subseteq V$, such that the following conditions are met: - For each $v \in V$, there is an $n \in T$ such that $v \in \beta(n)$. - For each $(v, w) \in E$, there is an $n \in T$, s.t. $v, w \in \beta(n)$. - For any three nodes $n_1, n_2, n_3 \in T$, if $n_2$ lies on the path from $n_1$ to $n_3$, then $\beta(n_1) \cap \beta(n_3) \subseteq \beta(n_2)$.
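The three conditions above can be checked mechanically. A Python sketch (illustrative, not from the paper) verifies a candidate tree decomposition and computes its width; bags are given as a dict from tree node to vertex set.

```python
def is_tree_decomposition(vertices, edges, tree_edges, bags):
    """Verify the three tree-decomposition conditions for a candidate
    decomposition of graph (vertices, edges); tree_edges are edges of T."""
    nodes = list(bags)
    # 1. every graph vertex occurs in some bag
    if not all(any(v in bags[n] for n in nodes) for v in vertices):
        return False
    # 2. every graph edge is covered by some bag
    if not all(any({u, w} <= bags[n] for n in nodes) for (u, w) in edges):
        return False
    # 3. for each vertex, the tree nodes whose bags contain it form a
    #    connected subtree (equivalent to the path condition above)
    for v in vertices:
        holders = {n for n in nodes if v in bags[n]}
        if not holders:
            continue
        seen, stack = set(), [next(iter(holders))]
        while stack:
            n = stack.pop()
            if n in seen:
                continue
            seen.add(n)
            stack += [m for (a, b) in tree_edges for m in (a, b)
                      if n in (a, b) and m != n and m in holders]
        if seen != holders:
            return False
    return True

def width(bags):
    # width = cardinality of the largest bag minus one
    return max(len(b) for b in bags.values()) - 1

# Path graph a-b-c: two bags {a,b} and {b,c} give a decomposition of width 1.
bags = {1: {"a", "b"}, 2: {"b", "c"}}
ok = is_tree_decomposition({"a", "b", "c"},
                           [("a", "b"), ("b", "c")], [(1, 2)], bags)
```

For the path graph, `ok` is `True` and the width is 1, matching the fact that trees have treewidth 1.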
A tree decomposition $(T, \beta)$ is called normalized (or nice) [Kloks, 1994] if (i) each node in $T$ has at most two children; (ii) for each node $n$ with two children $n_1, n_2$, $\beta(n) = \beta(n_1) = \beta(n_2)$; and (iii) for each node $n$ with one child $n'$, $\beta(n)$ and $\beta(n')$ differ in exactly one element. The width of a tree decomposition is defined as the cardinality of its largest bag $\beta(n)$ minus one. It is known that every tree decomposition can be normalized in linear time without increasing the width. The treewidth of graph $G$, denoted as $tw(G)$, is the minimum width over all tree decompositions of $G$. For arbitrary but fixed $w \geq 1$, it is feasible in linear time to decide if a graph has treewidth $\leq w$ and, if so, to compute a tree decomposition of width $w$ [Bodlaender, 1996]. Tree Decompositions of Programs. To build tree decompositions for programs, we shall use incidence graphs (see [Samer and Szeider, 2007] for other possible types of graphs and a discussion why incidence graphs are favorable). Given a program $R$, such a graph has as vertices $R \cup \text{At}(R)$, and as edges all pairs $(a, r)$ with an atom $a$ appearing in a rule $r$ of $R$. In case of normalized tree decompositions, we distinguish between six types of nodes: atom introduction (AI), rule introduction (RI), atom removal (AR), rule removal (RR), branch (B), and leaf (L) nodes. The first four types are usually augmented with the element $e$ (either an atom or rule) which is removed or added compared to the bag of the child node. Example 2.2 Figure 1 shows the incidence graph $G_P$ of example program $P$ (left) and a normalized tree decomposition $T$ of $G_P$.

Figure 1: Incidence graph $G_P$ of example program $P$ (left) and a normalized tree decomposition $T$ of $G_P$ (right).

We now show how to use this method to count and enumerate the answer sets of program $R$ from a given tree decomposition $T$ for $R$.
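The incidence graph of Example 2.2 can be constructed mechanically from the rules. A small Python sketch (the atom sets are transcribed from Example 2.1 as printed above):

```python
# Incidence graph of the running example: one vertex per rule and per atom,
# and an edge (a, r) whenever atom a occurs in rule r.
rules = {
    "r1": {"u", "v", "y"},   # u <- v, y
    "r2": {"z", "w"},        # z <- w
    "r3": {"v", "w"},        # v <- w
    "r4": {"w", "x"},        # w <- x
    "r5": {"x", "y", "z"},   # x <- not y, not z
}
atoms = set().union(*rules.values())
vertices = set(rules) | atoms                              # R ∪ At(R)
edges = {(a, r) for r, occ in rules.items() for a in occ}  # atom-rule incidences
```

This yields 11 vertices (5 rules plus 6 atoms) and 12 incidence edges, the bipartite graph on which the tree decomposition is computed.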
### 3.1 Tree interpretations

**Definition 3.1** A tree interpretation for $T$ ($T$-interpretation, for short) is a triple $(n, M, C)$ where $n \in T$ is a node, $M \subseteq \beta(n)$ is called assignment, and $C \subseteq 2^{\beta(n)}$ is called certificate.

The basic intuition behind $T$-interpretations is as follows: the assignment $M$ of a $T$-interpretation $(n, M, C)$ contains an interpretation $A_M$ over the atoms in $\beta(n)$ (implicitly, it refers to interpretations $I$ over $A(n)$, the atoms occurring in bags of the subtree $T_n$) together with rules $r \in R_{\beta(n)}$ satisfied by $I$, i.e., $I \models_{A(n)} r$, where $\models_A$ denotes satisfaction relative to the atom set $A$. Certificate $C$ can be understood as a set of assignments and carries interpretations (together with the rules of the reduct they satisfy) which are in a certain subset-relation to $M$. The following definitions make this more precise.

**Definition 3.2** Given $n \in T$ and $I, J \subseteq A(n)$, define $\text{SAT}_n(I) = \{ r \in R(n) \cup R_{\beta(n)} \mid I \models_{A(n)} r \}$ and $\text{RSAT}_n(J, I) = \{ r \in R(n) \cup R_{\beta(n)} \mid J \models_{A(n)} r \text{ or } B^-(r) \cap I \neq \emptyset \}$.

Roughly speaking, $\text{SAT}_n(I)$ yields those rules of $R$ which occur in bags of the subtree $T_n$ and are satisfied by $I$. Analogously, $\text{RSAT}_n(J, I)$ yields such rules which are either satisfied by $J$ or not contained in the reduct $R^I$ (thus we can view them as satisfied by $J$ in a trivial way).

**Definition 3.3** Let $(n, M, C)$ be a $T$-interpretation, $I \subseteq A(n)$, and let $R^* = R_M \cup R(n)$. We define
$$e_n(M) = \{ A_M \cup K \mid K \subseteq A(n) \setminus \beta(n),\ \text{SAT}_n(A_M \cup K) = R^* \};$$
$$r_n(M, I) = \{ A_M \cup K \mid K \subseteq A(n) \setminus \beta(n),\ \text{RSAT}_n(A_M \cup K, I) = R^* \}.$$
Moreover, $C$ is called valid wrt. $I$ in $\theta = (n, M, C)$ if, for each $N \subseteq \beta(n)$, it holds that $N \in C$ iff there exists $J \in r_n(N, I)$ s.t. $J \subset I$.
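For intuition, here is a small sketch of the two satisfaction notions (Python; our own re-encoding of the rules with names, using plain, non-relativized satisfaction over the full atom set, so the $\models_{A(n)}$ subtleties of the actual definition are ignored): SAT collects rules classically satisfied by $I$, while RSAT additionally counts rules that drop out of the reduct $R^I$ as trivially satisfied.

```python
# Rules of the running example P as (head, positive body, negative body).
P = {
    "r1": ({"u"}, {"v", "y"}, set()),
    "r2": ({"z"}, {"u"}, set()),
    "r3": ({"v"}, {"w"}, set()),
    "r4": ({"w"}, {"x"}, set()),
    "r5": ({"x"}, set(), {"y", "z"}),
}

def models(I, rule):
    h, bp, bn = rule
    return bool(h & I) if (bp <= I and not (bn & I)) else True

def sat(R, I):
    return {name for name, r in R.items() if models(I, r)}

def rsat(R, J, I):
    # A rule is "trivially satisfied" when it is not in the reduct wrt. I.
    return {name for name, r in R.items() if models(J, r) or (r[2] & I)}

I = {"v", "w", "x"}
print(sorted(sat(P, I)))          # all five rules are satisfied by I
print(sorted(rsat(P, set(), I)))  # r5 is missing: the empty set does not
                                  # satisfy the reduct, so it cannot witness
                                  # non-minimality of I
```

This matches the intended role of certificates: a subset $J \subset I$ only endangers minimality of $I$ if it satisfies every rule of the reduct.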
The rationale behind $e_n(M)$ is to yield those extensions of the interpretation $A_M$ stored in the assignment $M$ of a $T$-interpretation $\theta = (n, M, C)$ to an interpretation $I$ over $A(n)$ (i.e., over all atoms occurring in bags of $T_n$), such that the rules $R_M$ plus all rules in $R(n)$ (i.e., all rules occurring in bags of $T_n$ but not in $\beta(n)$) are satisfied by $I$. A similar idea is followed by $r_n(M, I)$, which additionally takes the concept of the reduct into account. We are now prepared to define the mapping $E(\cdot)$, and we shall see that for certain $T$-interpretations $\theta$, $E(\theta) \subseteq AS(R)$.

**Definition 3.4** For a $T$-interpretation $\theta = (n, M, C)$, let $E(\theta) = \{ I \mid I \in e_n(M) \text{ and } C \text{ is valid wrt. } I \text{ in } \theta \}$.

**Definition 3.5** A $T$-interpretation $(n, M, C)$ is called a root model for $T$ iff $n = rt$ (the root of $T$), $R_M = R_{\beta(rt)}$ and, for each $N \in C$, $R_N \subset R_{\beta(rt)}$.

**Theorem 3.6** Let $\Theta$ be the set of all root models for $T$. Then, $AS(R) = \bigcup_{\theta \in \Theta} E(\theta)$.

**Proof.** We only show the $\subseteq$-direction; the $\supseteq$-direction is proved analogously. Let $I \in AS(R)$ and let $\theta = (rt, M, C)$ with $M = (I \cap A_{\beta(rt)}) \cup R_{\beta(rt)}$ and $C = \{ N \subseteq \beta(rt) \mid \exists J \in r_{rt}(N, I) \text{ s.t. } J \subset I \}$. Note that $C$ is thus valid wrt. $I$ in $\theta$. It remains to show (i) $I \in e_{rt}(M)$ and (ii) $\theta \in \Theta$. (i) holds since $R_M \cup R(rt) = R_{\beta(rt)} \cup R(rt) = R$, and $I \models R$ by the assumption $I \in AS(R)$. For (ii), we have $R_M = R_{\beta(rt)}$ by definition. We show that for each $N \in C$, $R_N \subset R_{\beta(rt)}$ holds. Suppose this is not the case, i.e., let $N \in C$, such that $R_N = R_{\beta(rt)}$.
By definition of $r_{rt}(N, I)$ and of $C$, there exists a $J \subset I$ such that for each $r \in R = R_{\beta(rt)} \cup R(rt)$, either $J \models r$ or $B^-(r) \cap I \neq \emptyset$. Hence, $J \models R^I$, a contradiction to $I \in AS(R)$.

### 3.2 Tree models

Theorem 3.6 tells us that tree interpretations $\theta$ which satisfy $E(\theta) \neq \emptyset$ are of particular interest.

**Definition 3.7** A $T$-interpretation $\theta$ is called a tree model of $T$ ($T$-model, for short) iff $E(\theta) \neq \emptyset$. A $T$-model that is also a root model for $T$ is called a $T$-root-model.

For leaf nodes $n$, tree models can be determined as follows. For every $M \subseteq \beta(n)$, we either have $e_n(M) = \{A_M\}$ in case $R_M = \{ r \in R_{\beta(n)} \mid A_M \models_{A(n)} r \}$, or $e_n(M) = \emptyset$ otherwise. Hence, to compute all $T$-models for a leaf node $n$, one considers each $A_M \subseteq A(n)$ and determines $R_M = \{ r \in R_{\beta(n)} \mid A_M \models_{A(n)} r \}$; then $A_M \cup R_M$ yields the assignment $M$ of a $T$-model $(n, M, C)$. Certificate $C$ is given by all $J \subset A_M$ together with the rules $r \in R_{\beta(n)}$ for which either $J \models_{A(n)} r$ or $B^-(r) \cap A_M \neq \emptyset$ holds.

**Example 3.8** Take our example tree decomposition in Figure 1 and consider the leaf node $n_8$. We have $\beta(n_8) = \{ u, r_1, r_2 \}$. Recall that $r_1 = u \leftarrow v, y$ and $r_2 = z \leftarrow u$. We first set $u$ to true. This satisfies $r_1$, i.e., $\{u\} \models_{\{u\}} r_1$. For the corresponding certificate, there is only one possibility: we set $u$ to false and observe that this satisfies $r_2$. Hence, $(n_8, \{u, r_1\}, \{\{r_2\}\})$ is a $T$-model. Another $T$-model is $(n_8, \{r_2\}, \emptyset)$, and these are the only $T$-models for $n_8$.

We next define a relation $\prec_T$ between $T$-interpretations. The concrete definition depends on the node type.
We first give the definition for the removal and introduction nodes.

**Definition 3.9** For $T$-interpretations $\theta = (n, M, C)$ and $\theta' = (n', M', C')$, we have $\theta' \prec_T \theta$ iff $n$ has the single child $n'$, and (depending on the node type of $n$) the conditions depicted in the table of Figure 2 are fulfilled.

**Example 3.10** Recall the $T$-model $\theta_8 = (n_8, \{u, r_1\}, \{\{r_2\}\})$. To obtain $T$-models for $n_7$ (which is a (u-AR) node) from $\theta_8$, we have to remove all occurrences of $u$ in $\theta_8$, i.e., we get $\theta_8 \prec_T (n_7, \{r_1\}, \{\{r_2\}\}) = \theta_7$. For node $n_6$, which is a (z-AI) node, there are two options. First, we set the new atom $z$ to true, which yields the assignment $\{z, r_1, r_2\} = \{r_1\} \dotplus_{n_6} z$ (for the definition of operators such as $\dotplus$, see Figure 2). The certificate consists of $N_1 = \{r_1\} \dot\times_{n_6} z = \{r_1\}$, $N_2 = \{r_2\} \dotplus_{n_6} z = \{z, r_2\}$ and $N_3 = \{r_2\} \dot\times_{n_6} z = \{r_2\}$. Hence, $\theta_7 \prec_T (n_6, \{z, r_1, r_2\}, \{N_1, N_2, N_3\}) = \theta_6'$ (also note here that $R(n_6) = \emptyset$). Second, we set the new atom $z$ to false, which yields $\theta_6'' = (n_6, \{r_1\}, \{N_3\})$. For the next node $n_5$, which is of type (r_2-RR), only $\theta_6'$ plays a role, since its assignment contains $r_2$; we obtain $\theta_6' \prec_T (n_5, \{z, r_1\}, \{\{z\}, \emptyset\}) = \theta_5$ (the set $\{r_1\}$ from the certificate of $\theta_6'$ drops out, since it does not contain $r_2$). Finally, we use $\theta_5$ to compute $T$-models of $n_4$, an (r_5-RI) node. We observe that $\{z, r_1, r_5\} = \{z, r_1\} \dotplus_{n_4} r_5$, since $r_5 = x \leftarrow \neg y, \neg z$ contains $z$ negated. For the same reason, $r_5$ is added to all sets of the certificate. We obtain $\theta_5 \prec_T (n_4, \{z, r_1, r_5\}, \{\{z, r_5\}, \{r_5\}\}) = \theta_4$. We refer already to Figure 4 (which is explained in detail later) to follow this sequence of $T$-models.

For branch nodes, we partially extend (with a slight abuse of notation) $\prec_T$ to a ternary relation as follows.
**Definition 3.11** For $T$-interpretations $\theta = (n, M, C)$, $\theta_1 = (n_1, M_1, C_1)$, $\theta_2 = (n_2, M_2, C_2)$, we have $\theta_1, \theta_2 \prec_T \theta$ iff the following conditions hold: (1) $n_1$ and $n_2$ are the two children of $n$; (2) $A_{M_1} = A_{M_2}$ and $M_1 \cup M_2 = M$; (3) $C$ is given by the set $(C_1 \bowtie C_2) \cup (C_1 \bowtie \{M_2\}) \cup (\{M_1\} \bowtie C_2)$, where $X \bowtie Y = \{ N_1 \cup N_2 \mid N_1 \in X, N_2 \in Y, A_{N_1} = A_{N_2} \}$.

**Example 3.12** Consider $\theta' = (n_{11}, \{w, r_5\}, \{\emptyset, \{w\}\})$ and $\theta'' = (n_{12}, \{w\}, \{\emptyset, \{r_1\}\})$. Both are $T$-models. We determine a $\theta$ for branch node $n_{10}$ such that $\theta', \theta'' \prec_T \theta$. Such a $\theta$ exists (since $w$ is true in the assignment of both $\theta'$ and $\theta''$) and is of the form $(n_{10}, \{w, r_5\}, C)$ with $C = \{\emptyset, \{w\}, \{r_1\}\}$, obtained as follows: $\emptyset = \emptyset \cup \emptyset$ and $\{r_1\} = \emptyset \cup \{r_1\}$ stem from joining the two certificates, while $\{w\} = \{w\} \cup \{w\}$ combines the certificate $\{w\}$ of $\theta'$ with the assignment of $\theta''$; all other pairs disagree on the truth value of $w$.

The following lemma is central.

**Lemma 3.13** Let $\theta = (n, M, C)$ be a $T$-interpretation for a non-leaf node $n$. If $n$ is of type (RR), (RI), (AR), or of type (a-AI) with $a \notin M$, then $E(\theta) = \bigcup_{\theta' \prec_T \theta} E(\theta')$. If $n$ is of type (a-AI) and $a \in M$, then $E(\theta) = \bigcup_{\theta' \prec_T \theta} \{ I \cup \{a\} \mid I \in E(\theta') \}$. If $n$ is a branch node, then $E(\theta) = \bigcup_{\theta_1, \theta_2 \prec_T \theta} \{ I_1 \cup I_2 \mid I_1 \in E(\theta_1), I_2 \in E(\theta_2) \}$.

**Proof (Sketch).** Due to space reasons, we only show the case of an (r-RR) node here; the other cases are similar. In what follows, let $n'$ be the child of $n$ and $M' = M \cup \{r\}$. First, note that $I \in e_n(M)$ iff $I \in e_{n'}(M')$.
Indeed, $R_M \cup R(n) = (R_{M'} \setminus \{r\}) \cup (R(n') \cup \{r\}) = R_{M'} \cup R(n')$ and $\text{SAT}_n(I) = \text{SAT}_{n'}(I)$. Similarly, one can show that $J \in r_n(M, I)$ iff $J \in r_{n'}(M', I)$, for any $I, J \subseteq A(n)$.

Let $I \in E(\theta)$ and $\theta' = (n', M', C_1 \cup C_2)$, where $C_1 = \{ N \cup \{r\} \mid N \in C \}$ and $C_2 = \{ N \subseteq \beta(n') \mid r \notin N,\ \exists J \in r_{n'}(N, I) \text{ s.t. } J \subset I \}$. In fact, $\theta' \prec_T \theta$ holds. We show $I \in E(\theta')$. By the observation above, we have $I \in e_{n'}(M')$. To show that $C_1 \cup C_2$ is valid wrt. $I$ in $\theta'$, suppose this is not the case, i.e., there exists an $N' \subseteq \beta(n')$ which violates the validity condition: either $N' \in C_1 \cup C_2$ and no $J \in r_{n'}(N', I)$ with $J \subset I$ exists, or $N' \notin C_1 \cup C_2$ and such a $J$ exists. By definition of $C_2$, $r \in N'$ has to hold. We know $J \in r_{n'}(N', I)$ iff $J \in r_n(N' \setminus \{r\}, I)$. But then, for $N = N' \setminus \{r\}$, either $N \in C$ without such a witness $J$, or $N \notin C$ with such a witness. This yields that $C$ is not valid wrt. $I$ in $\theta$, a contradiction to $I \in E(\theta)$.

Let $I \in E(\theta')$ for a $\theta' = (n', M', C')$, such that $\theta' \prec_T \theta$. By definition of $\prec_T$, $M'$ is of the form $M \cup \{r\}$. Using our previous observation, we get $I \in e_n(M)$. Since $I \in E(\theta')$, $C'$ is valid wrt. $I$ in $\theta'$, and we know $C = \{ N \setminus \{r\} \mid N \in C', r \in N \}$, since $\theta' \prec_T \theta$. We show that $C$ is valid wrt. $I$ in $\theta$, which will imply $I \in E(\theta)$. Again, suppose $C$ is not valid wrt. $I$ in $\theta$, i.e.,
there exists an $N \subseteq \beta(n)$ which violates the validity condition: either $N \in C$ and no $J \in r_n(N, I)$ with $J \subset I$ exists, or $N \notin C$ and such a $J$ exists. Since $J \in r_n(N, I)$ iff $J \in r_{n'}(N \cup \{r\}, I)$, the set $N \cup \{r\}$ then violates the validity condition for $C'$, in contradiction to the assumption that $C'$ is valid wrt. $I$ in $\theta'$.

**Corollary 3.14** Let $\theta$ be a $T$-interpretation for a non-leaf node. Then, $\theta$ is a $T$-model iff there exists a $T$-model $\theta'$ with $\theta' \prec_T \theta$ (resp., for branch nodes, $T$-models $\theta', \theta''$ with $\theta', \theta'' \prec_T \theta$).

**Theorem 3.15** Deciding $AS(R) \neq \emptyset$ can be done in time $O(f(w) \cdot |R|)$, where $w$ denotes the treewidth of $R$ and $f$ is a function that only depends on $w$ but not on $R$.

**Proof (Sketch).** Corollary 3.14 suggests the following algorithm: first, we compute the $T$-models of leaf nodes; then we compute all remaining $T$-models via $\prec_T$ in a bottom-up manner. As soon as we have the $T$-models for the root node, we check whether they include a root model for $T$. The effort needed for processing a leaf node, as well as for the transition from the child node(s) to the parent, only depends on the treewidth but not on $R$. Moreover, the size of $T$ is linearly bounded by the size of $R$. Hence, this algorithm has the desired time bound. The correctness of this algorithm immediately follows from Theorem 3.6: $AS(R) \neq \emptyset$ holds if and only if there exists at least one $T$-root-model.

Theorem 3.15 is the desired FPT result for the ASP consistency problem. Indeed, if the treewidth $w$ of $R$ is bounded by a constant, then $AS(R) \neq \emptyset$ can be decided in linear time.

**Example 3.16** The $T$-models for our running example are depicted in Figure 4, where we grouped them wrt. their nodes and along the tree structure of $T$.
$T$-models which contribute to the single $T$-root-model $(n_1, \{r_1, r_2\}, \{\{r_1\}\})$ are marked with "+". Following the branches and using the $T$-models marked with "+", one can see that the atoms set to true are $v$, $w$, $x$, which exactly yields the answer set of our example program $P$. For illustration, we also depict for those $T$-models $\theta$ the set $E(\theta)$ in the last column (as mentioned before, this set is not explicitly computed). Finally, $\#$ refers to a function which we define in the next section for counting answer sets.

### 3.3 Counting and Enumerating Answer Sets

The following observation is important and, together with Lemma 3.13, lays the foundation for our counting algorithm.

**Lemma 3.17** For two distinct $T$-interpretations $\theta_1 = (n, M_1, C_1)$ and $\theta_2 = (n, M_2, C_2)$, $E(\theta_1) \cap E(\theta_2) = \emptyset$ holds.

**Proof (Sketch).** Suppose to the contrary that there exists an interpretation $I \in E(\theta_1) \cap E(\theta_2)$. We show that then $\theta_1 = \theta_2$. By definition of $E(\cdot)$, $A_{M_1} = A_{M_2}$. Moreover, there exists a $K \subseteq A(n) \setminus \beta(n)$ such that $\text{SAT}_n(A_{M_1} \cup K) = R_{M_1} \cup R(n)$. By $R_{M_1} \cap R(n) = \emptyset$ and $R_{M_1} \cup R(n) = R_{M_2} \cup R(n)$, we conclude $R_{M_1} = R_{M_2}$. Thus $M_1 = M_2$. Finally, $C_1 = \{ N \subseteq \beta(n) \mid \exists J \in r_n(N, I) \text{ s.t. } J \subset I \} = C_2$. Hence, $\theta_1 = \theta_2$.

Next, we recursively define a mapping from $T$-interpretations to numbers.

**Definition 3.18** Let $\theta$ be a $T$-interpretation for node $n$.
If $\theta$ is not a $T$-model, let $\#(\theta) = 0$; otherwise let
\[
\#(\theta) = \begin{cases} 1 & \text{if } n \text{ is a leaf node} \\ \sum_{\theta' \prec_T \theta} \#(\theta') & \text{if } n \text{ has one child} \\ \sum_{(\theta', \theta'') \prec_T \theta} \#(\theta') \cdot \#(\theta'') & \text{if } n \text{ is a branch node.} \end{cases}
\]

Using Theorem 3.6 and Lemmas 3.13 and 3.17, we obtain:

**Theorem 3.19** Let $\Theta$ be the set of all root models of $T$. Then, $|\text{AS}(R)| = \sum_{\theta \in \Theta} \#(\theta)$.

Using the same algorithm as sketched in the proof of Theorem 3.15, plus keeping track of the $\#$ values for $T$-models, we immediately obtain the following result.

**Theorem 3.20** Assuming unit cost for arithmetic operations, $|\text{AS}(R)|$ can be computed in time $O(f(w) \cdot |R|)$, where $w = \text{tw}(R)$ and $f$ is a function depending on $w$ but not on $R$.

For the enumeration problem, we provide in Figure 3 an algorithm which, given a $T$-root-model $\theta = (n, M, C)$, computes the set $E(\theta)$ step by step, such that each new element of $E(\theta)$ requires only linear delay. To this end, we consider, for a given $T$-model $\theta$, all $\theta'$ such that $\theta' \prec_T \theta$ (resp. all pairs $(\theta', \theta'')$ such that $\theta', \theta'' \prec_T \theta$) as stored in an ordered list, and for each such list we use an internal pointer $p_\theta$. Function initialize resets all pointers to the first $\theta'$ (resp. to the first pair $(\theta', \theta'')$) in such a list. Function get_current($\theta$) yields the object $p_\theta$ currently refers to. Function get_next($\theta$) either moves the pointer to the next $\theta'$ (resp.
to the next pair $(\theta', \theta'')$) in the list and returns 0; or, in case the last element was already reached, it resets $p_\theta$ to the first element in the list and returns 1. Due to space restrictions, we cannot discuss the algorithm in detail. However, we note that in general one $T$-root-model may refer to multiple answer sets. Thus we have to reconstruct all such models by traversing the tree downwards, collecting all atoms set to true in some assignment, for all possible combinations of $T$-models related via $\prec_T$.

```plaintext
Function getAS(θ, I, flag)
input:  (T-interpretation θ = (n, M, C), interpretation, Boolean)
return: (interpretation, Boolean)
begin
  if n is a leaf node then return (I, flag);
  if n is a branch node then
    (θ', θ'') = get_current(θ);
    (J, flag') = getAS(θ', I, flag);
    (K, flag'') = getAS(θ'', J, flag');
  else
    θ' = (n', M', C') = get_current(θ);
    (K, flag'') = getAS(θ', I, flag);
    if n is of type (e-AR) and e ∈ M' then K = K ∪ {e};
  endif
  if flag'' then return (K, get_next(θ));
  return (K, 0);
end

Program enumerateAS
begin
  for each T-root-model θ = (rt, M, C) do
    initialize;
    repeat
      (I, flag) = getAS(θ, A_M, 1);
      output I;
    until flag;
  done
end
```

Our ASP algorithm first computes all $T$-models as sketched in the proof of Theorem 3.15 (within this computation, we already keep track of the information used later by the pointers $p_\theta$). Then the program enumerateAS iterates through all $T$-root-models and outputs the corresponding answer sets.
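The pointer discipline of get_current/get_next behaves like an odometer: the returned Boolean is a carry that tells the caller to advance its own pointer. A minimal sketch of this mechanism in isolation (Python; the list-of-choices stand-ins and class name are our own, rather than actual $T$-models):

```python
class Pointer:
    """An ordered list of choices with the get_current/get_next protocol."""
    def __init__(self, choices):
        self.choices, self.pos = choices, 0

    def get_current(self):
        return self.choices[self.pos]

    def get_next(self):
        # Advance and return 0; on wrap-around, reset and return 1 (carry).
        self.pos += 1
        if self.pos == len(self.choices):
            self.pos = 0
            return 1
        return 0

def enumerate_all(pointers):
    """Yield every combination, odometer-style, with O(#pointers) delay."""
    while True:
        yield tuple(p.get_current() for p in pointers)
        carry = 1
        for p in pointers:          # advance while the carry propagates
            carry = p.get_next()
            if not carry:
                break
        if carry:                   # every pointer wrapped: we are done
            return

ps = [Pointer(["a", "b"]), Pointer(["c"]), Pointer(["d", "e"])]
combos = list(enumerate_all(ps))
print(len(combos))  # 4 combinations: 2 * 1 * 2
```

In getAS the same carry travels up the recursion instead of along a flat list, so each output requires work bounded by the size of the tree.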
**Theorem 3.21** Program enumerateAS works in space $O(f(w) \cdot |R|)$ and outputs all elements of $\text{AS}(R)$ with delay $O(f(w) \cdot |R|)$, where $w$ denotes the treewidth of $R$ and $f$ is a function that only depends on $w$ but not on $R$.

### 4 Implementation and Results

For our implementation, we have chosen Haskell, a programming language with lazy semantics [Josephs, 1989]; thus the desired linear delay is implicit in the evaluation strategy of the language (a computation is only executed when needed). We call our prototype LAPS (lazy answer-set programming system). The performance of our straightforward implementation is unprecedented for counting, and it is very competitive at low treewidths (up to six) for enumerating the answer sets. Given its early stage of development, LAPS has a high potential for further improvements. We split the evaluation into four steps: (1) parse a disjunctive logic program and generate the data structures for our target language; (2) build the incidence graph of the program and decompose the graph using heuristic methods [Dermaku et al., 2005], providing the decomposition as a data structure for the target language; (3) merge all parts with the algorithm and compile; and (4) execute. Figure 5 summarizes the runtime behavior of LAPS (assuming the tree decomposition is already given) compared to DLV on a set of randomly generated programs.

Figure 3: Program enumerateAS.

Figure 4: The $T$-models of the tree decomposition $T$ for our running example program $P$. We abbreviate sets of atoms and rules via strings; a list of strings stands for a set of sets, e.g. $\{w, r_4, x\}$ denotes $\{\{w\}, \{r_4\}, \{x\}\}$. The $\prec_T$ relation for $\theta_j^{n'} \prec_T \theta_i^{n}$ (resp. for $(\theta_j^{n_1}, \theta_k^{n_2}) \prec_T \theta_i^{n}$) can be read off columns "$j$" and "$i$" (resp. "$(j, k)$" and "$i$") in the table of node $n$.
Figure 5: Comparison of the runtime behavior of LAPS and DLV.

The first row shows the time required to count the number of answer sets with an increasing number of answer sets (here we fixed the treewidth to 5). Clearly, DLV's runtime depends on the number of answer sets, whereas LAPS' runtime is not affected (excluding the time required to handle very large integers). The second row shows the runtime required for enumerating the answer sets (time per answer set) with increasing treewidth. Here, LAPS' runtime increases quickly with larger treewidth, whereas DLV even seems to benefit from larger treewidth. The latter effect is due to the fact that our randomly generated programs tend to have more answer sets when the treewidth increases, which, in the case of DLV, decreases the average cost per answer set. Tests using smodels instead of DLV resulted in very similar runtime behavior.

### 5 Related Work and Conclusion

Another FPT result for ASP is due to Lin and Zhao [2004], who use the number of cycles in the (directed) dependency graph as parameter. A further interesting parameter here is the number of loops [Ferraris et al., 2006] of a program. We note that programs with an unbounded number of cycles and/or loops can still have low treewidth. Consider $P_1 = \{a_i \leftarrow b_i;\ b_i \leftarrow a_i \mid 1 \leq i \leq n\}$ or $P_2 = \{a_{i+1} \leftarrow a_i;\ a_i \leftarrow a_{i+1} \mid 1 \leq i \leq n\}$. Both have treewidth 2, but the number of cycles ($P_1$), or loops ($P_2$), clearly depends on $n$. The work most closely related to ours is by Samer and Szeider [2007], where the #SAT problem in the case of bounded treewidth was solved by dynamic programming. We extend their approach to the counting problem (and also the enumeration problem) of ASP. To this end, we have to introduce sophisticated additional data structures, which ultimately allow us to distinguish between arbitrary and minimal (wrt. the reduct) models of a given program.
Two related problems are constraint satisfaction problems (CSPs) and conjunctive query (CQ) evaluation, for which, apart from treewidth, further methods based on structural decomposition have been used to construct efficient algorithms [Gottlob et al., 2000; Chekuri and Rajaraman, 2000]. These methods usually also work by a bottom-up traversal of a tree structure. As with #SAT, the data propagated up the tree structure is much simpler than in the case of ASP solving. On the other hand, the idea of postprocessing by a top-down traversal in order to compute all solutions is also present in the context of CQ evaluation. Recently, dynamic programming has also been applied to logic programming in the context of query answering over Semantic Web data [Ruckhaus et al., 2008]; in that work, dynamic programming is applied to the computation of an optimal join order for CQ evaluation over deductive databases. To summarize, we introduced in this work novel algorithms for ASP consistency, as well as for counting and enumerating answer sets. These algorithms run in linear time (resp. with linear delay) if the treewidth of the logic program is bounded by a constant. Our experiments indicate that this technique may lead to a promising alternative for evaluating ASP programs if the treewidth remains low. Since tree decompositions of the logic programs are required for the algorithm, our approach will greatly profit from any future progress in research on efficient tree-decomposition algorithms. Future research concerns investigations to improve the performance of the proposed method; this includes concepts like balanced and non-normalized tree decompositions. We also plan to study methods for parallelization, which should be easily applicable to the tree-like structure of the required computations.
Finally, we want to use lower-level languages (instead of Haskell, where we rely on the compiler to do adequate optimizations automatically) for our algorithms and perform optimizations by hand. Acknowledgement We are very grateful to Stefan Rümmele for valuable comments on a preliminary version of this paper. References
Commentary on Practical Foundations for Programming Languages (Second Edition)

Robert Harper
Carnegie Mellon University

March 27, 2019

It may be useful to many readers for me to explain the many decisions made in the organization and presentation of ideas in PFPL, and to suggest some variations and extensions that I have considered during and after writing the book. The discussion necessarily reflects my own opinions.

Contents

1 Paradigms
2 Preliminaries
3 Statics and Dynamics
4 Partiality and Totality
5 Functions First, or Not
6 The Empty and Unit Types
7 Static and Dynamic Classification
8 Inductive and Coinductive Types
9 Recursion in PCF
10 Parallelism
11 Laziness and Eagerness

1 Paradigms

The earliest attempts to systematize the study of programming languages made use of the concept of a paradigm, apparently borrowed from Thomas Kuhn's *The Structure of Scientific Revolutions*. It is common to classify languages into trendy-sounding categories such as "imperative", "functional", "declarative", "object-oriented", "concurrent", "distributed", and "probabilistic" programming. These classifications are so widespread that I am often asked to justify why I do not adhere to them. It's a matter of taxonomy versus genomics, as described in Stephen Gould's critique of cladistics (the classification of species by morphology) in his essay "What, if anything, is a zebra?" (Gould, 1983). According to Gould, there are three species of black-and-white striped horse-like animals in the world, two of which are genetically related to each other, and one of which is not (any more so than it is to any mammal). It seems that the mammalian genome encodes the propensity to develop stripes, which is expressed in disparate evolutionary contexts. From a genomic point of view, the clade of zebras can be said not to exist: there is no such thing as a zebra!
It is more important to study the genome, and the evolutionary processes that influence it and are influenced by it, than it is to classify things based on morphology. Paradigms are clades; types are genes.

2 Preliminaries

Part I, on syntax, judgments, and rules, is fundamental to the rest of the text, and is essential for a proper understanding of programming languages. Nevertheless, it is not necessary, or advisable, for the novice to master all of Part I before continuing to Part II. In fact, it might be preferable to skim Part I and begin in earnest with Part II so as to gain a practical understanding of the issues of binding and scope, the use of rules to define the statics and dynamics of a language, and the use of hypothetical and general judgments. Then one may review Part I in light of the issues that arise in a cursory study of even something so simple as a language of arithmetic and string expressions.

Having said that, the importance of the concepts in Part I cannot be overstressed. It is surprising that after decades of experience languages are still introduced that disregard or even flout basic concepts of binding and scope. A proper treatment of binding is fundamental to modularity, the sole means of controlling the complexity of large systems; it is dismissed lightly at the peril of the programmer. It is equally surprising that, in an era in which verification of program properties is finally understood as essential, languages are introduced without a proper definition. It remains common practice to pretend that "precise English" is a substitute for, or even preferable to, formal definition, despite repeated failures over decades to make the case. All that is needed for a precise definition is in Part I of the book, and exemplified in the remainder. Why avoid it?

3 Statics and Dynamics

Each language concept in PFPL is specified by its statics and its dynamics, which are linked by a safety theorem stating that they cohere.
The statics specifies which are the well-formed expressions of a language using context-sensitive formation constraints. The dynamics specifies how expressions are to be evaluated, usually by a transition system defining their step-by-step execution and when evaluation is complete. The safety theorem says that well-formed programs are either completely evaluated or are susceptible to transition, and that the result of such a transition is itself well-formed.

The statics of a language is usually an inductively defined typing judgment of the form $x_1 : \tau_1, \ldots, x_n : \tau_n \vdash e : \tau$, stating that an expression $e$ has the type $\tau$ uniformly in the variables $x_1, \ldots, x_n$ ranging over their specified types. The inductive definition takes the form of a collection of typing rules that jointly define the strongest, or most restrictive, judgment closed under those rules. The minimality requirement gives rise to the important principle of induction on typing, an instance of the general concept of rule induction applied to the given typing rules. It is of paramount importance that the typing judgment obey the structural properties of a hypothetical general judgment as defined in Part I of the book. In particular, typing must be closed under substitution, must assign the assumed type to each variable, and must be stable under adding spurious variable assumptions. Experience shows that languages that fail to validate the variable and substitution properties are inherently suspect, because they violate the mathematical concept of a variable developed since antiquity.

The dynamics of a language is usually given as a transition judgment on execution states defining the atomic steps of execution together with a specification of what are the initial and terminal states. In many cases states are simply expressions, and transition is inductively defined by a collection of rules using Plotkin’s method of structural operational semantics.
In more sophisticated settings the state may contain additional information such as the contents of memory or the presence of concurrently executing processes, but the general pattern remains the same. An SOS specification of a transition system is the universal format used throughout PFPL. Besides being fully precise and perspicuous, structural dynamics lends itself well to mechanization and verification of safety properties.

Other methods of specifying the dynamics are of course possible. An abstract machine is a transition system whose defining rules are premise-free (that is, are all axioms, and never proper inference rules). Usually it is straightforward to derive an abstract machine from a structural dynamics by introducing components of the state that account for the implicit derivation structure in the latter formulation. An evaluation dynamics consists of an inductive definition of the complete value, if any, of an expression in terms of the values of its constituent expressions and their substitution instances. An evaluation dynamics may be regarded as a characterization of multistep evaluation to a value. It can be convenient in some situations to think directly in terms of the outcome of evaluation, rather than in terms of the process of reaching it. But doing so precludes speaking about the cost, or time complexity, of a program, which may be defined as the number of steps required to reach a value.

A cost dynamics defines not only the value of an expression, but also the cost of achieving that value. It is possible to associate many different notions of cost to a program, such as the sequential or parallel time or space usage of a program. A cost dynamics is both more and less than an evaluation dynamics. Reading the cost and value as outputs, it specifies both the value and cost of an expression, but reading the cost as input, it specifies only the values of expressions that are achievable with the specified cost.
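The contrast between a structural (small-step) dynamics, an evaluation (big-step) dynamics, and a cost dynamics can be made concrete in a small sketch. The language of numbers and addition below, and all names (`Expr`, `step`, `eval`, `eval_with_cost`), are illustrative only, not PFPL's notation.

```rust
// A structural dynamics vs. an evaluation dynamics for arithmetic expressions.
#[derive(Clone, Debug, PartialEq)]
enum Expr {
    Num(i64),
    Plus(Box<Expr>, Box<Expr>),
}

// One atomic transition, if the state can step; values are terminal states.
fn step(e: &Expr) -> Option<Expr> {
    match e {
        Expr::Num(_) => None, // a value: evaluation is complete
        Expr::Plus(e1, e2) => match (e1.as_ref(), e2.as_ref()) {
            (Expr::Num(n1), Expr::Num(n2)) => Some(Expr::Num(n1 + n2)),
            (Expr::Num(_), _) => step(e2).map(|e2s| Expr::Plus(e1.clone(), Box::new(e2s))),
            _ => step(e1).map(|e1s| Expr::Plus(Box::new(e1s), e2.clone())),
        },
    }
}

// Evaluation dynamics: relate an expression directly to its value.
fn eval(e: &Expr) -> i64 {
    match e {
        Expr::Num(n) => *n,
        Expr::Plus(e1, e2) => eval(e1) + eval(e2),
    }
}

// A cost dynamics flavor: iterate `step`, counting transitions as the cost.
fn eval_with_cost(mut e: Expr) -> (i64, usize) {
    let mut cost = 0;
    while let Some(next) = step(&e) {
        e = next;
        cost += 1;
    }
    match e {
        Expr::Num(n) => (n, cost),
        _ => unreachable!("safety: well-formed programs do not get stuck"),
    }
}

fn main() {
    let e = Expr::Plus(
        Box::new(Expr::Plus(Box::new(Expr::Num(1)), Box::new(Expr::Num(2)))),
        Box::new(Expr::Num(3)),
    );
    assert_eq!(eval(&e), 6);
    assert_eq!(eval_with_cost(e), (6, 2));
}
```

Note that `eval` reports only the outcome, whereas iterating `step` also yields the number of transitions, the simplest notion of cost.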
There is no inherent reason to prefer one reading to the other.

Transition systems are sometimes called “small step” dynamics, in contrast to evaluation relations, which are then called “big step” dynamics. There is no accounting for taste, but in my opinion evaluation dynamics does not define a transition relation, but an evaluation relation, and is not comparable as a matter of size with a proper transition system. Moreover, in many cases the values related to expressions by an evaluation (or cost) dynamics are not themselves forms of expression. In such cases it is senseless to treat evaluation as a kind of transition; it is, quite plainly, just evaluation.

4 Partiality and Totality

A significant change in the second edition is that I have deferred discussion of partiality until the basics of type structure are established. The advantage of this approach is that it avoids distracting vacillations about partiality and totality in the midst of discussing fundamental issues of type structure. The disadvantage is that the deterministic dynamics is less well-motivated in the total case, and that many semantic properties of types do not carry over from the total to the partial case without significant complication (a prime example being the parametricity properties of type quantification).

For most practical purposes it is a bit unnatural to emphasize totality. Instead, it might be preferable to start out with PCF, rather than T, and omit discussion of inductive and coinductive types in favor of recursive types. The chief difficulty is that the discussion of representation independence for abstract types is no longer valid; one must weaken the results to take account of partiality, which complicates matters considerably by forcing consideration of admissible predicates and fixed point induction. Perhaps this is a situation where “white lies” are appropriate for teaching, but I chose not to fudge the details.
5 Functions First, or Not

The illustrative expression language studied in Part II provides a starting point for the systematic study of programming languages using type systems and structural operational semantics as organizing tools. Systematic study begins in Part III with (total) functions, followed by $T$, a variation on Gödel’s formalism for primitive recursive functions of higher type. This line of development allows me to get to interesting examples straightaway, but it is not the only possible route.

An alternative is to begin with products and sums, including the unit and empty types. The difficulty with this approach is that it is a rather austere starting point, and one that is not obviously motivated by the illustrative expression language, or any prior programming experience. The advantage is that it allows for the consideration of some very basic concepts in isolation. In particular, the booleans being definable as the sum of two copies of the unit type, one may consider binary decision diagrams as an application of sums and products. Bdd’s are sufficient to allow modeling of combinational logic circuits, for example, and even to generalize these to decision diagrams of arbitrary type. One can relate the formalism to typical graphical notations for bdd’s, and discuss equivalence of bdd’s, which is used for “optimization” purposes. One might even consider, in the latter event, the early introduction of fixed points so as to admit representation of digital logic circuits, such as latches and flip-flops, but this would conflict with the general plan in the second edition to separate totality from partiality, raising questions that might better be avoided at an early stage. On the other hand it is rather appealing to relate circular dependencies in a circuit to the fixed point theory of programs, and to show that this is what enables the passage from the analog to the digital domain.
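The bdd application mentioned above can be sketched with just a sum (the branch/leaf distinction) and a product (the two subtrees). The representation and the names `Bdd` and `eval_bdd` are mine, chosen for illustration; variables are indexed into an assignment slice.

```rust
// A binary decision diagram as an application of sums (the enum) and
// products (the pair of subtrees); a sketch, not the book's formalism.
enum Bdd {
    Leaf(bool),                                       // constant outcome
    Node { var: usize, lo: Box<Bdd>, hi: Box<Bdd> },  // branch on variable `var`
}

// Evaluate the diagram against an assignment of truth values to variables.
fn eval_bdd(b: &Bdd, env: &[bool]) -> bool {
    match b {
        Bdd::Leaf(v) => *v,
        Bdd::Node { var, lo, hi } => {
            if env[*var] { eval_bdd(hi, env) } else { eval_bdd(lo, env) }
        }
    }
}

fn main() {
    // The conjunction x0 AND x1 as a diagram.
    let and = Bdd::Node {
        var: 0,
        lo: Box::new(Bdd::Leaf(false)),
        hi: Box::new(Bdd::Node {
            var: 1,
            lo: Box::new(Bdd::Leaf(false)),
            hi: Box::new(Bdd::Leaf(true)),
        }),
    };
    assert!(eval_bdd(&and, &[true, true]));
    assert!(!eval_bdd(&and, &[true, false]));
}
```

Such diagrams suffice for combinational circuits; latches and flip-flops would require the fixed points discussed above.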
Without feedback, there is no state; without state, there is no computation.

6 The Empty and Unit Types

One motivation for including the empty type, \texttt{void}, in Chapter 11 is to ensure that, together with the binary sum type, \(\tau_1 + \tau_2\), all finite sums are expressible. Were finite \(n\)-ary sums to be taken as primitive, then the \texttt{void} type inevitably arises as the case of zero summands. By the type safety theorem there are no values of type \texttt{void}; it is quite literally void of elements, and may therefore be said to be empty. It follows that, in a total language, there can be no closed expressions of type \texttt{void}, and, in a partial language, that any such closed expression must diverge. Otherwise the expression would have to evaluate to a value of type \texttt{void}, and that it cannot do.

The natural elimination form for an \(n\)-ary sum is an \(n\)-ary case analysis, which provides one branch for each of the \(n\) summands. When \(n = 0\) this would be a nullary case analysis of the form \texttt{case e \{ \}}, which evaluates \(e\) and branches on each of the zero possible outcomes. Regrettably, the standard notation used for a nullary case is \texttt{abort(e)}, which plainly suggests that it would induce some form of error during evaluation. But that is not at all the case! The unhappy choice of notation confuses many readers, and it is best to avoid it in favor of the more plainly self-explanatory nullary case notation.

Because it is empty, an important use of the type \texttt{void} is to state that some expression must not return to the point at which it is evaluated. A particularly common use of \texttt{void} is in a type \(\tau \rightarrow \texttt{void}\), which classifies functions that, once applied, do not return a value to the caller (by diverging, by raising an exception, or throwing to a continuation).
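As an aside, Rust is one language in which the distinction can be observed directly: an empty enum plays the role of a genuine \texttt{void} (zero values, with a match of zero branches as the nullary case analysis), while \texttt{()} is \texttt{unit}. The sketch below is mine; the names `Void`, `absurd`, and so on are illustrative.

```rust
// An empty enum is a true void: zero constructors, hence zero values.
#[allow(dead_code)]
enum Void {}

// The nullary case analysis: zero branches, one per (nonexistent) summand.
// No error is induced; the match simply has no outcomes to branch on.
#[allow(dead_code)]
fn absurd<T>(v: Void) -> T {
    match v {}
}

// tau -> unit: returns, possibly after engendering effects along the way.
fn effect_then_unit(msg: &str) {
    println!("{msg}");
}

// tau -> void, approximately: a function that cannot return normally.
// (Rust also has a dedicated "never" return type, spelled `!`.)
#[allow(dead_code)]
fn never_returns() -> Void {
    loop {} // must diverge: there is no value to hand back
}

fn main() {
    // The unit-returning function returns; its value carries no information.
    effect_then_unit("returned with the empty tuple");
    // Void is statically uninhabited; it even occupies no space.
    assert_eq!(std::mem::size_of::<Void>(), 0);
}
```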
That type is quite different from the type \(\tau \rightarrow \texttt{unit}\), which, if it returns, returns the empty tuple (but might engender effects during execution). To be emphatic, not returning a value is different from returning an uninteresting value, yet, bizarrely, the industry standard is to confuse the two situations! It is standard practice to write \texttt{void} for what is properly called \texttt{unit}, while having no proper \texttt{void} type at all! Thus, in that notation, a function of type \(\tau \rightarrow \texttt{void}\) may well return to its caller, and there is no way to notate a function that definitely does not return. Words matter; reduction of expressiveness matters even more.

7 Static and Dynamic Classification

The importance of sum types for static classification of data cannot be overstated. They have long been ignored in seat-of-the-pants language designs, giving rise to absurdities such as “null pointers” in abstract languages (there are none), the notion of dynamic dispatch in object-oriented languages (it is a form of pattern matching, albeit without exhaustiveness or coverage checking), dynamically typed languages (see Section 14), and to ad hoc notions such as enumeration types (which are but sums of unit type).

Static classification naturally generalizes to dynamic classification, in which new classes may be generated at run-time. The same basic mechanisms extend smoothly from the static to the dynamic case, obviating the need for specialized mechanisms such as exception generation or communication channel allocation. Moreover, they provide a natural basis for ensuring confidentiality and integrity of data. Coupled with an independent type abstraction mechanism, one can create as many distinct dynamically classified types as one likes, simply by defining them all to be the one type of dynamically classified values and relying on abstraction to keep them distinct.
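The role of sums in static classification can be sketched briefly. In the sketch below (the token language and all names are mine, for illustration), an omitted summand is a compile-time error, and `Option<T>`, itself a sum of unit and `T`, replaces the null pointer.

```rust
// Sums for static classification: each summand is a statically known class.
#[derive(Debug, PartialEq)]
enum Token {
    Ident(String),
    Num(i64),
    Eof, // an "enumeration type" case: a summand of unit type
}

// Option<T> is the honest sum-typed account of a possibly absent value,
// in place of a "null pointer".
fn first_num(tokens: &[Token]) -> Option<i64> {
    for t in tokens {
        match t {
            // Omitting a summand here fails to compile, unlike a
            // forgotten run-time class check in a dynamic language.
            Token::Num(n) => return Some(*n),
            Token::Ident(_) | Token::Eof => continue,
        }
    }
    None
}

fn main() {
    let ts = vec![Token::Ident("x".into()), Token::Num(7), Token::Eof];
    assert_eq!(first_num(&ts), Some(7));
    assert_eq!(first_num(&[Token::Eof]), None);
}
```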
The distinction between dynamic and static classification sheds light on a major deficiency of object-oriented languages. Because the totality of classes must be known statically, it follows, for semantic reasons, that whole-program compilation is required, which completely defeats modular program development. This leads to an emphasis on “just in time” compilation, another word for whole-program compilation, to defer code generation until the class hierarchy is known.

8 Inductive and Coinductive Types

The distinction between inductive and coinductive types is present only in the case of total languages; in languages with partiality they are subsumed by recursive types. For this reason it may be preferable to avoid the technical complexities of inductive and coinductive types, and simply work with recursive types in FPC. It is still worthwhile to discuss generic programming, which is of independent interest, and to discuss streams, which are characterized by their behavior, rather than their structure.

Streams raise an interesting question about the distinction between partial and total languages. In the context of Chapter 15, it is not possible to write a filter function for streams that, given a predicate (a function of type nat → bool), produces the stream whose elements consist only of those elements for which the predicate evaluates to true. Intuitively, the code for filter does not know how long to wait for the next element to arrive; it can be arbitrarily far in the future of the stream, or even not exist at all if the predicate is always false! But it is in the nature of a total language to build in the “proof” that it terminates on all inputs. To implement \texttt{filter} requires an \textit{unbounded search} operation, using a fixed point construction that sacrifices guaranteed termination.
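The unbounded search inside \texttt{filter} can be exhibited in a partial language. The encoding below is mine, not the book's coinductive formulation: a stream is sketched as a stateful thunk producing the next element on demand.

```rust
// A stream, sketched as a stateful thunk producing the next element.
struct Stream<T>(Box<dyn FnMut() -> T>);

fn naturals() -> Stream<u32> {
    let mut n: u32 = 0;
    Stream(Box::new(move || {
        let k = n;
        n += 1;
        k
    }))
}

// The loop below is the unbounded search: if the predicate never again
// holds, demanding the next element diverges, which is precisely what a
// total language must rule out.
fn filter(mut s: Stream<u32>, p: impl Fn(u32) -> bool + 'static) -> Stream<u32> {
    Stream(Box::new(move || loop {
        let x = (s.0)();
        if p(x) {
            return x;
        }
    }))
}

fn main() {
    let mut evens = filter(naturals(), |x| x % 2 == 0);
    let first: Vec<u32> = (0..3).map(|_| (evens.0)()).collect();
    assert_eq!(first, vec![0, 2, 4]);
}
```

Calling `filter` with a predicate that is false from some point on yields a stream whose demand diverges; the type system of this partial language accepts it anyway, exactly the trade the text describes.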
The definition of positive type operator given in Chapter 14 can be reformulated using hypothetical judgments by maintaining a context $\Delta$ of hypotheses of the form $t_1\ \text{pos}, \ldots, t_n\ \text{pos}$, as follows:
\[
\frac{}{\Delta \vdash \text{unit}\ \text{pos}}
\qquad
\frac{}{\Delta \vdash \text{void}\ \text{pos}}
\]
\[
\frac{\Delta \vdash t_1\ \text{pos} \quad \Delta \vdash t_2\ \text{pos}}{\Delta \vdash t_1 \times t_2\ \text{pos}}
\qquad
\frac{\Delta \vdash t_1\ \text{pos} \quad \Delta \vdash t_2\ \text{pos}}{\Delta \vdash t_1 + t_2\ \text{pos}}
\qquad
\frac{\Delta \vdash t_1\ \text{type} \quad \Delta \vdash t_2\ \text{pos}}{\Delta \vdash t_1 \rightarrow t_2\ \text{pos}}
\]
The reflexivity and weakening properties of the hypothetical judgment ensure that $\Delta, t\ \text{pos} \vdash t\ \text{pos}$. But even though $\Delta \vdash \tau\ \text{pos}$ implies that $\Delta \vdash \tau\ \text{type}$, the entailment $\Delta, t\ \text{pos} \vdash t\ \text{type}$ is not valid. And this is a good thing, because this ensures that the domain of a function type be independent of the positive type variables in scope!

For a given positive type operator $t.\tau$, define $T(\sigma)$ to be the substitution instance $[\sigma/t]\tau$. The inductive and coinductive types may be represented in $\mathbf{F}$ as follows:
\begin{align*}
\mu(t.\tau) & \triangleq \forall(r.(T(r) \to r) \to r) \\
\text{fold}\{t.\tau\}(e) & \triangleq \Lambda(r)\,\lambda(a : T(r) \to r)\,a(m(e)), \text{ where} \\
m(e) & \triangleq \text{map}\{t.\tau\}(x.\,\text{rec}\{t.\tau\}\{r\}(x.\,a(x); x))(e) \\
\text{rec}\{t.\tau\}\{\rho\}(x.e'; e) & \triangleq e[\rho](\lambda(x : T(\rho))\,e') \\[1ex]
\nu(t.\tau) & \triangleq \exists(s.(s \to T(s)) \times s) \\
\text{unfold}\{t.\tau\}(e) & \triangleq \text{open } e \text{ as } s \text{ with } m \text{ in } g(m), \text{ where} \\
g(m) & \triangleq \text{map}\{t.\tau\}(x.\,\text{gen}\{t.\tau\}\{s\}(y.\,(m \cdot \text{l})(y); x))((m \cdot \text{l})(m \cdot \text{r})) \\
\text{gen}\{t.\tau\}\{\sigma\}(x.e'; e) & \triangleq \text{pack } \sigma \text{ with } \langle\lambda(x : \sigma)\,e', e\rangle \text{ as } \nu(t.\tau)
\end{align*}

9 Recursion in PCF

Plotkin’s language PCF is often called the *E. coli* of programming languages, the subject of countless studies of language concepts. The formulation given here is intended to make clear the connections with $T$, and the classical treatment of recursive (that is, computable) functions. The relation to $T$ is important for explaining the significance of totality compared to partiality in a programming language.

I have chosen to formulate PCF in Plotkin’s style, with a general fixed point operator, but, contrary to Plotkin, to admit both an eager and a lazy dynamics. This choice is not comfortable, because a general fixed point is a lazy concept in that the recursively defined variable ranges over computations, not merely values. I have wavered over this decision, but in the end can find no better way to expose the important ramifications of general recursion. Separating self-reference from functions emphasizes its generality, and helps refute a widespread misconception about recursive functions: no stack is required to implement self-reference. (Consider, for example, that a flip-flop is a recursive network of logic gates with no stack involved.) Stacks are used to manage control flow, regardless of whether function calls or recursion are involved; see Chapter 28 for a detailed account. The need for a stack to implement function calls themselves is a matter of re-entrancy, not recursion. Each application of a function instantiates its parameters afresh, and some means is required to ensure that these instances of the function are not confused when more than one application is active simultaneously. It is re-entrancy that gives rise to the need for additional storage, not recursion itself.

10 Parallelism

There is an ambiguity in the cost semantics for parallelism regarding the distinction between an expression that is yet to be evaluated, but which happens to be a value, and an expression that has already been evaluated.
In a by-value language variables stand for already-evaluated expressions, and hence, when encountered during evaluation, should not incur additional cost. On the other hand certain expressions, such as $\langle e_1, e_2 \rangle$, where $e_1$ and $e_2$ are values, should incur at least the cost of allocation. Never charging for value pairs is wrong, and charging for them each time they are encountered is also wrong (because substitution replicates their occurrences). The solution is to introduce a modal distinction between computations and values in the statics, and to impose costs only on computations. See the supplemental notes on call-by-value and parallelism for details (Harper, 2018b,c,d).

11 Laziness and Eagerness

The distinction between laziness and eagerness is best explained in terms of the semantics of variables: do they range over all expressions of their type, or all values of their type? When all expressions have a unique value, the distinction seldom matters. But for languages that admit divergent, or otherwise undefined, expressions, the distinction is important, and affects the character of the language significantly. Whether one is more “mathematical” than the other, as is often claimed, is a matter of debate, because analogies to mathematics break down in the presence of partiality. Moreover, considerations of efficiency are fundamental for computation, but play no role in mathematics. Put another way, programs are not just proofs; they have a cost, and it matters. One might even say that cost is what distinguishes mathematics from computation; it is not to be expected that the two disparate subjects coincide.

The distinction between laziness and eagerness is not wholly a matter of cost; it is also about expressiveness. In a lazy language all expressions—even the divergent ones—are considered to be values of their type. This makes it natural to consider general recursion (fixed points) at every type, even when it diverges.
Pairing is lazy (neither component is evaluated until it is used), and functions are “call-by-name”, because even a divergent argument is a value. Neither of these is particularly problematic, but for sums the inclusion of divergence is disastrous. For example, there are not two, but three, booleans: true, false, and divergence; even worse, they are not distinguishable by case analysis because it is impossible to detect divergence. Absurdly, the “empty” type has one element, namely divergence, and the “unit” type has two elements. Worse, besides divergence, sums also include injections of divergent values from the summands. It is possible to introduce “strict sums” that rule out injected divergence, but divergence itself is still present. Similarly, the so-called natural numbers, viewed as a recursive sum type, not only include divergence, but also an unavoidable infinite stack of successors. One may again consider a strict version of natural numbers, which rules out infinity, but in any case includes divergence. Consequently, reasoning by mathematical induction is never valid for a lazy language. What could be less mathematical than to lack a type of natural numbers?

The problem is the very idea of a lazy language, which imposes a ruinous semantics on types. An eager semantics supports the expected semantics of types (for example, the natural numbers are the natural numbers) and, moreover, admits definition of the lazy forms of these types using suspension types (Chapter 36). A value of a suspension type is either a value of the underlying type, or a computation of such a value that may diverge. There being no representation of the eager types in a lazy language, it follows that eager languages are strictly more expressive than lazy languages. Lazy languages are a mistake. [See also Harper (2018b).]
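The remark about suspension types can be illustrated: in an eager host language the lazy, memoizing form of a value is definable as an ordinary type. The sketch below is mine, assumes a single-threaded setting, and uses the names `Susp`, `Lazy`, and `force` for illustration; it is not Chapter 36's notation.

```rust
use std::cell::RefCell;

// A suspension: either a delayed computation or its memoized value.
enum Susp<T> {
    Delayed(Box<dyn FnOnce() -> T>),
    Forced(T),
}

struct Lazy<T>(RefCell<Option<Susp<T>>>);

impl<T: Clone> Lazy<T> {
    fn new(f: impl FnOnce() -> T + 'static) -> Self {
        Lazy(RefCell::new(Some(Susp::Delayed(Box::new(f)))))
    }

    // Forcing runs the computation at most once; thereafter the value
    // is memoized. The computation itself may, of course, diverge.
    fn force(&self) -> T {
        // Take the suspension out; a re-entrant force (a cyclic
        // suspension) would panic here rather than loop forever.
        let s = self.0.borrow_mut().take().unwrap();
        let v = match s {
            Susp::Delayed(f) => f(),
            Susp::Forced(v) => v,
        };
        *self.0.borrow_mut() = Some(Susp::Forced(v.clone()));
        v
    }
}

fn main() {
    let l = Lazy::new(|| {
        println!("computing");
        6 * 7
    });
    assert_eq!(l.force(), 42); // runs the computation
    assert_eq!(l.force(), 42); // memoized: no recomputation
}
```

The point of the construction is directional: the eager language recovers laziness on demand, per type, whereas a lazy language has no way to recover the eager types.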
12 Self-Reference and Suspensions

Suspension types are naturally self-referential, because of the indirection for memoization, so it makes the most sense to make them recursive. General recursion can be implemented using such suspensions, but it makes little sense to insist that recursion incur the overhead of memoization. Instead, one may consider other types whose elements are naturally self-referential, and arrange for that specifically. For example, one may consider that mutual recursion among functions is a primitive notion by introducing the \textit{$n$-ary $\lambda$-abstraction}
\[
\lambda^n\{\tau_1; \ldots; \tau_n\}(x, y_1.e_1; \ldots; x, y_n.e_n).
\]
The $n$-ary $\lambda$ defines $n$ mutually recursive functions, each abstracted on them all collectively as the first argument, $x$, of each. (Typically $x$ is spelled this or self, but it is a bad idea to attempt to evade $\alpha$-conversion in this way.) The $n$-ary $\lambda$ evaluates to an $n$-tuple of ordinary $\lambda$-abstractions, as suggested by the following statics rule, which gives it a product-of-functions type:
\[
\frac{\Gamma, x : \langle \tau_1 \rightarrow \tau_1', \ldots, \tau_n \rightarrow \tau_n' \rangle, y_i : \tau_i \vdash e_i : \tau_i' \quad (1 \leq i \leq n)}
     {\Gamma \vdash \lambda^n\{\tau_1; \ldots; \tau_n\}(x, y_1.e_1; \ldots; x, y_n.e_n) : \langle \tau_1 \rightarrow \tau_1', \ldots, \tau_n \rightarrow \tau_n' \rangle}
\]
The dynamics implements the self-reference by unrolling:
\[
\lambda^n\{\tau_1; \ldots; \tau_n\}(x, y_1.e_1; \ldots; x, y_n.e_n) \rightarrow \langle \lambda(y_1 : \tau_1)\,e_1', \ldots, \lambda(y_n : \tau_n)\,e_n' \rangle,
\]
wherein each $e_i'$ (for $1 \leq i \leq n$) is given by
\[
[\lambda^n\{\tau_1; \ldots; \tau_n\}(x, y_1.e_1; \ldots; x, y_n.e_n)/x]e_i.
\]
The foregoing unrolling dynamics “cheats” by substituting the $n$-ary abstraction for a variable, even though it is not a value.
But it is tantamount to a value in that it evaluates to one (in one step). There is no harm in extending substitution to \textit{valuable} expressions (ones that have a value); it is the \textit{non-valuable} expressions, which may not terminate, that cause trouble. If you really wish to avoid this generalization, then you may instead regard the $n$-ary $\lambda$-abstraction to be a value of a tuple-of-functions type whose elimination form projects a function from the tuple and applies it to an argument. By deeming a tuple-of-$\lambda$’s a value, the minor cheat in the dynamics of unrolling is evaded. A specialized tuple-of-functions type is sometimes called an \textit{object type}, whose self-referential components are then called \textit{methods}.

13 Recursive Types

Recursive types (with no positivity restriction) only make sense for a language with partiality, because they allow the definition of self-referential expressions that diverge when evaluated. Indeed, a single recursive type is sufficient to interpret the $\lambda$-calculus, the original universal model of computation. In the presence of partiality the distinction between eager and lazy evaluation is semantically significant, and the interplay between these strategies is central to the representation of inductive and coinductive types as recursive types.

Recursive types are the means by which the concept of a data structure can be given full expression. Too often students are taught that data structures are always defined structurally, and may be depicted using “box and pointer” diagrams. But this excludes important structures involving laziness or functions, which are not representable in such a simplistic manner. For this reason alone it is useful to consider the general case, with the isomorphism between a recursive type and its unfolding playing the role of an “abstract pointer” that is more general than a memory address.
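The point about box-and-pointer diagrams can be illustrated with a recursive type whose unfolding contains a function, so that the "pointer" to the rest of the structure is a closure rather than a memory address. The encoding and the names `Stream`, `from`, and `take` are mine, chosen for illustration.

```rust
// A structure that defies "box and pointer" depiction: the tail is a
// function, computed only when the recursive type is unfolded.
struct Stream {
    head: u32,
    tail: Box<dyn FnOnce() -> Stream>, // the "abstract pointer" is a closure
}

// A self-referential value of the recursive type: the stream of naturals
// starting at n.
fn from(n: u32) -> Stream {
    Stream {
        head: n,
        tail: Box::new(move || from(n + 1)),
    }
}

// Observe finitely many elements by unfolding k times.
fn take(mut s: Stream, k: usize) -> Vec<u32> {
    let mut out = Vec::new();
    for _ in 0..k {
        out.push(s.head);
        s = (s.tail)(); // one unfolding of the recursive type
    }
    out
}

fn main() {
    assert_eq!(take(from(5), 3), vec![5, 6, 7]);
}
```

Drawn as boxes and pointers the tail would be a box containing code and a captured environment, which is to say: not a box-and-pointer diagram at all.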
Recursive types are essential for many programming concepts, most obviously the definition of a self-referential value such as a function defined in terms of itself. Less obviously, recursive types are crucial for understanding the (misnamed) concept of dynamic typing \textit{(q.v.)}, for the representation of coroutines using continuations, and for a full treatment of dynamic dispatch (again, via self-reference). Although deceptively simple in the formalism, recursive types are a deep and fundamental concept that should be given full treatment in any account of programming languages.

As a technical matter, the statics for the fold operation of recursive types requires that $\text{rec}\ t\ \text{is}\ \tau\ \text{type}$, but for the premise to be properly formulated demands that $[\text{rec}\ t\ \text{is}\ \tau/t]\tau\ \text{type}$. But if the former holds, then, by induction on the formation rules for types, it follows that $t\ \text{type} \vdash \tau\ \text{type}$, from which the formation of the unrolling follows by substitution. Alternatively the formation rule for recursive types may be formulated using the hypothetical judgment
\[
\text{rec}\ t\ \text{is}\ \tau\ \text{type} \vdash [\text{rec}\ t\ \text{is}\ \tau/t]\tau\ \text{type},
\]
from which the unconditional formation of the unrolling follows. The bound variable $t$ in $\text{rec}\ t\ \text{is}\ \tau$ is not so much a variable, but a “back pointer” to the recursive type itself. The alternative formulation captures this interpretation more accurately than does requiring that $t.\tau$ be a well-formed type operator.

14 Dynamic Typing

The whole of Part IX is devoted to dynamic typing. The first point is that what is dynamic are not types, but rather classes, of data. The second, which is much the same, is that such languages are not untyped, but rather un-i-typed (to use Dana Scott’s neat turn of phrase), the one type in question being a recursive sum.
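Concretely, the "one type" can be sketched as a recursive sum. The choice of summands below (nil, numbers, pairs) is mine, for illustration; the names are not any particular Lisp's.

```rust
// The "one type" of a dynamic language, as a recursive sum.
#[derive(Debug, PartialEq)]
enum Dyn {
    Nil,
    Num(i64),
    Cons(Box<Dyn>, Box<Dyn>),
}

// Every operation is a case analysis on Dyn. The run-time "class checks"
// of a dynamic language are exactly these matches; the difference is that
// here the compiler checks the analysis for exhaustiveness.
fn sum_list(d: &Dyn) -> Result<i64, String> {
    match d {
        Dyn::Nil => Ok(0),
        Dyn::Cons(hd, tl) => match hd.as_ref() {
            Dyn::Num(n) => Ok(n + sum_list(tl)?),
            other => Err(format!("not a number: {other:?}")),
        },
        Dyn::Num(_) => Err("not a list".to_string()),
    }
}

fn main() {
    let l = Dyn::Cons(
        Box::new(Dyn::Num(1)),
        Box::new(Dyn::Cons(Box::new(Dyn::Num(2)), Box::new(Dyn::Nil))),
    );
    assert_eq!(sum_list(&l), Ok(3));
}
```

The error cases in `sum_list` are the "wrong class" failures of a dynamic language made explicit in the result type.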
Understanding this is essential to resolving the never-ending “debate” about typed vs. untyped languages. From this perspective a dynamic language is a static language that confines its attention to this one distinguished type. Yet in practice few dynamic languages adhere to this dogma. For example, such languages invariably admit multiple arguments to functions, and often admit multiple results as well. But this is none other than a concession to static typing! The domains and ranges of such functions are finite products, or perhaps lists, of the distinguished type, rather than single values of that type. So, multiple types are essential, but why stop with just these? Dynamic typing arose in Lisp long before there was any appreciation for the type structure of programming languages. The appeal of Lisp is powerful, especially if your only other experience is with imperative, especially object-oriented, programming languages. But Lisp’s charms are not often separated from their historical context. Though once true, it is true no longer that Lisp is the sole opponent of all other programming languages. And so it remains unrecognized that Lisp, and its descendants, are statically typed languages after all, despite appearances and vigorous protestation by advocates. After decades of experience, (recursive) sum types are unfamiliar, perhaps even unimagined, by the advocates of dynamic languages. Absent a proper understanding of recursive sums, dynamic typing seems appealing—precisely because it offers sums in the implicit form of run-time class checks. But dynamic typing robs the programmer of the all-important exhaustiveness check afforded by case analysis. Another reason for preferring dynamic languages is their immediacy. It is infamous that every Lisp program does something, as long as the parentheses match. 
It is consequently very easy to write code that does something vaguely right, and to adopt a development strategy that amounts to debugging a blank screen: get something sort-of going, then hack it into existence. Richly typed languages, on the other hand, impose the discipline that your code make minimal type sense, and demand that you consider the types of values in play. Opponents argue that the type system “gets in their way”, but experience shows that this is hardly the case. Innovations such as polymorphic type inference make a mockery of the supposed inconveniences of a rich type discipline.

15 Subtyping

Structural subtyping is, according to Pfenning (2008), based on the principle that a value should have the same types as its $\eta$-expansion. For example, the subtyping principle for function types may be justified by considering any expression $e$ of type $\tau_1 \rightarrow \tau_2$. Under the assumptions that $\tau_1' <: \tau_1$ and $\tau_2 <: \tau_2'$, then $e$ should also have the type $\tau_1' \rightarrow \tau_2'$, because its expansion, $\lambda (x_1 : \tau_1')\, e(x_1)$, would, given those assumptions. Notice that if $e$ were a $\lambda$-abstraction, the expansion would not be needed to obtain the weaker typing, but when $e$ is a variable, for example, the expansion has more types than the variable as declared, so subtyping adds some flexibility. Similarly, one may justify the $n$-ary product subtyping principle on similar grounds. Given an $n$-tuple $e$, the $m$-tuple consisting of the first $m \leq n$ projections from $e$ provides a narrower view of it.

The treatment of numeric types stretches the meaning of structural subtyping to its limits. The idea is that an integer $n$ may be “expanded” to the rational $n \div 1$, and a rational may be “expanded” to a real (say, a Cauchy sequence) representing the same rational.
Whether these count as $\eta$-expansions is questionable; it is more accurate to say that they are justified by canonical choices of inclusions that are implicit in the interpretation of subtyping. Whereas one may consider a dynamics in which values are identified with their $\eta$-expansions (so that a narrow tuple may, in fact, be wider than its type would indicate), it is less clear that this would be sensible for the numeric types—unless all numbers were to be represented as reals under the hood! But if that were the case, the inclusions would more naturally be interpreted as behavioral, rather than structural, subtypes. More precisely, the subtyping principles would better be read as isolating certain reals as rationals and certain rationals as integers according to a semantic, rather than syntactic, criterion. As such it is not always possible to determine whether a given real number is rational or integral by purely mechanical means; it is, rather, a matter of proof. In practice one gives syntactic conditions that suffice for practical situations, which is exactly what I have done in Chapter 25 in developing a syntactic behavioral type system to track the class of a value in DPCF. In truth one cannot determine the class of an arbitrary computation, but in practice one can often get by with some relatively simple syntactic conditions that provide a modicum of tracking ability. Conditionals invariably attenuate what can be statically tracked, and are especially problematic when the subtype relation lacks meets, as it does in many popular languages.

16 Dynamic Dispatch

The discussion of dynamic dispatch is meant to address one of the more prominent aspects of object-oriented programming.\footnote{The (intentional) vagueness and fluidity of the topic impedes a definitive analysis. Its one stable characteristic is that its proponents insist that object-oriented programming is never what it, in fact, is.
Being too profound for mortals, we'll have to defer a full treatment to the afterlife; certainly hell is object-oriented, but probably heaven is better off. Meanwhile, some remarks about the actual world may be helpful.} A great deal of emphasis is placed on the idea of an “object” as a collection of “methods” that act on shared “instance” data. Many a claim hinges on this supposedly distinguishing characteristic of object-oriented programming. For example, it is common to set up a false comparison to “abstract types”, which are said to conflict with object-oriented programming. Yet, as is shown in Chapter 26, dynamic dispatch is a particular use of data abstraction, to which it therefore cannot be opposed. The main point of Chapter 26 is to make clear that the “method-oriented” and “class-oriented” organizations of code are isomorphic, and hence interchangeable in all contexts. The central idea, a version of which also appears in Abelson and Sussman’s Structure and Interpretation of Computer Programs, is what I call the dispatch matrix, which determines the behavior of each method on each class of instance. The dispatch matrix is symmetric in that it favors neither the rows nor the columns. One may, if one wishes, take a row-oriented or a column-oriented view of the dispatch matrix, according to taste, and interchange one for the other at will, without loss or damage. There is nothing to the choice; it reflects the fundamental duality between sums and products, a duality that is equally present in matrix algebra itself.
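A concrete rendering of the two orientations, as a Python sketch (the classes and methods here are hypothetical):

```python
import math

# The dispatch matrix: one entry per (class, method) pair.
matrix = {
    ("circle", "area"):  lambda r: math.pi * r * r,
    ("circle", "perim"): lambda r: 2 * math.pi * r,
    ("square", "area"):  lambda s: s * s,
    ("square", "perim"): lambda s: 4 * s,
}

# Column-oriented (method-oriented) view: each method dispatches on
# the class of its instance.
def method(name):
    return lambda cls, datum: matrix[(cls, name)](datum)

area = method("area")

# Row-oriented (class-oriented) view: each instance tuples up the
# behaviors of all its methods.
def new_instance(cls, datum):
    return {name: matrix[(cls, name)](datum)
            for (c, name) in matrix if c == cls}
```

Both views are read off from the same matrix, and neither is privileged.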
In particular, the isomorphism $$\prod_{c \in C} \prod_{d \in D} (\tau^c \to \rho_d) \cong \left( \sum_{c \in C} \tau^c \right) \to \left( \prod_{d \in D} \rho_d \right)$$ states that a dispatch matrix determines and is determined by a mapping from the “row space” of instances of the classes to the “column space” of behaviors of the methods, much as in linear algebra. [See also Harper (2018a).]

17 Symbols and References

One of the innovations of PFPL is the consolidation of a number of seemingly disparate notions by breaking out the concept of symbols from use in the semantics of various language constructs. Symbols are used in their own right as atoms whose identity can be compared to a given atom. They are used as fluid-bound identifiers and as assignables by a finite mapping associating values to identifiers. They are used as dynamically generated classifiers to enforce confidentiality and integrity of data in a program. And they are used as channels for synchronized message-passing in concurrent programs. The commonality is simply this: symbols support open-ended indexing of families of related operators. For example, there is one get and set operator for each assignable, and new assignables may be allocated at will, implicitly giving rise to a new pair of get and set operators. The situation is similar for each of the language concepts just mentioned. Symbols are central to the uniform treatment of references throughout the book. Rather than being tied to mutation, the concept of a reference appears in each of the applications of symbols mentioned above. In each case the primitive operations of a construct are indexed by statically known (explicitly given) symbols. For example, in the case of mutation, there are get and set operations for each assignable, and in the case of dynamic classification, the introductory form is indexed by the class name, and the elimination form is similarly indexed by a particular class.
The purpose of references is to extend these operations to situations in which the relevant symbol is not statically apparent, but is instead computed at run-time. So, there is a getref operation that takes as argument a reference to an assignable; once the referent is known, the getref steps to the get for that assignable. Classes, channels, and so forth are handled similarly. References have nothing to do with mutation. Rather, references are a means of deferring the determination of the index of an operation until execution time. Thus, symbols are not values, but references to them are. The dynamics of constructs that involve symbols involves an explicit association of types to the active symbols whose interpretation is determined by the construct itself. To achieve this requires that the dynamics maintain the signature of active symbols, which in turn requires that some type information be made explicit in the language. This can lead to some technical complications, but overall it is preferable to omitting it, because the information cannot otherwise be recovered. Another reason to maintain the signature in the dynamics is that doing so facilitates the identification of states with processes in process calculus. For example, in the case of mutation, the signature specifies the active assignables, which are themselves modeled as processes executing concurrently with the main program, responding to get and set requests on their channel. Besides being conceptually appealing, using process notation facilitates a substructural formulation of the dynamics in which only the active parts of the state are mentioned explicitly, the remainder being “framed in” using the usual rules of process calculus.

18 Sorts of Symbols

As discussed in Section 17, symbols in PFPL are used for open-ended indexing of families of operators. As new symbols are introduced, new instances of these families become available.
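The deferral can be sketched in a few lines of Python (names hypothetical; symbols are crudely modeled as strings, eliding allocation and signatures):

```python
# The store holds the contents of the active assignables, indexed
# by symbol.
store = {}

# get/set are indexed by a *statically given* symbol.
def get(a):
    return store[a]

def set_(a, v):
    store[a] = v

class Ref:
    """A value of reference type: it packages the underlying symbol,
    deferring the choice of symbol until run-time."""
    def __init__(self, sym):
        self.sym = sym

def getref(r):      # once the referent is known, step to its get
    return get(r.sym)

def setref(r, v):   # likewise for set
    set_(r.sym, v)
```

The symbol itself is not a value; the reference wrapping it is.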
For example, if symbols are being used as names of assignables, then the introduction of a new symbol $a$ with associated type $\tau$\footnote{For example, in the case of mutation, the type associated to an assignable is the type of its associated value.} gives rise to the expression &[a], a reference to the assignable a, which is a value of the type τ ref of assignable references. The behavior of the dereferencing operations, ∗e and e₁ ∗= e₂, is determined by the behavior of their underlying operations on assignables, @a and a := e. Similar concepts, such as Lisp-like symbolic data, dynamic classification, and communication channels, are handled by similar means. The uniform treatment of operator indexing using symbols consolidates a number of otherwise disparate, but closely related, concepts, by separating the treatment of symbols per se from their application as assignables, classes, channels, and so forth. A shortcoming of their formulation in PFPL, though, is that I have only allowed for one “sort” of symbols to be active at a time. This immediately becomes problematic when it is necessary to have, say, assignables and dynamic exceptions in play at the same time. Obviously not every assignable should be construed as also being an exception name, nor every exception name as that of an assignable. What is required, in general, is to admit several disjoint signatures in use at a time, each declaring a collection of active symbols and associating a type with each of them. Thus, in the case of assignables and exceptions, one would have a signature Σasgn of assignables and a signature Σexcn of exception names, each governing its own realm independently of the other. Alternatively, one could introduce a notion of species for symbols, and associate a species, as well as a type, to each symbol in the signature. In the example the species would be asgn for assignables and excn for exception names.
To make this work the symbol allocation construct would have to be indexed by the sort of symbol being allocated so that it is added to the appropriate signature (or marked with the appropriate sort). This further suggests that symbol allocation should itself be broken out as a basic concept of syntax shared by all languages that make use of symbols. The issue of whether to consider scoped or scope-free allocation becomes one of the choice of structural congruence, the difference being the absence or presence, respectively, of Milner’s scope extrusion principle. It is up to the dynamics of a specific language to determine when, if ever, the scope of a symbol may be exited. At the least one must demand that the transition outcome be independent of the allocated symbol for the exit to be sensible. Mobility requirements in the statics ensure that the scope of a declaration may be exited whenever it ought to be in the sense of the concept under consideration.

19 Exceptions

My account of exceptions distinguishes the control mechanism from the data mechanism. In all languages that I know about these two aspects are not clearly separated, resulting in all manner of confusion. For example, most languages with exceptions have some form of “exception declaration” construct that introduces a mysterious thing called “an exception” that can be “thrown” or “raised” with an associated data value. The raised value can be “handled” by dispatching on the exception, recovering the associated value in the process. Some languages attempt to track the possibility of raising an exception in types, invariably with poor results. One reason for the failure of such methods is that it is not important to track what exceptions can be raised (which cannot be safely approximated), but rather what exceptions cannot be raised (which can be safely approximated).
Worse, many programming methodology sources discourage, or even ban, the use of exceptions in programs, an egregious error arising from not understanding the role of secrecy in an exception mechanism. As regards the control aspect of exceptions, there is no difference between exceptions implemented as non-local control transfers and exceptions implemented using sums. Using sums, the result of a computation is either a value of its intended type, or a value of the exception type that, presumably, provides some indication as to why a value of the intended type cannot be provided. The emphasis on the disjunctive meaning of sums is warranted. Many languages attempt to account for exceptional returns using absurdities such as the non-existent “null pointer” or similar devices whereby a certain range of returned values is to be regarded as having an entirely other meaning from the remaining values. Sums force the client to dispatch on whether the result is ordinary or exceptional, and there can be no ambiguity or confusion, nor can the distinction be elided. This is as it should be. Exceptions simply make this checking easier in cases where the “default” behavior is to propagate the exceptional value, rather than actually do anything with it, the handler being the locus of interpretation of the exceptional behavior. Admittedly most languages, astonishingly, lack sums, but this does not justify claiming that exceptions should also be ignored or repudiated! The main criticism of exceptions is the fallacious belief that one cannot tell where the exception is handled, the non-local transfer being supposedly out of the programmer’s control, or easily subverted accidentally or on purpose. Nonsense. Using dynamic classification one may ensure that the exception is a “shared secret” between the raiser and the handler that cannot be subverted.
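The shared-secret reading can be modeled directly. In Python, a locally declared exception class yields a fresh class on each call, visible only to the raiser and handler in its scope; a small sketch with hypothetical names:

```python
def find_index(pred, xs):
    """Return the index of the first element satisfying pred, or None."""
    class Found(Exception):     # a private class: code outside this
        pass                    # scope cannot name it, so no foreign
                                # handler can intercept or decode it
    try:
        for i, x in enumerate(xs):
            if pred(x):
                raise Found(i)  # the raiser packages the payload
        return None
    except Found as e:          # only this handler shares the secret
        return e.args[0]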
The raiser is responsible to ensure the integrity of the exception value (that it satisfies some agreed-upon requirement) and the handler is responsible to ensure the confidentiality (only it can decode the exception value, whose integrity may be assumed). Exceptions are shared secrets. More generally, confidentiality, and integrity, may be enforced using dynamic classification. Exceptions are dynamic classes that may mediate between the “raiser” and the “handler” of an exception. The introductory form of the class is provided only to those permitted to raise an exception of that class, and the eliminatory form only to those permitted to handle it. To all others the exception value is an indecipherable secret that passes through unscathed and uninterpreted. As this discussion makes clear, exception classes are not fluids, contrary to wide-spread belief. Although one may use fluid binding to implement the control aspect of exceptions, it has no bearing on their data aspect.

20 Modalities and Monads in Algol

As an homage to the master, the name “Modernized Algol” (Chapter 34) is chosen to rhyme with “Idealized Algol,” Reynolds’s reformulation (Reynolds, 1981) of the original, itself often described as “a considerable improvement on most of its successors.” The main shortcoming of Algol, from a modern perspective, is that its expression language was rather impoverished in most respects, perhaps to achieve the a priori goal of being stack-implementable, a valid concern in 1960. On the other hand, its main longcoming, so to say, is that it made a modal distinction between expressions and commands, which was later to be much celebrated under the rubric of “monads.” Indeed, Reynolds’s formulation of Algol features the comm type, which in fact forms a monad of unit type. In Reynolds’s case the restriction to unit-typed commands was not problematic because there, expressions are allowed to depend on the contents of the store.
In private communication Reynolds himself made clear to me that, for him, this is the essential feature of the Algol language and of Hoare-style logic for it, whereas I, by contrast, regard this as a mistake—one that becomes especially acute in the presence of concurrency, for then multiple occurrences of the same “variable” (that is, an assignable used as an expression) raise questions of how often and when the memory is accessed during evaluation. The answer matters greatly, even if memory accesses are guaranteed to be atomic. Thus, MA differs from Reynolds’s IA in that assignables are not forms of expression, which means that expression evaluation is independent of the contents of memory (but not of its domain!). **MA** stresses another point, namely that the separation of commands from expressions is that of a *modality*, and not just that of a *monad* (see Pfenning and Davies (2001) for more on this topic). The modality gives rise to a connective that has the structure of a monad, but this observation is posterior to the modal distinction on which it is constructed. Without it, there are only evaluable expressions with types of the form \( \text{cmd}(\tau) \), and nothing ever executes. Somewhere there has to be a command that executes an encapsulated command without itself being encapsulated, and it is there that the underlying modality is exposed. This shows up in the Haskell language as the device of “automatically” running an expression whose type happens to be that of the “IO monad.” This is achieved by using the derived form \[ \text{do } e \triangleq \text{bnd } x \leftarrow e; \text{ret } x, \] which is, of course, a command. Implicit in the discussion of Modernized Algol is the observation that the *Haskell language is but a dialect of Algol*. The main concepts of the Haskell language were already present in Algol in 1960, and further elucidated by Reynolds in the 1970’s.
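The monadic structure of the command type can be rendered as a state-monad sketch in Python (this captures only the monadic structure, not MA's actual modal dynamics; all names are hypothetical): a command is a function from stores to (value, store) pairs.

```python
def ret(x):
    """The trivial command: return x, leaving the store unchanged."""
    return lambda s: (x, s)

def bnd(c, f):
    """Sequencing: run c, then pass its value to f for the next command."""
    def go(s):
        x, s2 = c(s)
        return f(x)(s2)
    return go

def get(a):                       # read an assignable (explicitly!)
    return lambda s: (s[a], s)

def set_(a, v):                   # write an assignable
    return lambda s: (None, {**s, a: v})

def run(c, s):
    """Somewhere a command must be executed without being encapsulated;
    this is where the underlying modality is exposed."""
    return c(s)[0]

def do(e):
    """The derived form: do e = bnd x <- e; ret x."""
    return bnd(e, ret)
```

Note that expression evaluation (ordinary Python) never consults the store; only `run` executes commands against it.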
In particular, the so-called monadic separation of commands from expressions was present from the very beginning, as was the use of higher-order functions with a call-by-name evaluation order. The main advance of the Haskell language over Algol is the adoption of recursive sum types from ML, a powerful extension not contemplated in the original. Another significant advance is the elimination of assignables as forms of expression. The idea to make assignables act like mathematical variables was, in my opinion, an egregious historical error that is corrected in modern incarnations of the language.

21 Assignables, Variables, and Call-by-Reference

The original sin of high-level languages is the confusion of assignables with variables, allowing one to write formulas such as \( a^2 + 2a + 1 \) even when \( a \) is an assignable.\footnote{Indeed, the name “Fortran” for the venerable numerical computation language abbreviates the phrase “Formula Translator.”} To make matters worse, most common languages don’t have variables at all, but instead press assignables into service in their stead. It all seems fine, until one considers the algebraic laws governing the formulas involved. Even setting aside issues of machine arithmetic, it is questionable whether the equation \( a^2 + 2a + 1 = (a + 1)^2 \) remains valid under the assignable-as-variable convention. For example, concurrency certainly threatens their equivalence, as would exceptions arising from machine arithmetic. The strict separation of variables and assignables in Modernized Algol avoids these complications by forcing access to assignables to be made explicit before any calculation can be done with their contents. It is a good source of exercises to re-formulate a few standard language concepts using this separation. For example, most imperative languages lack any notion of variable, and must therefore regard the arguments to a procedure or function as assignables.
It is instructive to formulate this convention in Modernized Algol: the argument becomes a variable, as it should be, and the procedure body is surrounded by a declaration of an assignable whose contents is initialized to that variable, with the choice of assignable name being that given as the procedure parameter. Another exercise is to consider the age-old notion of call-by-reference: given that the argument is an assignable, one may consider restricting calls to provide an assignable (rather than an expression) of suitable type on which the body of the procedure acts directly. The assignable argument is renamed to be the call-site assignable for the duration of the call. From this point of view the standard concept of call-by-reference might better be termed call-by-renaming: it is more accurate and it even sounds similar! To be more precise, consider a type of call-by-renaming procedures, written \( \tau_1 \Rightarrow \tau_2 \), whose values are procedures \( \rho(\tau)(a.m) \), where \( a \) is an assignable symbol scoped within the command \( m \). Absent call-by-renaming, such a procedure could be considered to be short-hand for the function \[ \lambda(x: \tau) \, \text{cmd}(\text{dcl}\, a := x \text{ in } m), \] where \( x \) is a fresh variable, which allocates an assignable initialized to the argument for use within the body of the procedure. With call-by-renaming one must instead regard these procedures as a primitive notion, restrict calls to provide an assignable as argument, and define the action of a call as follows: \[ (\rho(\tau)(a.m))(b) \mapsto [a \leftrightarrow b] \, m, \] where \( a \) is chosen (by renaming of bound symbols) to be fresh. The assignable \( b \) must be within scope at the call site, and is provided to the body as argument. Notice that within the command \( m \) the assignable \( a \) is treated as such; it is not regarded as a reference, but rather accessed directly by @\( a \) and \( a := e \), as usual.
The renaming on call ensures that these commands act on the passed assignable directly, without indirection.

22 Assignable References and Mutable Objects

In Section 17 the concept of a *reference* is completely separable from that of an assignable. A reference is a means of turning a symbol—of whatever sort—into a value of reference type. It is a level of indirection in the sense that the operations acting on references simply extract the underlying symbol and perform the corresponding operation for that symbol. The available operations on references depend on the sort of the symbol. In the case of assignables the operations include `getref` and `setref`, which code for the `get` and `set` operations associated with the underlying assignable. One may also consider that references admit an *equality test* that determines whether or not they refer to the same underlying symbol. Such a test is definable by simply changing the type of the stored value from $\tau$ to $\tau_{opt}$, maintaining the invariant that the underlying value is always of the form $\text{just}(\cdot)$, except during an equality test. To check equality of references, simply save the contents of one reference, then set it to `null`, and check whether the contents of the other is also `null`. If so, they are the same reference, otherwise they are not; be careful to restore the changed value before returning! Assignable references generalize naturally to *mutable objects*. Think of a mutable object as a collection of assignables (its *instance data*) declared for use within a tuple of functions (its *methods*) acting on those assignables. A reference amounts to a mutable object with a single assignable, say `contents`, that is accessible only to the `getref`, `setref`, and `eq` methods just described. More generally, a mutable object can have any number of private assignables governed by any number of methods that act on them. Thus, assignable references are a special case of mutable objects.
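The equality test and the generalization to mutable objects can both be sketched in Python (a one-slot list models a cell; `(v,)` models $\text{just}(v)$ and `None` models `null`):

```python
def cell(v):
    """A reference cell whose contents is optional: the invariant is
    that it holds (v,) -- i.e. just(v) -- except during a test."""
    return [(v,)]

def same_ref(r1, r2):
    saved = r1[0]
    r1[0] = None                 # temporarily break the invariant
    eq = r2[0] is None           # only the same cell now reads null
    r1[0] = saved                # restore before returning!
    return eq

# A mutable object generalizes this: private instance data governed
# by a tuple of methods acting on it. E.g., a counter:
def new_counter():
    n = [0]                      # the private assignable
    def inc(): n[0] += 1
    def read(): return n[0]
    return {"inc": inc, "read": read}
```

The counter's datum `n` is accessible only through its two methods, just as a reference's `contents` is accessible only through `getref`, `setref`, and `eq`.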
One may say informally that “mutable objects are references” or speak of “references to mutable objects”, but in fact the concept of a mutable object is self-standing and generalizes that of an assignable reference. There is no need to speak of “object references” or “references to objects”, because objects are themselves (generalized) references.\footnote{For this one needs free assignables, of course, for otherwise the tuple of methods cannot be returned from the scope of the declaration.}

23 Types and Processes

The treatment of concurrency is divided into two parts: Chapter 39 introduces abstract process calculus, and Chapter 40 incorporates concurrency into Modernized Algol. The central theme is that concurrency is entirely a matter of indeterminacy. Although this is a commonly accepted idea, the usual treatments of concurrency involve quite a lot more than just that. Why is that? In Chapter 39 I develop Milner-style process calculus, starting with a simple synchronization calculus based on signals. Two processes that, respectively and simultaneously, assert and query a signal can synchronize on their complementary actions to take a silent action, a true computation step. The formulation follows Milner in using labeled transition systems to express both the steps of computation as silent actions, and the willingness to assert or query a signal as polarized transitions labeled by that action. Contrary to Milner, instead of having “names” and “co-names”, I have two polarities of actions indexed by signal names. Another difference from Milner is that I draw a distinction between processes and events. An event is a sum of atomic events, which are continuations conditioned on an action (assert or query a signal). A process is a parallel composition of atomic processes, the most important of which is synchronization on an event.
Making the separation avoids conundrums in Milner’s formalism, such as the intermixing of choice and parallel composition, that need not be considered in a programming language context. In effect all choices are causal in that they are mediated by the possibility of taking an action. (I suppose it would be possible to admit spontaneous events, but I have not found a need for it within the context of the book.) One may then discuss the declaration of new signals, but no interesting issues arise until message passing is introduced. So the next order of business is to generalize signals to channels, which carry data, and which break the symmetry between assertion and query of signals. Now channels have a sending and a receiving side, with the sender providing the data and the receiver obtaining it. Channels may be synchronous or asynchronous, according to whether the sender is blocked until a receiver receives the message. The asynchronous form is more general, at least in the presence of channel references, for one can implement a receipt notification protocol corresponding to what is provided by synchronous send. In the π-calculus the only messages that can be sent along a channel are other channels (or finite sequences of them in the so-called polyadic case). Instead, I choose to examine the question of the types of messages from the outset, arriving at the π-calculus convention as a special case involving recursive and reference types. When channels are declared, the type of their data values must be specified,\footnote{Milner called the silent actions \(\tau\)-transitions; I call them \(\epsilon\)-transitions, as they are called in automata theory.} and remains fixed for the lifetime of the channel. There is no loss of generality in insisting on homogeneous channels, the heterogeneous case being handled using sums, which consolidate messages of disparate types into a single type with multiple classes of values.
To mimic the $\pi$-calculus, I consider a type of \textit{channel references}, which are analogous to \textit{assignable references} as they arise in MA. The process calculus notion of \textit{scope extrusion} captures the distinction between scoped and (scope-)free channels found in MA. The final step in the development of process calculus is the realization that \textit{channels are nothing but dynamic classes} in the sense of Chapter 33. There is only one shared communication medium (the “ether” if you will) on which one communicates dynamically classified values. The classifiers identify the “channel” on which the message is sent, and its associated data constitutes the data passed on that “channel.” By controlling which processes have access to the constructor and destructor for a class one can enforce the integrity and confidentiality, respectively, of a message. \textit{None of this has anything to do with concurrency}. Indeed, the entire matter is but one application of dynamic classification. With that in hand, all that is left of process calculus is parallel composition, which is indeterminacy of evaluation, and the transmission of messages on the medium. Chapter 40 is devoted to the integration of these ideas into a programming language, called Concurrent Algol (CA), an imperative language with a modal distinction between expressions and commands. Because assignables are definable using processes, the commands are entirely concerned with dynamic classification and synchronization. Two formulations of synchronization, \textit{non-selective} and \textit{selective}, are considered. Non-selective communication resembles the behavior of an ethernet transceiver: packets are drawn from the ether, and are passed to the receiving process to dispatch on their channels and payloads. 
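The non-selective discipline over a classified medium can be sketched as follows (a Python sketch; the packet classes are hypothetical, and classes stand in for dynamic classifiers):

```python
from collections import deque

# One shared medium (the "ether") carrying dynamically classified
# values; each "channel" is a class of message.
class Packet: pass

class Ping(Packet):
    def __init__(self, n): self.payload = n

class Other(Packet):
    def __init__(self, s): self.payload = s

ether = deque()

def send(m):
    ether.append(m)

def receive_ping():
    """Non-selective receipt: draw packets, decode the classes one
    recognizes, and re-emit the rest, transceiver-style."""
    while True:
        m = ether.popleft()
        if isinstance(m, Ping):  # a class whose destructor we hold
            return m.payload
        send(m)                  # not ours: re-emit, uninterpreted
```

A receiver lacking a class's destructor can only pass its packets along unscathed, which is the confidentiality guarantee in miniature.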
A typical, but by no means forced, pattern of communication is to handle packets that one recognizes, and to re-emit those that one does not, exactly in the manner of the aforementioned transceiver. Selective communication integrates dynamic class matching so that the only packets received are those with a known channel, and correspondingly known type of payload. Busy-waiting is thereby avoided. The formulation and proof of type safety for the concurrent language CA is, to my knowledge, novel. The type system is structural; it makes no attempt to ensure the absence of deadlock, nor should it.\footnote{This would be a good use for type refinements, which are discussed in Chapter 25. Deadlock is a matter of program behavior, not program structure.} Consequently, one must formulate the progress property carefully to allow for the possibility of a process that is willing to communicate, but is unable to do so for lack of a matching partner process. This is very neatly stated using labeled transitions: a well-typed process that is not the terminal process is always capable of undertaking an action. Spelled out, a well-typed, non-terminal process may either take a step of computation (including a synchronization), or be capable of undertaking an action (either a send or a receive). Theorem 40.3, which is paradigmatic for any concurrent programming language, concisely summarizes the desired safety property using labeled transitions.

24 Type Classes

The concept of a type class is often associated with the Haskell language, but the concept was introduced by David MacQueen in his design of modules for ML (see Milner et al. (1997) for the history). As is explained in Chapter 44, a type class is a descriptive signature, one that merely specifies the availability of certain types and operations in a structure (module), whereas an abstraction is prescriptive in that it specifies exactly what is to be visible in a structure.
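The descriptive reading, and the role of functors as instance builders, can be sketched in Python (all names hypothetical; a class instance is simply a record of the required operations):

```python
class Ord:
    """A descriptive signature: an instance need only provide leq."""
    def __init__(self, leq):
        self.leq = leq           # leq : t -> t -> bool

# An instance for a base type.
int_ord = Ord(lambda x, y: x <= y)

def pair_ord(A, B):
    """An "instance declaration" as a functor: it builds an instance
    for pairs from instances for the components (lexicographically)."""
    def leq(p, q):
        (a1, b1), (a2, b2) = p, q
        if A.leq(a1, a2) and A.leq(a2, a1):   # firsts equivalent
            return B.leq(b1, b2)
        return A.leq(a1, a2)
    return Ord(leq)
```

Composing such functors is what an automatic instantiation mechanism does behind the scenes; making the application explicit, as here, avoids the supposition that each type has at most one ordering.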
The term “type class” in the Haskell language refers not only to the descriptive form of signature described in Chapter 44, but also to a means of automatically deriving an instance of it based on a global context of declarations. Instance declarations are simply functor declarations that build instances of a target class from instances of the classes on which it depends. The automatic instantiation mechanism composes such functors to obtain instances of a type class based on circumstantial evidence. The techniques used rely on such suppositions as there being at most one linear ordering for a given type, which is patently false and a common source of trouble. Dreyer et al. (2007) separates the declaration of an instance from its use in a particular situation, which avoids unnecessary suppositions and supports the smooth integration of the module mechanisms described in the text.

References
AGENT-BASED APPROACH FOR MANUFACTURING INTEGRATION: THE CIIMPLEX EXPERIENCE

Y. PENG, T. FININ, Y. LABROU, and R. S. COST
Department of Computer Science and Electrical Engineering, University of Maryland Baltimore County, Baltimore, Maryland, USA

B. CHU, J. LONG, and W. J. TOLONE
Department of Computer Science, University of North Carolina, Charlotte, North Carolina, USA

A. BOUGHANNAM
IBM Corporation, Boca Raton, Florida, USA

To link to this article: https://doi.org/10.1080/088395199117487
Published online: 26 Nov 2010.

The production management system used by most manufacturers today consists of disconnected planning and execution processes and lacks the support for interoperability and collaboration needed for enterprise-wide integration. This situation often prevents the manufacturer from fully exploring market opportunities in a timely fashion. To address this problem, we are exploring an agent-based approach to intelligent enterprise integration. In this approach, a set of agents with specialized expertise can be quickly assembled to help with the gathering of relevant information and knowledge, to cooperate with each other, with other parts of the production management system, and with humans to arrive at timely decisions in dealing with various enterprise scenarios. The proposed multiagent system, including its architecture and implementation, is presented and demonstrated through an example integration scenario involving real planning and execution software systems.
The production management system used by most of today’s manufacturers consists of a set of separate software applications, each covering a different part of the planning, scheduling, and execution processes (Vollmann et al., 1992). For example, Capacity Analysis (CA) software determines a master production schedule that sets long-term production targets. Enterprise Resource Planning (ERP) software generates material and resource plans. Scheduling software determines the sequence in which shop floor resources (people, machines, material, etc.) are used in producing different products. The Manufacturing Execution System (MES) tracks the real-time status of work in progress, enforces routing integrity, and reports labor/material claims. Most of these planning and execution (P/E) applications are legacy systems developed over many years. Although these software systems perform well for their designated tasks, they are not equipped to handle complex business scenarios (Bermudez, 1996; Jennings et al., 1996; Tennenbaum et al., 1993). Typically, such scenarios involve coordination of several P/E applications to respond to external environmental changes (price fluctuations, changes of requests from customers and suppliers, etc.) and internal execution dynamics within an enterprise (resource changes, mismatches between plan and execution, etc.). Timely solutions to these scenarios are crucial to agile manufacturing, especially in the era of globalization, automation, and telecommunication (Dourish & Bellotti, 1992). Unfortunately, these scenarios are primarily handled by human managers, and the responses are often slow and less than optimal. The Consortium for Intelligent Integrated Manufacturing Planning-Execution (CIIMPLEX), consisting of several private companies and universities, was formed in 1995 with matching funds from the National Institute of Standards and Technology.
The primary goal of the consortium is to develop technologies for intelligent enterprise-wide integration of planning and execution for manufacturing (Chu et al., 1996). Our vision of a successful integrated P/E system for manufacturing has the following features.

1. Interoperability: heterogeneous P/E applications from different vendors are able to operate together as integrated parts of the system.
2. Integration: software tools and infrastructures to support integration tasks not covered by existing P/E applications are provided. In particular, the integrated solution should support runtime dynamic coordination in dealing with unexpected events.
3. Distributed: resources such as software and data are allowed to be physically or logically distributed.
4. Openness: the user shall be able to select and change different applications and tools easily and with little additional integration cost.

One approach to an integrated P/E system might be to rewrite all application software into a monolithic integrated P/E system capable of handling all foreseeable scenarios. This approach is judged to be unfeasible because of the high development and maintenance cost, and the closedness and inflexibility of such monolithic systems (Hammer, 1996). Conventional object-oriented distributed systems also seem inadequate because they work at the level of objects and thus lack the support for abstraction at higher levels (Jennings & Wooldridge, 1998). Instead, CIIMPLEX has adopted as one of its key technologies the approach of intelligent software agents and is developing a multiagent system (MAS) for enterprise integration. In sharp contrast to traditional software programs, software agents are programs that help people solve problems by collaborating with other software agents and other resources in the network (Bradshaw et al., 1998; Compositional Research Group, 1996; Jennings & Wooldridge, 1998; Nwana, 1996; Parunak et al., 1997).
For instance, individual agents can be designed to perform data collection and analysis of plans and schedules and to keep constant vigil against mismatches among these plans and schedules at different levels of abstraction and time horizons. Other agents can be designed to resolve the conflicts, either by themselves or in coordination with human managers and analysts. Personal assistant agents can be designed to assist human managers and analysts. Still other agents can be created to provide legacy systems with better communication and coordination capabilities so they can more effectively cooperate with each other and with other agents. Moreover, a MAS, as a society of autonomous agents, is inherently open and distributed, and interagent communication capability provides the essential means for agent collaboration. The environment of the CIIMPLEX consortium is different from that of academically oriented research laboratories. Most of the companies in the consortium are P/E application system vendors and users. This situation gives us the opportunity to work with real-world P/E systems (rather than imagined toy problems) and appreciate the complexity of realistic business scenarios. On the other hand, the agent-based approach, as a relatively immature technology that has not yet reached industrial strength, understandably receives only guarded enthusiasm from some members of the consortium. They are more concerned with integration of the P/E systems, using more mature technologies, to better handle normal or expected business scenarios. Our immediate priority is thus not to design and develop a complete agent system that integrates all aspects of manufacturing planning and execution but to develop one that is limited in scope but reliable and scalable and clearly adds commercial value to the end user. In addition, the initial prototype agent systems must have minimal interference with the normal work of existing P/E applications.
Based on these considerations, we have decided to concentrate on those P/E scenarios that represent exceptions to the normal or expected business processes and whose resolution involves several P/E applications. For example, consider the scenario involving a delay of the shipment date on a purchased part from a supplier. This event may cause one of the following possible actions: the manufacturing plan is still feasible (no action is required); order substitute parts; reschedule; or reallocate available material. To detect this exception and determine which of these actions to take, different applications (e.g., MES, ERP, CA, and Scheduler) and possibly human decision makers must be involved. Examples of similar scenarios include a favored customer's request to move ahead the delivery date for one of its orders, a machine breakdown being reported by MES, and a crucial operation having its processing rate decreased from the normal rate, to mention just a few. Figure 1 illustrates at a conceptual level how an exception (e.g., a shipment of a purchased part is delayed) should be handled by an integrated system.

FIGURE 1. Manufacturing integration example: handling exceptions.

The decision module decides, with instruction from a human or an analysis module, what constitutes an exception. The monitoring module determines what data are to be monitored to detect such exceptions and conducts the actual monitoring of the data stream. When notified by the monitoring module of the occurrence of an exception, the decision module makes appropriate decisions in consultation with other P/E applications and the analysis module. The decision (e.g., a request to reschedule) will then be carried out by the designated P/E application(s). Note that what constitutes an exception and how to monitor it is a dynamic decision that cannot be specified prior to the plan execution.
For example, a factory may not normally consider a delay in the shipment of an ordered part exceptional unless the delay is greater than five days. However, if a part is crucial for an order of a preferred customer, or the inventory of a part is below a threshold, then a delay of greater than three days may become an exception. To make the situation more complicated, an action taken to address one exception may trigger another exception (e.g., a reschedule to handle the delay of shipment of one part may delay the delivery date of an important order, for which the respective sales representative needs to be notified). To provide an integrated solution to the above outlined scenarios, simple as they are, is by no means a trivial undertaking. First, a set of agents with specialized expertise needs to be developed to provide functions, such as those performed by the analysis module, decision module, and monitoring module in Figure 1, that are not covered by any of the existing P/E applications. As integration tasks, these functions fall in the “white space” between these P/E applications. Second, a reliable and flexible interagent communication infrastructure needs to be developed to allow agents to effectively share information, knowledge, and services. Third, some means to support interaction with the P/E applications, which will not be agentified at this stage, needs to be provided. And finally, a mechanism for the runtime collaboration of all these pieces also needs to be developed. In this article, we describe our experience of developing an agent-based system for the CIIMPLEX project. The rest of this article is organized as follows. In the next section, we briefly describe the software agent technologies that are relevant to the task of manufacturing integration.
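Before moving on, the dynamic threshold rule in the shipment-delay example above can be made concrete with a short sketch. The class, method, and parameter names are illustrative only, drawn from the example in the text, not from any CIIMPLEX code:

```java
// Hypothetical sketch of the dynamic exception rule described above: a
// shipment delay is exceptional past five days normally, but past three
// days when the part is crucial for a preferred customer's order or its
// inventory is below a threshold.
public class ShipmentDelayRule {

    static boolean isException(int delayDays, boolean crucialPart, boolean lowInventory) {
        int thresholdDays = (crucialPart || lowInventory) ? 3 : 5;
        return delayDays > thresholdDays;
    }

    public static void main(String[] args) {
        System.out.println(isException(4, false, false)); // prints false: within the normal 5-day limit
        System.out.println(isException(4, true, false));  // prints true: crucial part tightens the limit to 3 days
    }
}
```

The point of the sketch is that the threshold itself is computed from the current business context, which is why, as the text notes, the exception criterion cannot be fixed before plan execution.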
The two subsequent sections form the core of this article, where we first present the proposed multiagent system architecture and then demonstrate its operation through an example scenario that involves real P/E applications. The final sections evaluate our approach, compare it with other related work, and conclude with discussions of ongoing work to expand the agent system, along with directions for future research.

MULTIAGENT SYSTEM AND AGENT COLLABORATION

There is currently no consensus on the definition of software agents or of agency, and some people go so far as to suggest that any piece of software or object that can perform a specific given task is an agent. However, the prevailing opinion is that an agent may exhibit, to varying extents, three important general characteristics: autonomy, adaptation, and cooperation (Genesereth & Ketchpel, 1994; Nwana, 1996). By “autonomy,” we mean that agents have their own agenda of goals and exhibit goal-directed behavior. They are not simply reactive but can be proactive and take initiatives as they deem appropriate. Adaptation implies that agents are capable of adapting to the environment, which includes other agents and human users, and can learn from experience in order to improve themselves in a changing environment. Cooperation and coordination among agents is probably the most important feature of MASs (Nwana, 1996). Unlike standalone agents, agents in many MASs share information, knowledge, and tasks among themselves and collaborate with each other to achieve common goals. The intelligent properties of a MAS are reflected not only by the expertise of individual agents but also by the emergent behavior of the entire collection. Cooperation and coordination of agents in a MAS require agents to be able to understand each other and to communicate with each other effectively.
The infrastructure that supports agent cooperation in a MAS is thus seen to include at least the following key components:

- a common agent communication language (ACL) and protocol,
- a common format for the content of communication, and
- a shared ontology.

In CIIMPLEX, we take the Knowledge-Sharing-Effort (KSE) approach toward achieving the infrastructure needed for agent cooperation (Patil et al., 1992). Three component technologies developed by the KSE are adopted: the Knowledge Query and Manipulation Language (KQML) as the communication language and protocol, the Knowledge Interchange Format (KIF) as the format of the communication content, and the concept of a shared ontology. In what follows, we briefly describe these three components and justify their selection in the context of a manufacturing integration environment.

**KQML**

KQML is a message-based ACL (Finin et al., 1993, 1998). It considers that each message contains not only the content but also the attitude, or “intention,” the sender has for that content. For instance, consider that agent $A$ sends the following statement as the content of a message to agent $B$:

> the processing rate of operation 1 at machine X is greater than 5.

Agent $A$, in different circumstances, may have different intentions about this statement. Agent $A$ may simply *tell* $B$ that this statement is true in its own database, or *ask if* this statement is true in $B$’s database, or attach some other intention to this statement. KQML provides a formal specification for representing the intentions of messages through a set of predefined performatives used in the messages. A subset of KQML performatives that is particularly relevant to our agent system includes ask-one, tell, advertise, subscribe, recommend-one, error, sorry, etc. A KQML message is thus divided into three layers: the content layer, the communication layer, and the message layer.
The content layer bears the actual content of the message in a language chosen by the sending agent. The communication layer encodes a set of features of the message that describe the lower-level communication parameters, such as the identity of the sender and recipient, and a unique identifier tag associated with the communication. The message layer encodes the message itself, including its intention (by a chosen performative), the content language used, and the ontology. The structured syntax of KQML messages is based on a balanced-parentheses list whose first element is the performative and the rest are the parameters in the form of keyword/value pairs. The following is an example of an actual KQML message sent by agent “joe” to agent “stock-server,” inquiring about the price of a share of IBM stock, where ?x is an uninstantiated variable. The reader is referred to Finin et al. (1993) for a detailed description of the KQML specifications.

(ask-one
  :language KIF
  :content (Price IBM ?x)
  :sender joe
  :receiver stock-server
  :reply-with ⟨a unique string as the tag of this message⟩)

**KIF**

Although KQML allows agents to choose their own content language, it is beneficial for all agents within one MAS to exchange most if not all of their messages in a single neutral format. One obvious advantage of adopting a common content format is efficiency: instead of many-to-many format conversion, each agent only needs to convert the content of a message between its own internal representation and the common format. KIF, because of its rich expressive power and simple syntax, is probably the most widely used neutral message content format for agent communication. KIF is a prefix version of First Order Predicate Calculus (FOPC) with extensions to support nonmonotonic reasoning and definitions (Genesereth et al., 1992). The language description includes specifications for both its syntax and its semantics.
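The balanced-parentheses, keyword/value structure of KQML messages such as the ask-one example above can be composed mechanically. The following is a minimal sketch; the class and method names are illustrative and are not part of KQML or of JACKAL's API:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Minimal sketch of composing a KQML message: a balanced-parentheses
// list whose first element is the performative, followed by keyword/value
// pairs. Insertion order is preserved so the message reads naturally.
public class KqmlMessage {
    private final String performative;
    private final Map<String, String> params = new LinkedHashMap<>();

    public KqmlMessage(String performative) {
        this.performative = performative;
    }

    // Add a :keyword value pair; returns this so calls can be chained.
    public KqmlMessage with(String keyword, String value) {
        params.put(keyword, value);
        return this;
    }

    @Override
    public String toString() {
        StringBuilder sb = new StringBuilder("(").append(performative);
        params.forEach((k, v) -> sb.append(" :").append(k).append(" ").append(v));
        return sb.append(")").toString();
    }

    public static void main(String[] args) {
        String msg = new KqmlMessage("ask-one")
                .with("language", "KIF")
                .with("content", "(Price IBM ?x)")
                .with("sender", "joe")
                .with("receiver", "stock-server")
                .toString();
        // prints (ask-one :language KIF :content (Price IBM ?x) :sender joe :receiver stock-server)
        System.out.println(msg);
    }
}
```

A real KQML library would also parse incoming messages and validate performatives; the sketch only shows the serialization side.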
Besides FOPC expressions of facts and knowledge, KIF also supports extralogical expressions such as those for the encoding of knowledge about knowledge and of procedures. KIF is currently the subject of an American National Standards Institute (ANSI) standardization study.

**Shared Ontology**

Sharing the content of formally represented knowledge requires more than a formalism (such as KIF) and a communication language (such as KQML). Individual agents, as autonomous entities specialized for particular aspects of problem solving in a MAS, may have different models of the world in which objects, classes, and properties of objects may be conceptualized differently. For example, the same object may be named differently (“machine-id” and “machine-name” for machine identification in the databases of two agents). The same term may have different definitions in different agents’ internal representations (“salary-rate,” referring to an hourly rate in one agent and an annual rate in another). Also, different taxonomies may be conceptualized from different perspectives by individual agents. Therefore, to ensure correct mutual understanding of the exchanged messages, agents must also agree on the model of the world, or at least that part of the world about which they are exchanging information with each other. In the terminology of the agent community, agents must share a common ontology (Patil et al., 1992). An ontology for a domain is a conceptualization of the world in terms of the objects, qualities, distinctions, relationships, etc., in that domain. A shared or common ontology refers to an explicit specification of the ontological commitments of a group of agents. Such a specification should be an objective (i.e., interpretable outside of the agents) description of the concepts and relationships that the agents use to interact with each other, with other programs such as legacy business applications, and with humans.
A shared ontology can be in the form of a document or a set of machine-interpretable specifications.

**Agent Collaboration**

With a common communication language, a common content language, and a shared ontology, agents can communicate with each other in the same manner, in the same syntax, and with the same understanding of the world. In addition to these three essential ingredients, some common service agents are often used in a MAS to make agent collaboration more efficient and effective. One type of service agent is the Agent Name Server (ANS). The ANS, similar to the White Pages phone book, serves as the central repository of the contact addresses for all involved agents; i.e., it maintains an address table of all registered agents, accessible through the agents’ symbolic names. Newly created agents must register with the ANS their names, contact addresses, and possibly other information by sending to the ANS a message with the performative register. (As a presumption, every agent in the system must know how to contact the ANS.) The ANS maps the symbolic name of a registered agent to its contact address when requested by other agents. Another type of service agent is the Facilitator Agent (FA), which provides additional services to other agents. A simple FA is a Broker Agent (BA), which provides a “Yellow Pages” type of service. It registers services offered and requested by individual agents and dynamically connects available services to requests whenever possible. Agents register their available services by sending the BA messages with the performative advertise and request services by sending the BA messages with brokering performatives such as recommend-one. In both cases, the description of the specific service is in the content of the message.
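The essence of the two directory services just described can be sketched with simple in-memory tables. This is an illustration only: the real ANS and BA are agents that converse in KQML, and all names and addresses below are made up:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// In-memory sketch of the two directory services described above: an
// Agent Name Server mapping symbolic names to contact addresses ("White
// Pages") and a Broker Agent matching service requests to advertised
// providers ("Yellow Pages").
public class DirectoryServices {

    // ANS: register and resolve symbolic agent names.
    static final Map<String, String> nameToAddress = new HashMap<>();

    static void register(String agentName, String address) {
        nameToAddress.put(agentName, address);
    }

    static String resolve(String agentName) {
        return nameToAddress.get(agentName); // null plays the role of a "sorry" reply
    }

    // BA: advertise services and recommend one provider per request.
    static final Map<String, List<String>> serviceToProviders = new HashMap<>();

    static void advertise(String agentName, String service) {
        serviceToProviders.computeIfAbsent(service, s -> new ArrayList<>()).add(agentName);
    }

    static String recommendOne(String service) {
        List<String> providers = serviceToProviders.get(service);
        return (providers == null || providers.isEmpty()) ? null : providers.get(0);
    }

    public static void main(String[] args) {
        register("stock-server", "tcp://host-a:6000");
        advertise("stock-server", "(Price ?company ?x)");
        System.out.println(resolve("stock-server"));            // prints tcp://host-a:6000
        System.out.println(recommendOne("(Price ?company ?x)")); // prints stock-server
    }
}
```

Note that a requester still contacts the recommended agent itself; the broker only performs the match, exactly as in the recommend-one exchange described in the text.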
In reply to a recommend-one message, the BA will send the symbolic name of an agent that has advertised at the BA as being able to provide the requested service, or sorry if the request cannot be met by current advertisers.

CIIMPLEX AGENT SYSTEM ARCHITECTURE

In this section, we describe the MAS architecture that supports interagent cooperation in the CIIMPLEX project, emphasizing the agent communication infrastructure. Figure 2 gives the architecture of CIIMPLEX enterprise integration with the MAS as an integrated part. At the current stage of the project, the entire P/E integration architecture is composed of two parts, the P/E application world and the agent world, and is supported by two separate communication infrastructures. Although these legacy P/E applications have not been agentified, they have been wrapped with APIs, which provide them with limited communication capability (Chu et al., 1998). Different transport mechanisms (e.g., MQSeries by IBM and VisualFlow by Envisionit) are under experimentation as communication infrastructures for the wrapped P/E applications. These mechanisms are persistent but only support static, predetermined communication patterns. In the agent world, besides the service agents ANS and BA, several other types of agents are useful for enterprise integration. For example, data-mining/parameter-estimation agents are needed to collect, aggregate, interpolate, and extrapolate the raw transaction data of low-level (shop floor) activities, and then to make this aggregated information available for higher-level analyses by other agents. Monitoring agents monitor, detect, and notify about abnormal events that need to be attended to. The CIIMPLEX Analysis Agents (CAA) evaluate disturbances to the currently planned schedule and recommend appropriate actions to address each disturbance.
The Scenario Coordination Agents (SCA) assist human decision making for specific business scenarios by providing the relevant context, including filtered information and actions, as well as workflow charts. All these agents use KQML as the agent communication language and use a subset of KIF that supports Horn-clause deductive inference as the content language. TCP/IP is chosen as the low-level transport mechanism for agent-to-agent communication. The shared ontology is an agreement document established by the P/E application vendors and users and other partners in the consortium. The agreement adopts the format of the Business Object Document (BOD) defined by the Open Applications Group (OAG). BOD is also used as the message format for communication among P/E applications such as MES and ERP, and between agents and applications. A special service agent, called the Gateway Agent (GA), is created to provide an interface between the agent world and the application world. The GA's functions include making connections between the transport mechanisms (e.g., between TCP/IP and MQSeries) and converting messages between the two different formats (KQML/KIF and BOD). The agent system architecture outlined above is supported by the agent communication infrastructure called JACKAL, developed by the consortium (Cost et al., 1998). The acronym indicates that JACKAL is written in Java to support agent communication using the KQML agent communication language. The decision to select Java as the implementation language was based mainly on its cross-platform portability, its networking facilities, and its support for multithreaded programming. The next two subsections provide a detailed description of JACKAL.

**Conversation Policies in JACKAL**

KQML itself only defines the syntax of the language. However, a good, workable semantics is imperative for conducting coherent conversations among agents.
To support both the syntactic and semantic aspects of the language, JACKAL takes a semantic interpretation of KQML from Labrou (1996) and Labrou and Finin (1997) and realizes part of it as a set of conversation policies. The conversation policies are procedures that, based on the performatives involved, specify how a conversation (consisting of a sequence of messages) is to start, to proceed, and to terminate. For example, a conversation started with an ask-one message will terminate as soon as the sender receives a proper reply. (Possible replies include an error message, indicating that the format of the message is incorrect; a sorry message, indicating that the receiver cannot provide an answer to the question; a tell message, whose content contains an answer to the given question; or an untell message, which retracts a previous tell message.) A conversation started by a message with the performative subscribe follows a different policy. When agent A starts such a conversation with agent B, the conversation remains open, with A continuing to listen for new messages from B that satisfy the subscription criterion. The conversation policies chosen for JACKAL can be described using a Deterministic Finite Automata (DFA) model. In this model, each conversation starts with a state called START and ends with a state called STOP. A conversation moves from one state to another according to the given state transition diagram. Figure 3 shows examples of the DFAs for ask-one and subscribe conversations.

### Overview of JACKAL’s Design

JACKAL was designed to provide comprehensive functionality while presenting a simple interface to the user. Thus, although JACKAL consists of roughly seventy distinct classes, all user interactions are channeled through one class, hiding most details of the implementation. Figure 4 presents the principal JACKAL components and the basic message path through the system.
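As an aside, the ask-one conversation policy described above can be encoded as a small DFA. The state names and the set of proper replies follow the text; the encoding itself is our illustrative sketch, not JACKAL's implementation:

```java
import java.util.Set;

// Sketch of the ask-one conversation policy as a DFA:
//   START --ask-one--> WAITING --tell|untell|sorry|error--> STOP
// Any other performative is rejected as a protocol violation.
public class AskOnePolicy {

    enum State { START, WAITING, STOP }

    private static final Set<String> REPLIES = Set.of("tell", "untell", "sorry", "error");

    private State state = State.START;

    // Advance the DFA on the performative of the next message.
    public State advance(String performative) {
        switch (state) {
            case START:
                if (!performative.equals("ask-one"))
                    throw new IllegalStateException("conversation must open with ask-one");
                state = State.WAITING;
                break;
            case WAITING:
                if (!REPLIES.contains(performative))
                    throw new IllegalStateException("not a valid reply: " + performative);
                state = State.STOP;
                break;
            default:
                throw new IllegalStateException("conversation already terminated");
        }
        return state;
    }

    public static void main(String[] args) {
        AskOnePolicy p = new AskOnePolicy();
        System.out.println(p.advance("ask-one")); // prints WAITING
        System.out.println(p.advance("tell"));    // prints STOP
    }
}
```

A subscribe policy would differ only in its transition table: the conversation stays in its listening state across matching notifications instead of stopping at the first reply.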
We will first discuss each of the components and then, to illustrate their interaction, trace the path of a message through the system.

FIGURE 4. Overview of JACKAL.

### JACKAL Components

The Intercom class is the bridge between the agent application and JACKAL. It controls startup and shutdown of JACKAL, provides the application with access to internal methods, houses common data structures, and plays a supervisory role for the communications infrastructure. Agents create and use an instance of Intercom rather than extending an agent shell. This gives us great flexibility in the design and implementation of the other parts of the agent. JACKAL runs a Transport Module for each communication protocol it uses. The current JACKAL implementation includes a module for TCP/IP, and users can create additional modules for other protocols. The Transport Module is responsible for message transmission and receipt. Messages received by the Switchboard must be directed to the appropriate places in the Conversation Space. This is the role of the Message Handler. Messages are associated with current (logical) threads based on their ID (the value of the “reply-with” or “in-reply-to” field). This directs their assignment to ongoing conversations when possible. If no such assignment can be made, a new conversation appropriate to the message is started. Conversation protocols, described above, are “run” as independent threads for all ongoing conversations. This allows for easy context management while providing constraints on language use and a framework for low-level conversation management. The Distributor is a Linda-like (Carriero & Gelernter, 1989) shared memory space (conceptually like a blackboard), which serves to match messages with requests for messages. This is the sole interface between the agent and the message traffic. Its concise API allows for comprehensive specification of message requests.
Requesters are returned message queues and receive all return traffic through these queues. Requests for messages are based on some combination of message, conversation or thread ID, and syntactic form. They permit actions such as removing an acquired message from the blackboard or marking it as read-only. A priority setting determines the order of matching. Requests can be set to persist indefinitely or to terminate after a certain number of matches. A service here is any thread; this could be a JACKAL service or a thread within the agent itself. The only thing that distinguishes among threads is the request priority they use. System (JACKAL) threads choose from a set of higher priorities than agent threads, but each chooses a level within its own pool. JACKAL reserves the highest and lowest priorities for services directing messages out of the agent and for those cleaning the blackboard, respectively. The Switchboard acts as an interface between the Transport Modules and the rest of JACKAL. It must facilitate the intake of new messages, which it gathers from the Transport Modules, and carry out send requests from the application. The latter is a fairly complicated procedure, since the Switchboard has multiple protocols at its disposal. The Switchboard must formulate a plan for the delivery of a message, with the aid of the Address Cache, and pursue it for an unspecified period of time, without creating a bottleneck in message traffic. The problem of agent naming arises in any multiagent system. A naming scheme called KNS is developed in JACKAL. KNS is a hierarchical scheme similar in spirit to DNS. A fully qualified agent name (FQAN) is a sequence of period-separated agent names (e.g., phil.cs.edu). The sequence denotes registrations between agents. Agents can register together to form teams and can maintain multiple identities to represent roles in the multiagent system. Multiple registrations for an agent become a network of aliases for that agent.
If one name becomes inaccessible, another can be identified to fill the gap. Moreover, since agents can maintain multiple contact entries for each name, agents can change locations and leave forwarding arrangements for messages while they migrate. In this way, dynamic group formation is supported. KNS can easily be extended to support collaborating, mobile, KQML-speaking agents using a variety of transport protocols. In such a case, the final segment in a FQAN is always a URL (e.g., phil.cs.http://www.umbc.edu), providing unique, static location information for the base of an agent registration chain. The Address Cache holds agent addresses in order to defray look-up costs. It is a multilayered cache supporting various levels of locking, allowing it to provide high availability. Unsuccessful address queries trigger the underlying KNS look-up mechanisms, while blocking access to only one individual listing. JACKAL supports KNS transparently through an intelligent address cache.

**Message Path**

Having described the various components of JACKAL, we will trace the path of a received message and the corresponding reply, using the numbered arcs in Figure 4 for reference. The message is first received by a connection thread within a Transport Module (labeled 1 in Figure 4), is processed in the Message Handler, and is transferred directly to the input queue of either a waiting or a new conversation (2). A unique thread manages each conversation. The target conversation awakens, takes the message from its input queue (3), and tries to advance its DFA accordingly. If accepted, the message is entered into the Distributor (4), an internal blackboard for message distribution. The Distributor (5), in turn, tries to match the message with any pending requests in order of a specified priority. Ideally, a match is found, and the message is placed in the appropriate queue or queues (6).
This is the point at which the agent gains access to the message flow, through services attending to the blackboard. Once the requesting service thread picks the message out of its queue (7), it presumably performs some action, and may send (8) a reply or a new message; we assume it does. JACKAL arranges for reply requests to be placed into the Distributor before messages are sent, when appropriate. The message is directed into the Conversation Space and traces the same path as the previous incoming message (9,10) to the Distributor. The message is captured by the Switchboard’s outbound message request (11). The Switchboard removes the new message from its queue and assigns it to an idle send thread (12); this results in some overhead but allows sends to proceed concurrently, avoiding bottlenecks due to variation in delivery times. The send thread uses the appropriate Transport Module to transmit the message. Figure 5 depicts the typical design of an agent using JACKAL for agent communication. A main thread serves primarily to start and direct JACKAL and a collection of service threads. Each service thread interacts with the Distributor to send and receive messages, and potentially with the main thread and other service threads as well. Each service thread takes some basic role within the agent, such as processing all broker requests or logging all message traffic to an archival agent.

APPLICATION EXAMPLE

In this section, we demonstrate how the CIIMPLEX agent system supports intelligent enterprise integration through a simple business scenario involving some real manufacturing management application software systems.

Scenario

The scenario selected, called “process rate change” and depicted in Figure 6, occurs when the process time of a given operation on a given machine is reduced significantly from its normal value. When this type of event occurs, different actions need to be taken based on the type of operation and the severity of the rate reduction.
Some of the actions may be taken automatically according to the given business rules, and others may involve human decisions. Some actions may be as simple as recording the event in the logging file, and some others may be as complicated and expensive as requesting a rescheduling based on the changed operation rate. Two real P/E application programs, namely, FactoryOp (an MES by IBM) and MOOPI (a Finite Scheduler by Berclain), are used in this scenario.

FIGURE 6. “Process rate change” scenario.

Agents

To support managing this scenario, we need mechanisms for the following activities:

• collect in real time information concerning operation completion originating from MES,
• compute and constantly update the process rate from the collected information,
• detect and notify the appropriate parties if the current rate change constitutes a significant reduction,
• decide appropriate actions to handle the rate change, and
• carry out the actions.

A collection of agents is assembled to support the chosen scenario. All of these agents speak KQML and are supported by JACKAL. Besides the three service agents ANS, BA, and GA, the multiagent system also includes the following special agents.

• The Process Rate Agent (PRA) is both a mining agent and a monitoring agent for shop floor activities. As a mining agent, PRA requests and receives the messages containing transaction data of operation completion from GA. The data originate from FactoryOp in the BOD format and are converted into KIF format by GA. PRA aggregates the continuing stream of operation completion data and computes the current mean and standard deviation of the processing time for each operation. It also makes the aggregated data available for other agents to access. As a monitoring agent, PRA receives from other agents the monitoring criteria for disturbance events concerning processing rates and notifies the appropriate agents when such events occur.
• The Scenario Coordination Agent (SCA) sets the rate-monitoring criterion, receives the notification for the rate change, and decides, in consultation with human decision makers, appropriate action(s) to take for the changed rate. One of the actions would be to request MOOPI to reschedule if it is determined that the rate change makes the existing schedule impossible to meet. This request is sent from SCA as a KQML message to GA, where it is converted into BOD format. Details of the internal logic and algorithms of SCA that handle the “rate change” scenario are reported elsewhere (Tolone et al., 1998).
• The Directory Assistance Agent (DA) is an auxiliary agent responsible for finding appropriate persons for SCA when the latter needs to consult human decision makers. It also finds the proper mode of communication to that person.
• The Authentication Assistance Agent (AA) is another auxiliary agent used by SCA. It is responsible for conducting authentication checks to see if a person in interaction with SCA has proper authority to make certain decisions concerning the scenario.

Predicates

Three KIF predicates of multiple arguments are defined. These predicates, OP-COMPLETE, RATE, and RATE-CHANGE, are used to compose the contents of messages between agents in processing the process rate change scenario. The OP-COMPLETE predicate contains all relevant information concerning a completed operation, including P/E-Application-id, machine-id, operation-id, starting and finishing time stamps, and quantity. The process time for this operation can then be computed as the difference between the finishing and starting time stamps, divided by the quantity. The RATE predicate contains all relevant information concerning the current average rate of a particular operation at a particular machine with a particular product. The operation rate is represented by its mean and standard deviation over a period of time.
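The per-unit process time and the mean/standard deviation a RATE instance carries could be computed as below. The formula follows the text ((finish − start) / quantity); the dictionary field names are assumptions, not the actual BOD/KIF attribute names:

```python
import statistics

def process_time(op_complete: dict) -> float:
    """Per-unit process time of one OP-COMPLETE instance:
    (finishing time stamp - starting time stamp) / quantity."""
    return ((op_complete["finish"] - op_complete["start"])
            / op_complete["quantity"])

def current_rate(stream):
    """Aggregate a stream of OP-COMPLETE instances for one operation into
    the (mean, standard deviation) pair PRA would publish as a RATE."""
    times = [process_time(op) for op in stream]
    mean = statistics.fmean(times)
    stdev = statistics.stdev(times) if len(times) > 1 else 0.0
    return mean, stdev
```

In PRA these aggregates would be updated incrementally as each OP-COMPLETE message arrives rather than recomputed over the whole history.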
RATE instances are computed and constantly updated by PRA, based on a stream of instances of predicate OP-COMPLETE obtained from GA. The RATE-CHANGE predicate contains all the information needed to construct a BOD that tells MOOPI a significant rate change has occurred and a reschedule based on the new rate is called for. In particular, it contains the operation rate used to compute the current schedule and the new rate. It is the responsibility of the rate change SCA to compose an instance of the RATE-CHANGE predicate and send it to GA when it deems necessary to request MOOPI for a reschedule, based on the process rate change notification from PRA and consultation with human decision makers. Additional predicates and more complicated KIF expressions are needed when dealing with more complicated scenarios.

Agent Collaboration and Message Flow in the Agent System

Figure 7 depicts how agents cooperate with one another to resolve the rate change scenario and sketches the message flow in the agent system. For clarity, ANS and its connections to other agents are not shown in the figure. The message flow employed to establish connections between SCA and DA and AA (brokered by BA) is also not shown. Each of these agents needs information from others to perform its designated tasks. Each of them may also have information others need. Since there is no predetermined stationary connection among agents, the broker agent (BA) plays a crucial role in dynamically establishing communication channels for agents’ information exchange.

Advertising to BA

GA advertises that it can provide the OP-COMPLETE predicate. It also advertises to be able to accept the RATE-CHANGE predicate. PRA advertises that it has current process rates available for some operations in the form of the RATE predicate.
The following is an example of an advertise message from GA to BA:

(advertise
  :sender GA
  :receiver BA
  :reply-with ⟨a unique string as the tag of this message⟩
  :content (subscribe
    :content (ask-one
      :content (OP-COMPLETE ?x1 ... ?xn))))

Requesting Recommendation from BA

PRA asks BA to recommend an agent that can provide the OP-COMPLETE predicate and will receive the recommendation of GA in a responding tell message. Similarly, SCA asks BA to recommend an agent that can provide the RATE predicate and receives PRA in response. It also asks BA to recommend an agent that can accept the RATE-CHANGE predicate and receives GA in response. The following is an example of a recommend-one message from PRA:

(recommend-one
  :sender PRA
  :receiver BA
  :reply-with ⟨a unique string as the tag of this message⟩
  :content (subscribe
    :content (ask-one
      :content (OP-COMPLETE ?x1 ... ?xn))))

In response, BA sends the following tell message to PRA:

(tell
  :sender BA
  :receiver PRA
  :in-reply-to ⟨the tag of the message to which this message responds⟩
  :content (GA))

Upon the recommendation from BA, an agent can then obtain the needed information by sending ask or subscribe messages to the recommended agent.

Monitoring/Notification

When SCA knows from BA that PRA has advertised that it can provide the current rate for a certain operation, it may send PRA the following subscribe message:

(subscribe
  :sender SCA
  :receiver PRA
  :reply-with ⟨a unique string as the tag of this message⟩
  :language KQML
  :content (ask-one
    :language KIF

With this message, SCA tells PRA that it is interested in receiving new instances of the RATE predicate whenever the mean value of the new rate is less than fifty. This effectively turns PRA into a process rate monitor with “mean < 50” as the monitor criterion. Whenever the newly updated rate satisfies this criterion, PRA immediately notifies SCA by sending it a tell message with the new rate’s mean and standard deviation.
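The message exchange above can be sketched in a few lines: a helper that renders a KQML s-expression, and a callback that applies SCA's "mean < 50" criterion to each updated rate. Both are illustrative assumptions (the `kqml` and `rate_monitor` names and the rendering rules are not from JACKAL, and no performative validation is done):

```python
def kqml(performative: str, **params) -> str:
    """Render a KQML message as an s-expression string; underscores in
    keyword names become hyphens (e.g. in_reply_to -> :in-reply-to)."""
    fields = " ".join(f":{k.replace('_', '-')} {v}"
                      for k, v in params.items())
    return f"({performative} {fields})"

def rate_monitor(criterion, notify):
    """Return a callback PRA could apply to each newly updated rate,
    sending the subscriber a tell message when the criterion holds."""
    def on_update(mean: float, stdev: float):
        if criterion(mean):
            notify(kqml("tell", sender="PRA", receiver="SCA",
                        content=f"(RATE {mean} {stdev})"))
    return on_update
```

A subscription with the "mean < 50" criterion would then be `rate_monitor(lambda mean: mean < 50, send)`, where `send` hands the rendered tell message to the outbound path.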
EVALUATION

The prototype agent system outlined in the previous section has been installed at IBM’s Boca Raton CIIMPLEX integration center, where a host of P/E application systems, including FactoryOp and MOOPI, is running. The system has been tested successfully. The architecture of this system can easily be applied to handle other types of P/E exception scenarios. For example, we have recently assembled another agent system to manage a different exception scenario in which some BODs sent to an application are out of sequence and need to be resynchronized. Additional experiments have been conducted to test the reliability and scalability of JACKAL. In the experiments, JACKAL is seen to be able to support up to forty-two agents in a ring on a single NT machine (until the machine runs out of memory), and a message takes an average of 0.3 second to traverse the entire ring. JACKAL is also seen to easily handle messages of large size. The largest ones we have tested are binary image data of half a megabyte. In summary, the prototype system achieves the following, which as discussed in the beginning of this article are essential for manufacturing planning and execution integration:

1. Specialized agents (e.g., PRA, MA, and SCA) are built to provide the functionality needed to manage P/E exceptions in manufacturing. These agents fill the white space in between legacy P/E application systems.
2. JACKAL, the interagent communication infrastructure, is reliable and scalable. It is also easy to use in developing individual agents because it imposes minimum interference between communication and other agent functions.
3. Conversation policies implemented in JACKAL realize, to some extent, the belief-desire-intention (BDI) model intended by KQML. These policies can be used to enforce semantically correct agent-to-agent conversations.
4. The brokering agent (BA) provides necessary support for flexible agent-to-agent collaboration.
The gateway agent (GA) provides the interface between the agent world and the application world. They together make runtime collaboration among all modules possible.
5. Ontological support (i.e., the BOD definitions), although very limited at this moment, enables collaboration between agents and legacy P/E systems.

In comparison with some existing work of others, our work is more focused on the particular needs of the CIIMPLEX integration environment. Works such as ZEUS (Nwana et al., 1998), dMARS based on the Procedural Reasoning System (Georgeff & Lansky, 1987), ADEPT (Jennings et al., 1996), and RETSINA (Sycara et al., 1996) attempt to provide generic agent construction environments and toolkits or general agent architectures. Our work can be seen as more of a point solution, although some techniques from this work, such as JACKAL, can be used as a general-purpose communication infrastructure for KQML-speaking agents. We do not impose a unified architecture for all of our agents. Besides the interface to JACKAL, each agent is free to choose its own internal structure. Currently, we are considering a Linda-like shared memory or lightweight blackboard architecture for intraagent (component-to-component) interaction. There is a great deal of research activity in developing agent systems for manufacturing and other business applications. Examples of such systems include ADEPT (Jennings et al., 1996) for business management, COOL for supply chain management (Barbuceanu & Fox, 1996), and ABMEI for manufacturing integration (Shen et al., 1998), to mention just a few. A distinguishing characteristic of our work is that we explicitly deal with real legacy P/E systems. Although others discuss legacy systems in principle (Jennings & Wooldridge, 1998; Shen et al., 1998), these legacy systems are seldom included in the actual implementation. Another major difference between our work and some others is in the way the agent collaboration is achieved.
COOL (Barbuceanu & Fox, 1996) emphasizes high-level coordination by negotiation and extends KQML to support the specification of conversations for negotiation.* ABMEI (Shen et al., 1998) uses a network of mediators whose main purpose is to resolve heterogeneity between subsystems. Agents in their systems presumably have the knowledge of what agents to contact when certain needs arise, while in our system, agents do not assume knowledge of what others can do. Each agent advertises services it can provide and announces what services it needs. The service requests are matched with the advertisements by the BA, and the communication between the matched pairs then follows. The Open Agent Architecture (OAA) (Martin et al., 1998) goes further to use a powerful facilitator to coordinate agents’ activities in a system. The facilitator works like a planner. Based on the knowledge stored, it is able to decompose a task received from an agent into subtasks, delegate subtasks to appropriate agents, and monitor and coordinate the executions of these subtasks. A drawback of OAA is that the facilitator is also the communication center. All agents must communicate via the facilitator, while in our system, agents can communicate directly with each other after the brokering matches.

* Like the conversation policies in our system, COOL's conversations are also defined using DFAs. The difference is that their conversations specify negotiation conventions, and our conversations implement intended semantics for KQML performatives based on speech act theory. In other words, they are at different levels of abstraction and for different purposes: theirs is for specifying what it will be, and ours for ensuring what it should be.

CONCLUSION

In this article, we presented a multiagent system that is capable of supporting intelligent integration of manufacturing planning and execution, especially in managing the exceptions in business scenarios.
With this approach, a set of software agents with specialized expertise can be quickly assembled to help gather relevant information and knowledge and to cooperate with each other and with other management systems and human managers and analysts, in order to arrive at timely decisions in dealing with various enterprise scenarios. This system has been tested successfully with real manufacturing scenarios involving real legacy MES and schedulers. The work presented here represents only the first step of our effort toward agent-based enterprise integration for manufacturing planning and execution. Further research and experiments are needed to extend the current work and to address its shortcomings. Although KQML does not impose many constraints and requirements on the internal structure of agents, it may be beneficial to have a common framework for the agents' internal structure within a single agent system. We are currently considering a lightweight blackboard architecture for such a framework, which, among other advantages, may provide flexibility for agent construction, agent component reusability, and plug-and-play. Another research direction under consideration is to increase the functionality of the BA and make it more intelligent. The BA in our current implementation can only conduct brokering activities at the level of predicates. With the help of a machine-interpretable common ontology and an inference engine, more intelligent brokering can be developed to work with object hierarchies and to make intelligent choices. The current ontological support is very limited. It only provides definitions of various BODs and constraints of BOD fields to ensure data consistency. We are currently extending the ontology to include deductive rules for additional interrelations between different BODs and BOD fields to support more complicated business scenarios.
Work is also under way to identify more complex enterprise scenarios, which require non-trivial interactions with more legacy systems, and their solutions represent significant added values to the manufacturing production management.

REFERENCES
Continuous Delivery Practices in a Large Financial Organization Carmine Vassallo¹, Fiorella Zampetti¹, Daniele Romano², Moritz Beller³, Annibale Panichella³, Massimiliano Di Penta¹, Andy Zaidman³ ¹University of Sannio, Italy, ²ING NL, Amsterdam, The Netherlands, ³Delft University of Technology, The Netherlands Abstract—Continuous Delivery is an agile software development practice in which developers frequently integrate changes into the main development line and produce releases of their software. An automated Continuous Integration infrastructure builds and tests these changes. Claimed advantages of CD include early discovery of (integration) errors, reduced cycle time, and better adoption of coding standards and guidelines. This paper reports on a study in which we surveyed 152 developers of a large financial organization (ING Netherlands), and investigated how they adopt a Continuous Integration and delivery pipeline during their development activities. In our study, we focus on topics related to managing technical debt, as well as test automation practices. The survey results shed light on the adoption of some agile methods in practice, and sometimes confirm, while in other cases, confute common wisdom and results obtained in other studies. For example, we found that refactoring tends to be performed together with other development activities, technical debt is almost always “self-admitted”, developers timely document source code, and assure the quality of their product through extensive automated testing, with a third of respondents dedicating more than 50% of their time to doing testing activities. Index Terms—Continuous Delivery, Continuous Integration, DevOps, Agile Development, Technical Debt, Refactoring, Testing, Test-Driven Development I. INTRODUCTION Continuous Integration (CI) was originally introduced by Grady Booch in 1991 [1], and came into fashion as one of the twelve Extreme Programming practices in 1997 [2]. 
Fowler defines CI as [3]: A software development practice where members of a team integrate their work frequently, usually each person integrates at least daily – leading to multiple integrations per day. Each integration is verified by an automated build (including test) to detect integration errors as quickly as possible. CI has multiple assumed benefits, for example, that integration errors among different components of a software application can be detected earlier, easier, and with less manual effort [4]. At the heart of CI stands a testing phase, possibly in multiple integration environments, in which unit, integration, system, and even acceptance tests can automatically be executed [5], [3]. This is complemented by running Automated Static Analysis Tools (ASATs), e.g., FindBugs, Checkstyle, or JSHint, as part of the CI to augment the dynamic testing phase [6]. In addition to these checks of code and system quality, CI is said to improve release frequency and predictability [7], increase developer productivity [8] and improve communication [9], hence reducing the time-to-market and allowing users to benefit from continuous updates of their software. Continuous Delivery (CD) is the development practice that enables frequent releases with the help of a CI process [10]. Ståhl and Bosch observed that CI, and by extension CD, have become increasingly popular in software development [11]. However, Ståhl and Bosch observed that there is not one homogeneous practice of continuous integration; indeed, there are variation points, with the term continuous integration acting as an umbrella for a number of variants [11]. Moreover, they showed that there is no clear insight into how the practice of CD influences other aspects of the development process.
The goal of this paper is thus to shed light on the interaction between CI and CD from the aspect of (i) the general development process, (ii) managing technical debt, (iii) testing activities, and (iv) technical questions about the CI infrastructure. To bootstrap this investigation, one of the authors spent three months as an intern in a large financial organization, namely ING Netherlands (https://www.ing.nl, in the following referred to as ING NL) and observed how their newly adopted CD environment enables developers to run their own operations, called DevOps [12]. Based on these inside observations by an outsider to ING NL, we have designed a survey in which we asked developers about various practices they adopted in their CD pipeline. By consulting and embedding an external technical expert without domain knowledge, ING NL wanted to gain an independent understanding of their process and identify potential areas of improvement with regard to testing and managing technical debt. Paper Structure. Section II provides an overview of the CD pipeline in ING NL. Section III defines the study, formulates its research questions, and details its planning. Then, Section IV reports and discusses the study results. Threats to validity of the conducted studies are then discussed in Section V, while Section VI discusses related literature on CD and build-release management. Finally, Section VII concludes the paper.

II. CONTINUOUS DELIVERY IN ING NL

ING is a large financial organization with about 94,000 employees and over 67 million customers in more than 40 countries. Nine years ago, ING NL realized the need to fundamentally change the organization of its Information Technology (IT) department. The main rationale was to bridge the gap between IT and ING NL’s core business. Before that, the IT activities were mainly outsourced, which created managerial effort and costs, while taking resources away from development.
Moreover, the previously adopted development process exhibited a communication gap between the department aimed at “changing the business”, i.e., changing its software, and the department aimed at “running the business”, i.e., operating and maintaining the software. Such a gap was mainly bridged by complex processes and procedures for managing changes. This rigor was mainly introduced to ensure stability of the software systems being developed. To “change the business”, the focus was on guaranteeing short release cycles. This created conflicting objectives between developers (“Devs”), whose goal it was to meet deadlines, and operators (“Ops”), whose goal it was to reduce the risk of runtime incidents. The development process changed when ING NL decided to introduce a mobile application for online banking, since long development cycles would have led to an outdated application. For this reason, development activities were changed from the previous outsourcing model to a development process in which the development was internal to the company. When changing the development process, DevOps teams have been introduced. Such teams take charge of the application over its whole lifetime, i.e., during development and operations. The next step was the introduction of a CD pipeline enforcing an agile development process to reduce the testing and deployment effort and duration, especially because such activities used to be mainly manual work for two separate teams. Fig. 1 depicts the CD pipeline that has been put in place in ING NL. As the figure shows, the pipeline is composed of two layers: a base layer (depicted at the bottom), which is a typical CD pipeline, and an additional layer (top) which deals with continuous monitoring. As soon as the developer pushes a commit, this is detected by the CI server, Jenkins [13], and triggers the software build.
Its main task is to run build scripts, mainly Maven scripts, but also, for a minority of projects, Ant, Gradle and other build scripts. Similar to most open-source CI builds [5], builds at ING NL are considered broken for a number of reasons, ranging from traditional compile errors to failing test cases, up to software quality problems – e.g., the presence of a code smell like too high McCabe cyclomatic complexity – detected by ASATs. At ING NL, the ASAT of choice is SonarQube [14]. In case the build succeeds, the artifacts are stored in the Repository stage using Artifactory [15]. This introduces several advantages, such as the possibility of implementing caching mechanisms for rapid application re-deployment. Once the Repository stage is reached, the application is ready to be deployed in different environments, i.e., DEV (development), TST (testing), ACC (acceptance), and PRD (production). The monitoring layer in the pipeline (top part in Fig. 1) collects a series of metrics for evaluating the CD pipeline performance. This comprises the three phases of (i) instantiating a CD pipeline, (ii) performing measurements on the pipeline, and (iii) learning from such measurements to further improve the pipeline. The monitoring layer is detailed in Fig. 2. It is composed of one event bus, implemented using Apache Kafka [16], and aimed at collecting events (e.g., build failures or successes) from the pipeline and storing them in a database, implemented using MongoDB [17]. Then, the information stored in the database is utilized by different monitoring tools, shown in the top part of Fig. 2. The system health monitoring tool monitors the pipeline’s software and hardware resources and its primary purpose is ensuring the pipeline’s availability. The automated acceptance criteria tool aims at checking whether the release meets the acceptance criteria defined by the organization, before promoting it to the ACC or PRD stage.
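The breakage reasons listed above (compile errors, failing tests, ASAT quality violations) suggest the shape of the events the monitoring layer would push onto the event bus. The sketch below is an assumption mirroring those categories; the field names, breakage labels, and `BuildResult` type are hypothetical, not ING NL's actual event schema:

```python
from dataclasses import dataclass

@dataclass
class BuildResult:
    compiled: bool
    tests_failed: int
    quality_violations: int  # e.g. ASAT rule violations reported by SonarQube

def build_event(project: str, result: BuildResult) -> dict:
    """Classify one build outcome into the kind of event record the
    monitoring layer might store, checking breakage causes in the
    order the pipeline would hit them."""
    if not result.compiled:
        status = "broken:compile"
    elif result.tests_failed > 0:
        status = "broken:test"
    elif result.quality_violations > 0:
        status = "broken:quality"
    else:
        status = "success"  # artifacts proceed to the Repository stage
    return {"project": project, "status": status}
```

Records of this shape, accumulated over time, are what tools such as the test analytics tool would aggregate into statistics like the percentage of failed tests.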
The automated team maturity and test analytics tools inform teams about releases (e.g., mean cycle time a team is able to handle) and statistics about test execution, such as the percentage of failed tests. The whole monitoring approach reflects the Lean cycle [18], in which DevOps engineers continuously learn by observing metrics and adapt the pipeline when needed. ING NL has monitored the effect of CD adoption in terms of costs, productivity, and customer satisfaction. In three years, from 2011 to 2013, ING NL has increased the number of delivered function points by 300% and reduced the cost of a single function point to one third. Additionally, between 2012 and 2014, the release frequency has doubled, reaching one release every four days. III. STUDY DESIGN The goal of this study is to better understand the implementation of CD practices in industry, by surveying how software engineers use relevant methods and tools during the development process. The context is the CD pipeline of a large financial organization (ING NL). More specifically, the study aims at addressing the following four research questions: - **RQ1**: *What are general development practices within the Continuous Delivery pipeline?* This research question is preliminary to the ones going deeper into the CD process, and mainly aims at investigating to what extent developers share a common development methodology, and how they plan, schedule, and monitor their development activities. - **RQ2**: *What are the practices adopted to manage technical debt?* This research question aims at understanding how developers manage technical debt by commenting source code, by reviewing it, and by performing any sort of static analysis or metric extraction. 
- **RQ3**: *What are the testing practices adopted within the Continuous Delivery pipeline?* This research question aims at understanding how testing is framed within the software development process, *e.g.*, whether DevOps adopt a Test-Driven Development approach [19]. - **RQ4**: *How is Continuous Integration performed?* This research question investigates the developers’ attitude towards coordinating changes through the CD infrastructure, including the use of private builds and the priority given to fixing build breakages. ### A. Context Selection As the population of candidate participants for the survey, we selected a total of 176 DevOps engineers belonging to various development teams of ING NL. Such participants have been identified through the projects’ mailing lists. ### B. Survey Design The four research questions have been addressed by means of a survey. The survey has been designed by the authors by observing the development activities (looking at the lifecycle of user stories and participating in daily stand-up meetings) and talking with developers to get insights about the CD pipeline and the way it has been implemented in ING NL. The survey also addresses specific knowledge needs at ING NL, triggered by one of the authors who is affiliated with ING NL. The survey is organized into four sections, plus a preliminary section aimed at investigating demographic characteristics of the respondents (age, years of experience, years in ING NL, and technical skills). Overall, it consists of 48 questions, plus five demographics questions. The questionnaire allowed respondents to select one or more answers (in most cases multiple answers were allowed), and if needed to provide a textual answer (*i.e.*, by selecting “Other” among the options). In Tables I–IV, we give an abbreviated summary of the questions we asked developers.\(^1\) Table I reports the questions aimed at addressing **RQ1**.
As can be noticed, besides the first question, mainly aimed at understanding whether DevOps engineers share the methodology being adopted, all other questions clearly refer to agile development practices and in particular to Scrum [20]. For example, we ask questions about sprint planning and user story progress monitoring, but also specific questions about how DevOps manage issues and schedule/perform refactoring actions. We asked specific questions about refactoring as in this study we were particularly interested in understanding activities related to technical debt management. Specific questions about managing technical debt – reported in Table II – compose the second part of the survey, aimed at addressing **RQ2**. We ask questions about (i) how developers document source code by means of comments, (ii) how they perform code review, (iii) what kinds of problems they detect by means of code review and using automated smell detection tools, as well as how they remove problems by means of refactoring, and (iv) whether they perceive that smells are usually introduced because of deadline pressure. The third part of the survey aims at addressing **RQ3** and features questions about testing activities, as shown in Table III. After having asked a question aimed at understanding whether DevOps engineers use TDD, we asked questions about the effort spent on writing test cases and to what extent test cases are kept up-to-date. Also, we ask questions about information and strategies being used to derive test cases for different testing levels. Then we ask questions about test execution (*i.e.*, to what extent this is done within private builds or on the CI server), and how developers assess test effectiveness and deal with low coverage.
Finally, the fourth part of the survey addresses **RQ4** and is composed of questions (see Table IV) about (i) promotion policies\(^2\), (ii) how DevOps engineers handle build failures, (iii) how they use branches and (iv) how frequently they push their changes. ### C. Survey Operation The survey questionnaire was uploaded onto a survey management platform internal to ING NL, and the candidate participants were invited using an invitation letter explaining the general goals of the survey, its length and estimated time to complete, and highlighting how its results have the purpose of understanding the CD process within ING NL, also in order to identify directions for its improvement. Respondents had a total of three weeks to participate in the survey, and a weekly reminder was sent to those who had not yet participated. In total, we obtained 152 filled questionnaires, *i.e.*, we achieved a return rate of 85%. We left respondents the choice not to answer a question. The number of answers for each question is reported in the last column of the tables enumerating the questions. Overall, the median number of responses per question was 129 for **RQ1**

---
\(^1\) The original survey with all questions is available at https://figshare.com/s/fa8c4e11fc9fa4b8f8cb
\(^2\) A promotion entails the selection of a release candidate and subsequent deployment to the correct environment [21].

## TABLE I **DEVELOPMENT PROCESS - QUESTIONS (S/M/R STANDS FOR SINGLE, MULTIPLE, OR RANKING ANSWER QUESTION).** <table> <thead> <tr> <th>#</th> <th>Summarized Question</th> <th>S/M/R</th> <th># of Resp.</th> </tr> </thead> <tbody> <tr> <td>Q1.1</td> <td>What is your software development methodology?</td> <td>S</td> <td>150</td> </tr> <tr> <td>Q1.2</td> <td>Is the product vision always clear to you? Why? Why not?</td> <td>S,M</td> <td>149</td> </tr> <tr> <td>Q1.3</td> <td>Do you prefer to use a physical board or an electronic one?
Why?</td> <td>S,M</td> <td>125</td> </tr> <tr> <td>Q1.4</td> <td>During a sprint why do you add some tasks to the already planned ones?</td> <td>R</td> <td>138</td> </tr> <tr> <td>Q1.5</td> <td>Which is the main topic you address during the sprint retrospective?</td> <td>S</td> <td>138</td> </tr> <tr> <td>Q1.6</td> <td>Which is the average percentage of completed user stories at the end of a sprint?</td> <td>S</td> <td>138</td> </tr> <tr> <td>Q1.7</td> <td>Which Scrum metrics do you usually collect?</td> <td>M</td> <td>128</td> </tr> <tr> <td>Q1.8</td> <td>Which is the main reason why a “done” user story comes back to “in-progress”?</td> <td>S</td> <td>130</td> </tr> <tr> <td>Q1.9</td> <td>Do you consider non-functional requirements as definition of “done” of a user story?</td> <td>S</td> <td>130</td> </tr> <tr> <td>Q1.10</td> <td>Which kind of non-functional requirements do you consider as definition of “done” of a user story?</td> <td>M</td> <td>120</td> </tr> <tr> <td>Q1.11</td> <td>You detect a defect that was previously resolved: how to deal with it?</td> <td>S</td> <td>129</td> </tr> <tr> <td>Q1.12</td> <td>Do you usually schedule refactoring tasks? 
Why?</td> <td>S</td> <td>129</td> </tr> <tr> <td>Q1.13</td> <td>Which priority do you usually assign to refactoring tasks?</td> <td>S</td> <td>128</td> </tr> <tr> <td>Q1.14</td> <td>How frequently are refactoring tasks included in other tasks?</td> <td>S</td> <td>128</td> </tr> <tr> <td>Q1.15</td> <td>Which is the average percentage of scheduled refactoring tasks that are completed at the end of a sprint?</td> <td>S</td> <td>123</td> </tr> </tbody> </table> ## TABLE II **MANAGING TECHNICAL DEBT - QUESTIONS (S/M/R STANDS FOR SINGLE, MULTIPLE, OR RANKING ANSWER QUESTION).** <table> <thead> <tr> <th>#</th> <th>Summarized Question</th> <th>S/M/R</th> <th># of Resp.</th> </tr> </thead> <tbody> <tr> <td>Q2.1</td> <td>To what extent do you introduce method and class level comments?</td> <td>S</td> <td>116</td> </tr> <tr> <td>Q2.2</td> <td>To what extent do you introduce statement level comments?</td> <td>S</td> <td>116</td> </tr> <tr> <td>Q2.3</td> <td>To what extent do you update code documentation/comments?</td> <td>S</td> <td>116</td> </tr> <tr> <td>Q2.4</td> <td>Do you perform code review? Why?</td> <td>S,M</td> <td>116</td> </tr> <tr> <td>Q2.5</td> <td>How do you usually detect code smells?</td> <td>M</td> <td>110</td> </tr> <tr> <td>Q2.6</td> <td>Which of those problems do you usually detect? (null pointers, interface misuse, memory leaks, unreachable code, unused variables, uninitialized variables)</td> <td>M</td> <td>116</td> </tr> <tr> <td>Q2.7</td> <td>Which of these bad design/implementation choices do you usually detect during code reading? 
(function having huge size, method with many responsibilities, high module coupling, module exposing its attributes)</td> <td>M</td> <td>116</td> </tr> <tr> <td>Q2.8</td> <td>Which source code metrics do you usually look at?</td> <td>M</td> <td>116</td> </tr> <tr> <td>Q2.9</td> <td>Do you sometimes do poor implementation choices because of near deadline?</td> <td>S</td> <td>116</td> </tr> <tr> <td>Q2.10</td> <td>Do you usually use a tool in order to do code refactoring? Why?</td> <td>S</td> <td>116</td> </tr> </tbody> </table> ## TABLE III **TESTING - QUESTIONS (S/M/R STANDS FOR SINGLE, MULTIPLE, OR RANKING ANSWER QUESTION).** <table> <thead> <tr> <th>#</th> <th>Summarized Question</th> <th>S/M/R</th> <th># of Resp.</th> </tr> </thead> <tbody> <tr> <td>Q3.1</td> <td>Do you use TDD (Test Driven Development)? Why/why not?</td> <td>S,M</td> <td>125</td> </tr> <tr> <td>Q3.2</td> <td>Which percentage of your time do you spend on writing tests?</td> <td>S</td> <td>124</td> </tr> <tr> <td>Q3.3</td> <td>How frequently do you review and (if necessary) update the tests for every change to production code?</td> <td>S</td> <td>124</td> </tr> <tr> <td>Q3.4</td> <td>Do you usually test the code written earlier by others? Why (not)</td> <td>S,M</td> <td>124</td> </tr> <tr> <td>Q3.5</td> <td>Which strategy do you usually use to categorize inputs for each test case?</td> <td>S</td> <td>122</td> </tr> <tr> <td>Q3.6</td> <td>Which information do you need in order to perform Unit Testing?</td> <td>M</td> <td>122</td> </tr> <tr> <td>Q3.7</td> <td>Which information do you need in order to perform Integration Testing?</td> <td>M</td> <td>122</td> </tr> <tr> <td>Q3.8</td> <td>Do you usually automate the generation of the test cases?</td> <td>S</td> <td>122</td> </tr> <tr> <td>Q3.9</td> <td>In which kind of testing do you usually automate the generation of the test cases?</td> <td>M</td> <td>21</td> </tr> <tr> <td>Q3.10</td> <td>Which kinds of testing are executed automatically? 
Why (not)?</td> <td>M</td> <td>120</td> </tr> <tr> <td>Q3.11</td> <td>Where do you test code?</td> <td>M</td> <td>120</td> </tr> <tr> <td>Q3.12</td> <td>Which percentage of written tests are executed?</td> <td>S</td> <td>120</td> </tr> <tr> <td>Q3.13</td> <td>Do you always run all test cases together? Why?</td> <td>S</td> <td>120</td> </tr> <tr> <td>Q3.14</td> <td>How frequently do tests pass?</td> <td>S</td> <td>120</td> </tr> <tr> <td>Q3.15</td> <td>Which types of code coverage do you measure?</td> <td>M</td> <td>107</td> </tr> <tr> <td>Q3.16</td> <td>Which is the average percentage of code coverage that you usually score during unit testing?</td> <td>S</td> <td>103</td> </tr> <tr> <td>Q3.17</td> <td>How do you deal with low coverage?</td> <td>S</td> <td>103</td> </tr> <tr> <td>Q3.18</td> <td>Which of those test metrics do you find useful?</td> <td>M</td> <td>116</td> </tr> <tr> <td>Q3.19</td> <td>How do you react to a failure?</td> <td>R</td> <td>116</td> </tr> </tbody> </table> ## TABLE IV **CONTINUOUS INTEGRATION - QUESTIONS (S/M/R STANDS FOR SINGLE, MULTIPLE, OR RANKING ANSWER QUESTION).** <table> <thead> <tr> <th>#</th> <th>Summarized Question</th> <th>S/M/R</th> <th># of Resp.</th> </tr> </thead> <tbody> <tr> <td>Q4.1</td> <td>Promotion policies: what do you do when you are ready to push code on the master branch?</td> <td>S</td> <td>112</td> </tr> <tr> <td>Q4.2</td> <td>How do you deal with failures at building/packaging time?</td> <td>S</td> <td>112</td> </tr> <tr> <td>Q4.3</td> <td>Branching issues: how do you deal with parallel development?</td> <td>S</td> <td>112</td> </tr> <tr> <td>Q4.4</td> <td>When do you usually push your changes?</td> <td>S</td> <td>112</td> </tr> </tbody> </table> questions, 116 for RQ2, 120 for RQ3 and 112 for RQ4. Only for one question (Q3.9, dealing with specific aspects of test automation) the number of answers was below 100, i.e., 21. 
Both the overall return rate and the return rate for the individual questions are higher than typical return rates for software engineering surveys conducted in industry, which often range between 10% and 25% [22], [23]. The high return rate gives us confidence that our survey accurately reflects the opinion of the sampled developers.

IV. STUDY RESULTS

In this section, we highlight key results of our study that directly address the research questions from Section III.

A. Respondents’ demographics

Table V and Fig. 3 report demographic information about the study respondents, namely their age, years of experience, years spent in ING NL, and their main skills (multiple answers were allowed). Most of the respondents are relatively senior both in terms of age and development experience (the majority of them are between 30 and 50 years old, with over 11 years of experience). Their main technological expertise relates to Java or JavaScript programming, and to both relational and NoSQL databases.

B. RQ1: What are general development practices within the Continuous Delivery pipeline?

**Methodology.** When we asked about the kind of methodology being adopted in the development process (Q1.1), almost all developers (97%) mentioned they use Scrum as their development methodology. At the same time, the product vision (Q1.2) is clear to only 68% of the respondents. One important reason for the lack of clarity is the frequent changes, which are pretty common in agile development. Interestingly, while most of the respondents (69%) prefer to use an electronic Scrum board (Q1.3), a quite high percentage (31%) still prefers a physical Scrum board. On the one hand, they say that an electronic Scrum board facilitates distributed team work (84%) and provides automated calculation of sprint progress metrics (59%). On the other hand, a physical board is always visible in the room (90%) and improves team cohesion (64%). **Sprint Management.**
Developers declared that, during a sprint, they add some tasks to the already planned ones (Q1.4). As the main reason for that, 60% of them indicate bug fixing, followed by missing detailed requirements (33%); only 7% mentioned high-level business requirements missing during the planning. During the sprint retrospective (Q1.5), i.e., the meeting in which the sprint activities were discussed in order to understand what went well, what went wrong, and how things can be improved for the next sprint, developers mainly discuss and try to harmonize the way they work (88%). Few responses concern bad implementation (1%), the product not meeting functional (1%) or non-functional (1%) requirements, and other issues (7%). Fig. 4 reports the average percentage of completed user stories at the end of a sprint (Q1.6). In most cases, respondents agree that no less than 80% of user stories are completed. Besides functional requirements, user story completion also involves different kinds of non-functional requirements: developers consider security (89%), reliability (86%) and maintainability (82%) as high-priority requirements. The main monitoring mechanisms for the sprint progress (Q1.7) are the sprint burn-down (60%, tracking the sprint completion), and the velocity, i.e., the number of story points [24, page 87] per hour (58%). A small percentage of respondents consider the number of defects postponed (3%), or the technical debt incurred (7%), as important indicators able to influence the completeness of a user story. In some cases, a completed user story may be rolled back to “in-progress” (Q1.8), mainly because developers realize that functional (34%) or non-functional (25%) requirements are not completely implemented. Only in 22% of the cases does this occur because of changes in users’ expectations.
Seven respondents (5%) explicitly specified that, in case they realize requirements have changed – e.g., because of changed users’ expectations – they rather open a new user story than reopen a previously closed one. One respondent even clarified that a “done” user story should be considered to be in production already, and therefore should not be reopened again. When a previously resolved defect occurs again (Q1.11), 52% of the respondents indicate that they open a new issue anyway. This can indicate a careful approach in which developers try to keep the new occurrence of the defect separate from the previous one. **Refactoring activities.** When we asked about refactoring tasks (Q1.12), 64% of respondents indicated that refactoring is usually properly scheduled. The main reasons for refactoring include improving program comprehension (87%), making changes easier (77%), and helping to find bugs (24%). Those who do not schedule refactoring tasks do so either because such tasks are too time-consuming and take effort away from feature implementation tasks (27%), or because they do not clearly perceive the advantages of refactoring (9%). A large proportion of respondents (64%) indicate other reasons. For example, they mentioned that “refactoring is just performed as it pops up”, that they “naturally consider refactoring as part of other development tasks”, or that “code should be made maintainable right away”. Also, some respondents indicated planning reasons, i.e., refactoring being part of the user story effort calculation. Last, but not least, some indicated that it all depends on the size of the refactoring activity to be performed, i.e., small refactorings are performed together with development, whereas larger ones are kept separate. When scheduled (Q1.13), refactoring tasks often have a medium priority (70%), with 9% of respondents assigning a high priority and 23% a low priority.
Indeed, 42% of respondents indicate that more than 80% of the planned refactorings within a sprint are actually completed (Q1.15). In contrast to what Fowler reported [25], refactoring tasks are often performed together with other tasks, as shown in Fig. 5 (Q1.14). Only 5% of respondents declare that they clearly separate refactoring from other tasks.

**C. RQ2: What are the practices adopted to manage technical debt?**

**Source code comments.** The first block of questions we asked about managing technical debt concerned the way and the extent to which developers comment source code. Respondents said they almost always (23%), often (34%), and sometimes (24%) introduce class-level and method-level comments (Q2.1). Instead, as expected, only 3% and 15% of the respondents introduce statement-level comments always and often, respectively (Q2.2). Still, 38% of the respondents introduce them sometimes. In line with the CD process, and with the aim of preserving program understanding, 79% of the respondents update comments immediately when changing the source code, while 13% postpone such changes to a specific phase aimed at producing/updating documentation (Q2.3). **Code reviews.** Code review (Q2.4) is adopted by almost all respondents (95%) and, as shown in Fig. 6, the obvious purposes are detecting bad smells (90%) and finding defects (81%). However, code review is also widely used to share code knowledge (85%) or to find alternative ways of implementing a feature (75%). These results are partially in line with the observations on the code review process at Microsoft [26] and on open-source projects [27]. At Microsoft, finding defects was the most important motivation, followed by code improvement and finding alternative solutions, while sharing code ownership was only ranked seventh.
**Analysis of bad code smells.** Respondents indicate (Q2.5) that code reviews are the premier way of detecting code smells (92%), while 63% of the respondents also use static analysis tools\(^4\). The main problems detected (Q2.6) by means of either automated or manual code review are reported in Fig. 7 (a): the majority indicated unused variables (78%), uninitialized variables (62%), null pointers/references\(^5\) (62%), and unreachable code (61%) as the main problems detected.

\(^4\) Due to confidentiality reasons, we cannot disclose the list of tools being used.
\(^5\) Including null references in languages not directly using pointers, e.g., Java.

Fig. 7. Problems detected by automated and manual code review. (a) Q2.6 – Software defects: unused variables 78%; use of uninitialized variables 62%; null pointers/references 62%; unreachable code 61%; interface misuse 33%; memory leaks 24%. (b) Q2.7 – Bad design choices: large (function) size 75%; low cohesion 70%; high coupling 49%; lack of encapsulation 34%; other 13%.

Fig. 8. Metrics collected to monitor source code quality: amount of duplicated code 78%; cyclomatic complexity 69%; number of function parameters 51%; lines of code (LOC) 44%; comment words 18%; number of source files 16%; other 15%.

Fig. 9. Q3.1 – Adoption of Test-Driven Development: always 34%; when I have time 33%; only for certain kinds of systems 12%; no 22%.

D. RQ3: What are the testing practices adopted within the Continuous Delivery pipeline?

**Test-Driven Development (TDD) and Testing in general.** TDD is the practice of “driving development with tests” [35]. As reported in Fig. 9, 34% of the respondents say they always use TDD (Q3.1), 33% answered they use TDD when they have time, and 12% use it only for certain kinds of (sub)systems; 22% do not use TDD at all. Respondents reported adhering to a TDD style when they can create or have existing unit (96%), integration (53%), acceptance (25%), or performance (15%) tests for the functionality they are about to implement.
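The red-green rhythm of TDD – write a failing unit test first, then just enough production code to make it pass – can be sketched in a few lines of Python. The `parse_version` example below is ours, not taken from the survey:

```python
import unittest


# Step 1 (red): the unit test is written first; it fails as long as
# parse_version does not exist or is wrong.
class TestParseVersion(unittest.TestCase):
    def test_splits_semver_string(self):
        self.assertEqual(parse_version("2.10.3"), (2, 10, 3))


# Step 2 (green): the minimal production code that makes the test pass.
def parse_version(version: str) -> tuple:
    return tuple(int(part) for part in version.split("."))


# Step 3 (refactor) would follow, with the test acting as a safety net.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestParseVersion)
outcome = unittest.TextTestRunner(verbosity=0).run(suite)
print(outcome.wasSuccessful())  # True
```

On a CI server, the same test would simply run as part of the build, so a regression in `parse_version` breaks the build rather than silently reaching production.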
Reasons for not using TDD are mainly related to TDD not being directly applicable for many types of code changes, e.g., when developing graphical user interfaces (59%), which triggers the need for other kinds of tools, such as capture-replay tools. Another important reason was TDD’s time-consuming nature (33%). Regarding testing in general, 47% of the respondents allocate between 25% and 49% of their time for testing (Q3.2), and 31% more than 50% of their time. Developers in the WatchDog study [35] estimated spending on average around 50% of their time on automated, codified testing, very closely resembling the estimates in our study. One may wonder how accurate developers’ self-estimations are and whether developers who claim to use TDD do indeed apply it. Beller et al. [35] found in their WatchDog study that developers spent a quarter of the work time on testing (instead of half, which they originally estimated), and that, even when they reported that they were using TDD, developers practically never applied it strictly [35]. A similar observational study on developers’ testing habits could identify whether and how these findings apply in our given context. Anecdotal evidence from another context (not at ING NL) suggests that some developers were referring to acceptance testing with the Framework for Integrated Testing (FIT) [36] as TDD, but meant Behavior-Driven Development (BDD) [37]. Generally, our survey answers suggest that quality assurance through testing is a crucial concern at ING NL. A significant amount of manual work is required for TDD in particular and testing in general. Automated tool support, including test case generation, might help further reduce it. When asking a specific question on automation of test generation (Q3.8, Q3.9), 17% of the respondents indicated they use some techniques and tools to automate test case generation.
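A lightweight form of automated test-case generation is to generate random inputs and check properties that every output must satisfy. The sketch below is our own illustration (the function under test and all names are hypothetical, not from the survey):

```python
import random


def normalize_iban_spacing(iban: str) -> str:
    """Function under test (illustrative): strip spaces and uppercase."""
    return iban.replace(" ", "").upper()


def generate_random_input(rng: random.Random) -> str:
    """Generate a random IBAN-like string, spaces included."""
    chars = "abcdefghijklmnopqrstuvwxyz0123456789 "
    return "".join(rng.choice(chars) for _ in range(rng.randint(1, 30)))


def run_generated_tests(n: int = 200, seed: int = 42) -> int:
    """Run n generated test cases; return how many were executed."""
    rng = random.Random(seed)  # fixed seed keeps the run reproducible
    for _ in range(n):
        out = normalize_iban_spacing(generate_random_input(rng))
        # Properties every output must satisfy:
        assert " " not in out            # no spaces survive
        assert out == out.upper()        # output is uppercase
        # Normalization must be idempotent.
        assert normalize_iban_spacing(out) == out
    return n


print(run_generated_tests())  # 200
```

Dedicated tools go further (coverage-guided input generation, shrinking of failing inputs), but the core idea – machine-generated inputs checked against stated properties – is the same.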
A factor that highlights the cost of testing, and suggests that TDD may indeed be followed, is the set of answers to the question about continuously updating test suites for every change (which is in line with the idea of CI). Most of the respondents claim they almost always (58%) or often (28%) update tests when changing production code (if necessary). **Testing strategies and criteria.** We found that developers relatively seldom make use of specific testing strategies such as black box testing (Q3.5): 52% of the respondents say they do not use any strategy. As regards black box testing, only 20% and 19% use equivalence class testing and category partition [38] criteria, respectively. Regarding white box testing, the main criteria being used (Q3.15) are statement coverage (94%), branch coverage (84%), multiple condition coverage (68%), and in some cases path coverage (42%). Most of the respondents picked multiple options, indicating that, depending on the feature under test, they choose whichever strategy is most suitable. Overall, about statement coverage (Q3.12), 84% of the respondents indicated they try to achieve a coverage level of at least 80%. Other than that, as shown in Fig. 10, developers rely on a number of different metrics, mostly the number of failed/passed/blocked test cases (77%) but, for example, also metrics related to how well test cases cover user stories (27%). For unit testing purposes, test cases are often written using (Q3.6) requirements for black box testing (78% of respondents) and source code for white box testing (80%). Only 24% of respondents rely on models. As for integration testing (Q3.7), code is less used (43%) while developers mainly rely on module interfaces (66%).

E. RQ4: How is Continuous Integration performed?

The first question we asked (Q4.1) was about the use of testing in private builds before opening a pull request. As one can expect, results indicate how the use of CI changes the promotion management policies one may adopt.
While in principle [39] one can be tempted to promote code as long as it compiles, with CI developers are encouraged to perform some tests (e.g., unit testing) in the private builds. Indeed, 97% of the respondents indicated they actually do it, while only 3% let the CI perform all tests when builds are performed. In case of build-breaking changes (Q4.2), 96% of the developers confirmed that they interrupt their implementation activities and focus on fixing the build. To minimize conflicts, the majority of respondents (62%) create a feature branch and merge it later into the master branch, even if only 22% of them perform a daily merge (Q4.3). Regarding the frequency of pushing changes to the master branch (Q4.4), results indicate that 60% of developers push changes whenever a small piece of a task is completed, while 30% do it only when a whole task is completed. Only a few respondents (10%) push changes more than once a week.

V. THREATS TO VALIDITY

Threats to construct validity concern the relationship between theory and observation. In a survey, such threats may mainly occur because respondents could interpret a question differently from how it was conceived, possibly producing misleading results. For example, when answering Q3.1, and as explained in Section IV-D, it is possible that developers believe they are applying TDD, while this is not the case. Whenever possible, the quantitative findings obtained with the survey were confirmed by the observations made by one of the authors, who observed the ING NL development process for three months. Possibly, the most suitable way of complementing the survey would have been a follow-up live interview or a longitudinal study, which is planned as future work. Threats to internal validity concern factors that could have influenced our results. One such factor could be evaluation apprehension [40].
For example, answers to Q2.9 indicated that deadline pressure is not a major cause of poor implementation choices. Another threat is related to the survey return rate. We have shown that the overall return rate is quite high (85%), and generally higher than that of other surveys conducted in the area of software engineering. Threats to external validity concern the generalization of our findings. The obtained findings are intentionally confined to the specific case of ING NL, and may or may not generalize to other organizations, even within the same domain. In some cases, e.g., for the use of code reviews, we have shown how our results confirm what has been seen in other organizations [26].

VI. RELATED WORK

In recent years, researchers have conducted different studies on the adoption of CI and CD in industry and open source. **Experience reports.** Laukkanen et al. [41] interviewed 27 developers at Ericsson R&D to understand their perception of CI. They observed that developers face many technical and social challenges when adopting CI, such as the test infrastructure. An industrial experience report from Kim et al. [42] details a CI setup at the package level, rather than at the source code line level, hence increasing the responsibility of package maintainers. Ståhl and Bosch [11] conducted a literature review on CI practices and found that different software development projects use different CI implementations because of several contextual factors such as size, longevity, budget, competences, organizational structure, or geographical distribution. This suggests that contradicting elements in the results of our survey when compared to other studies can possibly be explained by variations in context. **Build failures.** A challenge in CI is dealing with build failures, which might negatively impact developers’ productivity. Thus, researchers have investigated the most common causes of these failures.
For example, Miller [8] at Microsoft reported that, for the Service Factory system, build failures are mainly due to compilation failures, failing tests, static analysis tool issues, and server failures. Seo et al. [43] at Google found that most failures are due to dependency-related issues between software components. In contrast, Beller et al. [5] analyzed build failures due to test executions. In particular, they found that testing is an important part of CI and is also the most frequent reason for build failures. **Benefits of CI practices.** Other researchers have investigated the effect of CI on code quality and developers’ productivity. For example, Miller [8] reported that for the Service Factory system the CI cost was about 40% of the cost of an alternative (non-CI) process achieving the same level of quality. Deshpande and Riehle [44] analyzed commit data from open source projects and found that, differently from industrial development, in open source the adoption of CI has not yet influenced development and integration practices. However, Vasilescu et al. [45] mined GitHub projects and found that CI makes teams more productive and improves the likelihood of pull request merges, without sacrificing the projects’ quality. **Tools and techniques.** Brandtner et al. [46] focus on improving common CI practices; in particular, they developed a platform, namely SQA-Mashup, which dynamically integrates data from various CI tools and tailors the information for developers. In other work, Brandtner et al. [47] propose a rule-based approach to automatically profile stakeholders based on their activities in version control systems and issue tracking platforms. Elbaum et al. [48] presented regression test selection techniques to make continuous integration processes more cost-effective.
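Elbaum et al.'s techniques [48] are considerably more sophisticated, but the underlying idea of regression test selection can be illustrated naively: given recorded per-test coverage, select only the tests whose coverage touches the changed files. All data below is made up for illustration:

```python
def select_tests(coverage_map: dict, changed_files: set) -> list:
    """Naive regression test selection: keep a test if its recorded
    coverage intersects the set of files changed by the commit."""
    return sorted(
        test for test, covered in coverage_map.items()
        if covered & changed_files
    )


# Illustrative coverage data: which files each test executes.
coverage_map = {
    "test_payment": {"payment.py", "account.py"},
    "test_login":   {"auth.py"},
    "test_report":  {"report.py", "account.py"},
}

print(select_tests(coverage_map, {"account.py"}))
# ['test_payment', 'test_report']
```

Real implementations must also handle stale coverage data and non-code changes (e.g., configuration), which is where most of the engineering effort lies.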
While the studies described above focused on the CD experience itself or on introducing new tools and techniques, our survey conducted in ING NL focuses more on the development practices within the CD pipeline, with a particular emphasis on how DevOps engineers manage technical debt and perform testing.

VII. CONCLUSIONS

This paper reported the results of a survey – conducted with 152 developers of a large financial organization (ING Netherlands) – about their use of Continuous Delivery. The survey featured questions about (i) the development process and task management, (ii) managing technical debt, (iii) testing, and (iv) Continuous Integration activities. The main findings of the survey suggest that:

• While refactoring is properly scheduled, it is often performed together with other development activities – contrary to both common wisdom and what Fowler stated [25] – as it is considered part of a user story’s effort; this prevents the release of poorly maintainable source code.

• Respondents tend to “self-admit” technical debt when writing source code, in order to be able to fix it when possible. Instead, they reject the hypothesis that such smells are introduced because of deadline pressure. Then, they use both code reviews and automated tools to identify and refactor code smells.

• The majority of developers mention they use TDD, although we do not know whether they are strictly applying TDD. At the same time, quality assurance in the form of (manual) testing requires a significant portion of the allocated time for a sprint.

• The use of a Continuous Integration infrastructure encourages developers to test their changes using private builds, and to give very high priority to fixing build breakages.

In conclusion, our survey-based study shows how practices such as TDD or the identification and refactoring of bad smells (with the help of automated tools) are put into practice in a large organization such as ING NL, sometimes confirming common beliefs, sometimes contradicting them.
This study requires replication in other organizations, and needs to be complemented with other studies, e.g., case studies, controlled experiments, and longitudinal field studies, in which developers' activities can be closely observed to gain a better understanding of their behavior when working within a CD pipeline. ACKNOWLEDGMENTS The authors would like to thank all the study participants, as well as all developers from ING NL who provided valuable input for the planning of this study. REFERENCES
Abstract: In this paper, we propose a static analysis technique for detecting several recently discovered classes of application vulnerabilities, such as persistent cross-site scripting (XSS), SQL injection, and HTTP splitting attacks. These vulnerabilities stem from unchecked input, which is widely recognized as the most common source of security vulnerabilities in applications. We propose a static analysis approach based on a scalable and precise points-to analysis. In our approach, user-provided specifications of vulnerabilities are automatically translated into static analyzers, and the analysis finds all vulnerabilities matching a specification within the statically analyzed portion of the program. The results of our static analysis are presented to the user for assessment in an auditing interface integrated within Eclipse, a popular Java development environment. Our static analysis found security vulnerabilities in large open-source applications, as well as in widely used J2EE libraries. Keywords: Software Development, Security Vulnerabilities, Static Analysis, Dynamic Analysis, Web Attacks, Context-Sensitive Pointer Analysis. 1. INTRODUCTION The security of Java applications has become increasingly important in the past decade. Many Web-based enterprise applications deal with sensitive financial and medical data, and breaches or downtime can cause enormous financial damage, so it is essential to protect these applications from attackers. Much past work focused on protecting against problems caused by the unsafe nature of C, such as buffer overruns and format string vulnerabilities [1, 2, 3]. However, in recent years, J2EE has emerged as the platform of choice for building large, complex Web-based systems, in part because of language safety features that prohibit direct memory access and eliminate problems such as buffer overruns.
J2EE has also promoted the adoption of Java as a language for implementing e-commerce systems such as banking and Web applications. A typical Web application accepts input from the user's browser and interacts with a back-end database to serve user requests; J2EE libraries make these common tasks easy to code. Still, despite the Java language's safety, it is possible to make logical programming errors that lead to vulnerabilities such as SQL injection [4, 5, 6] and cross-site scripting attacks [7, 8, 9]. A simple coding mistake can leave a Web application vulnerable to unauthorized data access, unauthorized updates or deletion of data, and application crashes leading to denial-of-service attacks. A. Sources of Vulnerabilities Among the vulnerabilities identified in Web applications, problems caused by unchecked input are recognized as being the most common [11]. To exploit unchecked input, an attacker needs to accomplish two things: Inject malicious data into the Web application. Common methods include: • URL manipulation: use specially crafted parameters submitted to the Web application as part of the URL. • Hidden field manipulation: set hidden fields of HTML forms in Web pages to malicious values. • HTTP header tampering: manipulate parts of the HTTP requests sent to the application. • Cookie poisoning: place malicious data in cookies, small files sent to Web-based applications. • Parameter tampering: pass specially crafted malicious values in fields of HTML forms. Manipulate the application using the malicious data. Common methods include: • SQL injection: pass input containing SQL commands to a database server for execution. • Cross-site scripting (XSS): exploit applications that output unchecked input verbatim to trick the user into executing malicious scripts.
• HTTP response splitting: exploit applications that output input verbatim to perform Web page defacements or Web cache poisoning attacks. • Path traversal: exploit unchecked user input to control which files are accessed on the server. • **Command injection**: exploit user input to execute shell commands. **B. Code Reviews for Security** Many of the attacks described in the previous section can be detected through code reviews. Code reviews identify potential vulnerabilities before an application is run. Indeed, most Web application development methodologies recommend a security audit or assessment as a separate development phase, after testing and before deployment [10, 11]. Security audits, while recognized as one of the most effective defense mechanisms [12], are time-consuming and expensive, and are therefore performed infrequently. Security auditing requires expertise that most developers do not have, so audits are often carried out by external security consultants, adding to the cost. Moreover, since new security errors are often introduced as old ones are fixed, double audits (reviewing the code twice) are highly recommended. This situation calls for better tools that help developers avoid introducing vulnerabilities during development. **1.1. Static Analysis** This paper proposes a tool based on static analysis for finding vulnerabilities caused by unchecked input. Users of the tool can describe vulnerability patterns of interest concisely in PQL [13], an easy-to-use program query language with Java-like syntax. Our tool, as shown in Figure 1, applies user-specified queries to Java bytecode and finds all potential matches statically.
The results of the analysis are integrated into Eclipse, a popular open-source Java development environment [14], making potential vulnerabilities easy to examine and fix as part of the development process. The advantage of static analysis is that it can find all potential security violations without executing the application. Working at the bytecode level avoids the need for the source code to be available. A distinguishing feature of our tool is that it is built on a precise, context-sensitive points-to analysis that has been shown to scale to large applications [15]. This combination of scalability and precision allows our analysis to find all vulnerabilities matching a specification within the portion of the program that is analyzed statically. In contrast, previous practical tools are typically unsound [16, 17]. Lacking a precise analysis, these tools would find too many potential errors, so they report only the subset of faults that are likely to be real problems. As a consequence, they can miss important vulnerabilities in the code. ![Figure 1: Framework of the static analysis](image) **1.2. Paper Organization** This paper is organized as follows. Section 2 provides background on security vulnerabilities in Java Web applications. Section 3 discusses related work. Section 4 presents our static analysis technique and the refinements that increase its precision and coverage. Section 5 reports experimental findings, and Section 6 concludes. **2. LITERATURE REVIEW** In this paper we focus on a variety of security vulnerabilities in Java applications that are caused by unchecked input. Recent reports include SQL injections in Oracle products [18] and cross-site scripting (XSS) vulnerabilities in Firefox [19].
© 2013, IJARCSSE All Rights Reserved. Saravana et al., International Journal of Advanced Research in Computer Science and Software Engineering 3(9), September 2013, pp. 1020-1030.
According to a survey conducted by the Open Web Application Security Project [11], unvalidated input is the number-one security problem in Web applications. **2.1. SQL Injection** SQL injections are caused by unchecked user input being passed to a back-end database for execution [4, 5, 6, 20, 21, 22]. The attacker may embed SQL commands into the data he sends to the application, leading to unintended actions being performed on the back-end database. When exploited, a SQL injection may cause unauthorized access to sensitive data, or updates or deletions in the database. The code fragment below (Example 1) obtains a user name UName by calling Req.getParameter("EName") and uses it to construct a query to be passed to a database for execution (Connection.execute(Query)). This seemingly innocent piece of code may allow an attacker to gain access to unauthorized information: if an attacker has full control of string UName obtained from the HTTP request, he can, for example, set it to ' OR 1 = 1; --. Since two dashes denote comments in the Oracle dialect of SQL, the WHERE clause of the query effectively becomes the tautology name = '' OR 1 = 1. This allows the attacker to bypass the name check and gain access to all user records in the database. SQL injection is only one of the vulnerabilities that can be expressed as tainted object propagation problems. In this formulation, the input variable UName is considered tainted; if a tainted object (the source, or any other object derived from it) is passed as an argument to Connection.execute (the sink), a vulnerability is reported.
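To make the attack concrete, here is a minimal, self-contained sketch (our own illustration, not the paper's tool; `SqlInjectionDemo` and `buildQuery` are hypothetical names) that shows how the crafted value rewrites the WHERE clause when the query is built by string concatenation:

```java
// Minimal illustration of how unchecked input rewrites a SQL query string.
// No database is involved -- we only inspect the constructed query text.
public class SqlInjectionDemo {

    // Mimics the vulnerable pattern: the user-supplied name is pasted
    // directly between the quotes of the WHERE clause.
    static String buildQuery(String uName) {
        return "SELECT * FROM Users WHERE name = '" + uName + "'";
    }

    public static void main(String[] args) {
        // Benign input: the query selects a single user.
        System.out.println(buildQuery("alice"));
        // SELECT * FROM Users WHERE name = 'alice'

        // Malicious input from the text: the WHERE clause becomes the
        // tautology  name = '' OR 1 = 1  and "--" comments out the rest.
        System.out.println(buildQuery("' OR 1 = 1; --"));
        // SELECT * FROM Users WHERE name = '' OR 1 = 1; --'

        // The safe alternative keeps data out of the SQL text entirely:
        // a PreparedStatement template such as
        //   SELECT * FROM Users WHERE name = ?
        // is precompiled, so the parameter never becomes executable SQL.
    }
}
```

Note how the attacker's single quote closes the string literal early; everything after it is interpreted as SQL rather than as data.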
An attack typically consists of two parts: - injecting malicious data into the application, and - using that data to manipulate the application. Example 1: A SQL injection vulnerability is shown below: ```
HttpServletRequest Req = ...;
String UName = Req.getParameter("EName");
Connection Connection = ...;
String Query = "SELECT * FROM Users WHERE name = '" + UName + "'";
Connection.execute(Query);
``` 2.2. Injecting Malicious Data Protecting Web applications against unchecked input vulnerabilities is hard because applications can obtain data from the client in a variety of different ways. One must check all sources of user-controlled data, such as HTTP headers, form parameters, and cookie values, systematically. Although frequently used, client-side filtering of malicious values is not an effective defense strategy. 2.2.1. Parameter Tampering The most common way for a Web application to accept parameters is through HTML forms. When a form is submitted, its parameters are sent as part of an HTTP request. An attacker can easily tamper with the parameters passed to a Web application by entering maliciously crafted values into text fields of HTML forms. 2.2.2. URL Tampering For HTML forms that are submitted using the HTTP GET method, form parameters and their values appear as part of the URL that is accessed after the form is submitted. An attacker may directly modify the URL string, embed malicious data in it, and then access the resulting URL to submit the malicious data to the application. Example 2: Consider a Web page at a bank site that allows an authenticated user to select one of her accounts from a list and debit $500,000 from it.
When the submit button is pressed in the Web browser, the following URL is requested:
```
http://....../account?AccountNumber=532089143&Debit_Amount=500000
```
However, if no additional precautions are taken by the Web application receiving this request, accessing the URL below may in fact credit the account with $5,000,000, because the debit amount is negative:
```
http://....../account?AccountNumber=532089143&Debit_Amount=-5000000
```
2.2.3. Hidden Field Manipulation Since HTTP is stateless, many Web applications use hidden fields to emulate persistence. Hidden fields are just form fields made invisible to the end user. Example 3: Consider an order form that includes a hidden field to store the price of the items in the shopping cart:
```
<input type="hidden" name="rate" value="30.00">
```
A typical Web site with many forms, such as an online store, will likely rely on hidden fields to transfer state information between pages. Unlike regular fields, hidden fields cannot be modified directly by typing values into an HTML form. However, since a hidden field is part of the page source, saving the page, editing the hidden field's value, and reloading the page will cause the Web application to receive the newly updated value of the hidden field. 2.2.4. HTTP Header Tampering HTTP headers typically remain invisible to the user and are handled only by the browser and the Web application. However, some Web applications do process these headers, and attackers can inject malicious data into applications through them. An example is the Referer header, which contains the URL indicating where the request came from. This field is commonly trusted by the Web application, but can be easily forged by an attacker.
It is possible to manipulate the Referer header value sent in an error page or during a redirect in order to mount XSS or HTTP response splitting attacks. 2.2.5. Cookie Poisoning Cookie poisoning attacks consist of modifying a cookie, a small file accessible to Web applications that is stored on the user's computer [23]. Many Web applications use cookies to store data such as user login/password pairs and user identifiers. This data is often created and stored on the user's computer after the initial interaction with the Web application, such as visiting the application's login page. Cookie poisoning is a variation of header tampering: malicious input can be passed into applications through values stored in cookies. Because cookies are supposedly invisible to the user, cookie poisoning is often more dangerous in practice than other forms of parameter or header tampering. 2.2.6. Non-Web Input Sources Malicious data can also be passed in as command-line parameters. This problem is less important, as typically only administrators are allowed to execute components of Web-based applications directly from the command line. 2.3. Exploiting Unchecked Input Once malicious data is injected into an application, an attacker may use one of several techniques to take advantage of it. 2.3.1. SQL Injection When exploited, a SQL injection may cause a variety of consequences, from leaking the structure of the back-end database to adding new users or sending confidential data to the attacker. Many SQL injections can be prevented relatively easily through the use of better APIs: J2EE provides the PreparedStatement class, which allows specifying a SQL statement template with '?' placeholders indicating statement parameters.
Prepared SQL statements are precompiled, and expanded parameters never become part of the executable SQL. However, not using, or improperly using, prepared statements still leaves plenty of room for errors. 2.3.2. Cross-Site Scripting (XSS) Vulnerabilities Cross-site scripting occurs when dynamically generated Web pages display input that has not been properly validated [7, 24, 8, 9]. An attacker may embed malicious JavaScript code into dynamically generated pages of trusted sites. When executed on the machine of a user who views the page, these scripts may hijack the user's account credentials, change user settings, steal cookies, or insert unwanted content into the Web page. 2.3.3. HTTP Response Splitting HTTP response splitting is a general technique that enables various new attacks, including Web cache poisoning, cross-user defacement, sensitive page hijacking, as well as XSS [25]. By supplying unexpected line-break (CR and LF) characters, an attacker can cause two HTTP responses to be generated for one maliciously constructed HTTP request. The second HTTP response may be erroneously matched with the next HTTP request. By controlling the second response, an attacker can cause a variety of problems, such as duplicating or defacing pages on a caching proxy server. Since a proxy cache is typically shared by many users, this makes the effects of defacing a page, or creating a spoofed page to collect user data, even more devastating. For HTTP splitting to be possible, the application must include unchecked input as part of the response headers sent back to the client. 2.3.4. Path Traversal Path traversal vulnerabilities allow a hacker to access or control files outside of the intended file access path.
Path traversal attacks are typically carried out via unchecked URL input parameters, cookies, and HTTP headers. Many Java Web applications use files to maintain an ad-hoc database and to store application resources such as visual themes, images, and so on. If an attacker gains control over the specification of these file paths, he may be able to read or delete files containing sensitive data, or mount a denial-of-service attack by attempting to write to read-only files. Using Java security policies allows the developer to restrict access to the file system. 3. REVIEW OF STATIC ANALYSIS APPROACHES In this section, we discuss penetration testing and runtime monitoring, two of the most commonly used approaches for finding vulnerabilities besides manual code reviews. 3.1. Penetration Testing Most current practical solutions for detecting Web application security problems fall into the realm of penetration testing [26, 27, 28, 29, 30]. Penetration testing involves attempting to exploit vulnerabilities in a Web application, or to crash it, by coming up with a set of suitably malicious input values [31]. A penetration test can typically expose only a small sample of all possible security risks in a system, without identifying the parts of the system that have not been adequately tested. Usually, there are no guidelines indicating which checks to run and which inputs to try. In most cases this approach is not effective, and significant knowledge of the program is needed to find application-level security faults successfully. 3.2. Runtime Monitoring A variety of both freeware and commercial dynamic monitoring tools for assessing Web application security are available.
Interception proxies intercept HTTP and HTTPS traffic between the client and the server, so that data, including cookies and form fields, can be examined, modified, and resubmitted to the application [32, 33]. Commercial application-level firewalls, available from Watchfire, Imperva, and other companies, take this idea further by building a model of valid interactions between the user and the application and warning about violations of this model. Some application-level firewalls are based on signatures that guard against known types of attacks. The whitelisting approach specifies what the allowable inputs are; however, maintaining the whitelisting rules is difficult. In contrast, our technique can prevent security faults before they have a chance to manifest themselves. 3.3. Static Analysis Approaches A good overview of static analysis techniques applied to security problems is given in [34]. Simple lexical approaches employed by scanning tools use a set of predefined patterns to identify potentially dangerous areas of a program [35]. A few projects use more complex analyses to find errors in C and C++ code [16, 17]. Although capable of addressing taint-style problems, these tools rely on an unsound treatment of pointers and may therefore miss certain faults. One project applies combined unsound static and dynamic analysis in the context of analyzing PHP code [36], and has successfully been used to find several SQL injection and XSS vulnerabilities in PHP programs. An analysis approach based on type qualifiers has been shown to be effective in finding security faults in C, for the problems of detecting format string violations and user/kernel pointer bugs [37, 2].
Context sensitivity significantly decreases the rate of false positives obtained with this technique; however, it is unclear how scalable the context-sensitive approach is. Static analysis has also been applied to the SQL statements constructed in Java code that may lead to SQL injection vulnerabilities [38, 39]. That work analyzes the strings that make up SQL statements to check for possible type violations and tautologies. The approach assumes that a flow graph describing how string values can propagate through the program has been constructed a priori from points-to analysis results. However, since precise pointer information is essential for constructing a precise flow graph, it is unclear whether this technique can achieve the scalability and precision needed to detect faults in large systems. 4. METHODOLOGY In this section, we present a static analysis that addresses the tainted object propagation problem. 4.1. Tainted Object Propagation We start by defining the terminology that was informally introduced in Example 1. We denote an access path as a sequence of field accesses, array index operations, or method calls separated by dots. For instance, the result of applying access path a.p to variable v is v.a.p. We denote the empty access path by ε; array indexing operations are indicated by [ ]. A tainted object propagation problem consists of a set of source descriptors, sink descriptors, and derivation descriptors: Source descriptors of the form ⟨m, n, p⟩ specify ways in which user-provided data can enter the program. They consist of a source method m, an argument number n, and an access path p to be applied to argument n to obtain the user-provided input. We use argument number -1 to denote the return value of a method call. Sink descriptors of the form ⟨m, n, p⟩ specify unsafe ways in which data may be used in the program.
They consist of a sink method m, an argument number n, and an access path p applied to that argument. Derivation descriptors of the form ⟨m, ns, ps, nd, pd⟩ specify how data propagates between objects in the program. They consist of a derivation method m, a source object given by argument number ns and access path ps, and a destination object given by argument number nd and access path pd. Such a descriptor states that, at a call to method m, the object obtained by applying pd to argument nd is derived from the object obtained by applying ps to argument ns. In the absence of derived objects, to detect potential vulnerabilities we would only need to know whether a source object is used at a sink. Derivation descriptors are needed to capture the semantics of strings in Java. Since Strings are immutable Java objects, string manipulation routines such as concatenation create brand-new String objects, whose contents are based on the original String objects. Derivation descriptors describe the behavior of string manipulation routines, so that taint can be explicitly propagated among the String objects. Most Java programs use the built-in String libraries and can therefore share the same standard set of derivation descriptors. However, some Web applications use alternative string encodings such as Unicode, UTF-8, and URL encoding. If encoding and decoding routines propagate taint and are implemented using native method calls or character-level string manipulation, they also need to be specified with derivation descriptors. Sanitization routines that validate input are often implemented using character-level string manipulation as well; since taint does not propagate through such routines, they should not be included in the list of derivation descriptors.
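The descriptor machinery above can be mimicked with a toy data model. The following sketch is our own simplification (class and method names are hypothetical, and it works on an explicit sequence of events rather than on points-to results, as the real analysis does): sources mark objects tainted, derivation steps propagate the taint, and sinks report a violation when handed a tainted object.

```java
import java.util.HashSet;
import java.util.Set;

// Toy model of tainted object propagation driven by descriptors.
public class TaintModel {

    // Objects are identified by simple string names; tainted ones are
    // collected in a set.
    final Set<String> tainted = new HashSet<>();

    // Source descriptor <m, -1, eps>: the return value of m is tainted.
    void source(String returned) {
        tainted.add(returned);
    }

    // Derivation descriptor <m, ns, ps, nd, pd>: the destination object
    // is derived from the source object, so taint flows across the call.
    void derive(String from, String to) {
        if (tainted.contains(from)) {
            tainted.add(to);
        }
    }

    // Sink descriptor <m, n, p>: using a tainted object here is a violation.
    boolean sink(String arg) {
        return tainted.contains(arg);
    }

    public static void main(String[] args) {
        TaintModel m = new TaintModel();
        // String Argument = Request.getParameter("UName");  -- source
        m.source("Argument");
        // Buffer1.append(Argument);  -- derivation into the buffer
        m.derive("Argument", "Buffer1");
        // String Query = Buffer1.toString();  -- derivation into Query
        m.derive("Buffer1", "Query");
        // Connection.executeQuery(Query);  -- sink: violation reported
        System.out.println(m.sink("Query"));    // true
        // A string never derived from a source stays clean.
        System.out.println(m.sink("Constant")); // false
    }
}
```

Note that a sanitization routine would simply be omitted from the derivation steps, so taint would not flow through it, matching the remark above.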
It is possible to avoid the need for manual specification by using a static analysis that determines the relationship between the strings accepted by and returned from low-level string manipulation routines. However, such an analysis would need to be performed not just on the Java bytecode, but on all the relevant native methods as well. **Example 4:** We can express the problem of detecting parameter-tampering attacks that result in a SQL injection as follows. The source descriptor for obtaining parameters from an HTTP request is ⟨Request.getParameter(String), -1, ε⟩. The sink descriptor for SQL query execution is ⟨Connection.executeQuery(String), 1, ε⟩. To allow the use of string concatenation in the construction of query strings, we use the derivation descriptors ⟨StringBuffer.append(String), 1, ε, -1, ε⟩ and ⟨StringBuffer.toString(), 0, ε, -1, ε⟩. Due to space restrictions, we show only a few descriptors here; more details about the descriptors appear with our experiments. ### 4.2. Specification Completeness Obtaining a complete specification for a tainted object propagation problem is an important issue. If a specification is incomplete, important errors will be missed even if we use a sound analysis that finds all vulnerabilities matching the specification. To create an initial list of source and sink descriptors for the vulnerabilities in our study, we used the documentation of the relevant J2EE APIs. Since it is relatively easy to miss relevant descriptors in the specification, we used several techniques to make our problem specifications more complete.
For example, to discover some of the missing source methods, we instrumented the application server to find the places where application code is invoked. We also used a static analysis to identify tainted objects that have no other objects derived from them, and examined the methods into which these objects are passed. In our experience, some of these methods turned out to be obscure derivation and sink methods missing from our initial specification, which we subsequently added. ### 4.3. Static Analysis Our approach is to use a sound static analysis to find all potential violations matching a vulnerability specification given by its source, sink, and derivation descriptors. To find security violations statically, it is necessary to know what objects these descriptors may refer to, a general problem known as pointer or points-to analysis. #### 4.3.1. Role of Points-to Information To illustrate the need for points-to information, we consider the task of auditing a piece of Java code for SQL injections caused by parameter tampering. **Example 5:** In the code below, string Argument is tainted because it is returned from the source method getParameter. So is Buffer1, as it is derived from Argument in the call to append. Finally, string Query is passed to the sink method executeQuery. ```java
HttpServletRequest Request = ...;
String Argument = Request.getParameter("UName");
StringBuffer Buffer1;
StringBuffer Buffer2;
...
Buffer1.append(Argument);
String Query = Buffer2.toString();
Connection.executeQuery(Query);
``` Unless we know that variables Buffer1 and Buffer2 can never refer to the same object, we have to conservatively assume that they might. Since Buffer1 is tainted, variable Query may then also refer to a tainted object.
Consequently, a conservative tool that lacks additional information about pointers will flag the call to executeQuery as potentially unsafe. An unbounded number of objects can be allocated by a program at runtime, so, to compute a finite answer, the static analysis approximates runtime objects with a finite set of static object names. A common approximation scheme is to name an object after its allocation site, which is the line of code that allocates the object. #### 4.3.2. Finding Violations Statically Points-to information allows us to find security violations statically. Points-to analysis results are represented by the relation \(\text{pointsto}(v, a)\), where \(v\) is a program variable and \(a\) is an allocation site in the program. A static security violation is a sequence of heap allocation sites \(a_1 \ldots a_k\) such that:

- There exists a variable \(v_1\) with \(\text{pointsto}(v_1, a_1)\), where \(v_1\) corresponds to applying access path \(p\) to argument \(n\) of a call to method \(m\) for a source descriptor \(\langle m, n, p \rangle\).
- There exists a variable \(v_k\) with \(\text{pointsto}(v_k, a_k)\), where \(v_k\) corresponds to applying access path \(p\) to argument \(n\) of a call to method \(m\) for a sink descriptor \(\langle m, n, p \rangle\).
- For each \(i\), \(1 \leq i < k\),
\[ \text{pointsto}(v_i, a_i) \land \text{pointsto}(v_{i+1}, a_{i+1}), \]
where variable \(v_i\) corresponds to applying \(p_s\) to argument \(n_s\) and \(v_{i+1}\) corresponds to applying \(p_d\) to argument \(n_d\) in a call to method \(m\) for a derivation descriptor \(\langle m, n_s, p_s, n_d, p_d \rangle\).

Our static analysis is based on the context-sensitive Java points-to analysis developed by Whaley and Lam [15]. Since Java supports dynamic class loading, and classes can be created on the fly and invoked reflectively, we can discover vulnerabilities only in the code available to the static analysis.
For such cases, we use a simple analysis that handles common uses of reflection to increase the coverage of the analyzed call graph [40]. #### 4.3.3. Role of Pointer Analysis Precision Pointer analysis has been the subject of much compiler research over the last two decades. Since determining which heap objects a given program variable may point to during program execution is undecidable, sound analyses compute conservative approximations of the answer. Previous points-to approaches typically trade precision for scalability, ranging from highly scalable but imprecise techniques [39] to precise approaches that have not been shown to scale [39]. In the absence of precise information about pointers, a sound tool would conclude that many objects are tainted and hence report many false positives. Consequently, many practical tools take an unsound approach to pointers, assuming that pointers are unaliased unless proven otherwise [16, 17]. Such an approach, however, can miss important vulnerabilities. Having precise points-to information can significantly decrease the number of false positives. Context sensitivity refers to the ability of an analysis to keep information from different calling contexts of a method separate, and is known to be an important feature contributing to precision. **Example 6:** The class Datum, shown below, acts as a wrapper for a URL string. The code constructs two Datum objects and invokes getUrl on both objects. A context-insensitive analysis would merge the information for the two invocations of getUrl. The variable this, which is treated as argument 0 of the invocation, may point to either object, so this.url may point to either the string returned by getParameter or the constant "http://localhost/". As a result, both s1 and s2 would be considered tainted if we rely on context-insensitive points-to analysis. With a context-sensitive analysis, however, only s1 is considered tainted.
While many points-to analysis approaches exist, until recently we did not have a scalable analysis that gives a conservative yet precise answer. The context-sensitive, inclusion-based points-to analysis by Whaley and Lam is both precise and scalable [15].

```java
class Datum {
    String url;
    Datum(String url) { this.url = url; }
    String getUrl() { return this.url; }
}

String passedUrl = request.getParameter("...");
Datum ds1 = new Datum(passedUrl);
String localUrl = "http://localhost/";
Datum ds2 = new Datum(localUrl);
String s1 = ds1.getUrl();
String s2 = ds2.getUrl();
```

### 4.4. Handling of Containers Containers such as hash maps, vectors, and lists are a common source of imprecision in the pointer analysis algorithm described above. The imprecision is due to the fact that objects are typically stored in a data structure allocated inside the container class definition. As a result, the analysis cannot statically distinguish between objects stored in different containers. **Example 7:** The abbreviated Vector class allocates an array called table, so vectors v1 and v2 share the static object name for that array. As a result, the original analysis concludes that the String object referred to by s2, retrieved from vector v2, may be the same as the String object s1 placed in vector v1.
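A hypothetical reconstruction of the pattern Example 7 describes (the abbreviated Vector code itself is not reproduced in this excerpt, so all names below are invented): both containers allocate their backing array at the same line inside the class, so an allocation-site naming scheme gives both arrays the same static name and cannot tell the containers' contents apart.

```java
// Both SimpleVector instances allocate `table` at the same source line, so a
// purely allocation-site-based analysis gives both arrays one static name.
public class SimpleVector {
    private Object[] table = new Object[10]; // the single shared allocation site
    private int size = 0;

    void add(Object o) { table[size++] = o; }
    Object get(int i)  { return table[i]; }

    public static void main(String[] args) {
        SimpleVector v1 = new SimpleVector();
        SimpleVector v2 = new SimpleVector();
        v1.add("tainted");
        v2.add("safe");
        // At runtime the contents are distinct...
        System.out.println(v2.get(0)); // prints: safe
        // ...but statically, both stores target the one `table` site, so the
        // objects retrieved from v1 and v2 are conflated by the base analysis.
    }
}
```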
### 4.5. Handling of String Routines Another set of methods that requires better object naming is the Java string manipulation routines. Methods such as String.toUpperCase() allocate String objects that are subsequently returned. With the default object naming scheme, all the allocated strings are considered tainted if such a method is ever invoked on a tainted string. We alleviate this problem by giving distinct names to the results returned by string manipulation routines at different call sites. We currently apply this object naming improvement to the standard Java collections only. ## 5. STATIC ANALYSIS RESULTS In this section we summarize the experiments we performed and describe the security violations we found. We start by describing some representative vulnerabilities found by our analysis, and then analyze the influence of analysis features on precision. ### 5.1. Vulnerabilities Found The static analysis described in this paper reports a number of potential security violations in our benchmarks, some of which turn out to be security errors, while others are false positives. Moreover, apart from the errors in webgoat and an HTTP splitting vulnerability in snipsnap [40], none of these security errors had been reported before.
#### 5.1.1. Validating the Errors Found Not all security errors found by static analysis or code reviews are necessarily exploitable in practice. The error may not correspond to a path that can be taken dynamically, or it may not be possible to construct meaningful malicious input. Exploits may also be ruled out by the specific configuration of the application; but configurations may change over time, potentially making exploits possible. For example, a SQL injection that may not work on one database may become exploitable when the application is deployed with a database management system that does not perform sufficient input checking. Moreover, virtually all static errors we found can be fixed easily by changing a few lines of Java source code, so there is usually no reason not to fix them in practice. Once we ran our analysis, we manually inspected all the errors it reported to make sure they represent security errors. Since our knowledge of the applications was not sufficient to determine whether the errors we found were exploitable, to gain additional assurance, we reported the errors to the program maintainers. We reported to the maintainers only those errors found in the application code itself, rather than in general-purpose libraries over which the maintainers had no control. Practically all the errors we reported to program maintainers were confirmed, resulting in more than a dozen program fixes. Since webgoat is an artificial application designed to contain bugs, we did not report the errors we found in it; instead, we dynamically confirmed some of the statically detected errors by running exploits. Since our analysis does not examine sanitization checks, it may not recognize that a piece of code has validated its input; consequently, some of the reported vulnerabilities may turn out to be false positives.
Nevertheless, our analysis shows all the steps involved in propagating taint from a source to a sink, thus allowing the user to check whether the vulnerabilities found are exploitable. Many Web-based applications perform some form of input validation. Nevertheless, as in the case of the vulnerabilities we found in snipsnap, it is common for some checks to be missed. It is surprising that our analysis did not produce any false warnings due to the absence of sanitization-check analysis, even though several of the applications we examined include checks on user input. Security errors in blojsom identified by our analysis deserve special mention. The user-provided input was in fact sanitized, but the validation checks were too lax, leaving room for exploits. Since the sanitization routine in blojsom was implemented using string operations, as opposed to direct character manipulation, our analysis detected the flow of taint from the routine's input to its output. To demonstrate the vulnerability to the application maintainer, we created an exploit that bypassed all the checks in the validation routine, thus making path traversal attacks possible. #### 5.1.2. Classification of Errors This section presents a classification of all the errors we found, as shown in Figure 2. It must be noted that the number of sources and sinks for each of these applications is relatively large, which suggests that security auditing these applications is time-consuming, since the time a manual security code review takes is roughly proportional to the number of sources and sinks that need to be considered. Overall, parameter tampering was the most common way to inject malicious data, and HTTP splitting was the most widespread exploitation technique.
Many HTTP splitting vulnerabilities are due to an insecure coding idiom in which the application redirects the user's browser to a page whose URL is user-provided.

![Figure 2: Classification of errors.](image-url)

Most of the vulnerabilities we found are in application code, as opposed to libraries. While errors in application code may result from simple programming mistakes by developers unaware of security issues, one would expect library code to be generally better tested and more secure. Errors in libraries expose all applications using the library to attack. In spite of this, we managed to find two attack vectors in libraries: one in hibernate, a commonly used Java library, and another in the J2EE implementation. #### 5.1.3. SQL Injection in hibernate We start by describing an attack vector found in hibernate, an open-source object persistence library commonly used in Java applications as a lightweight back-end database interface. Hibernate provides the functionality of saving program data structures to disk and loading them at a later time. It also allows applications to search through the data stored in the hibernate database. We managed to find an attack vector in code related to the search functionality in hibernate. The implementation of method Session.find retrieves objects from the hibernate database by passing its input string argument through a chain of calls to a SQL execute statement. As a result, all calls to Session.find with unsafe data, such as the two errors we found in personalblog, may suffer from SQL injection, as shown in Figure 3. A few other public methods, such as iterate and delete, likewise turned out to be attack vectors.
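To illustrate why a find-style API of this kind is dangerous, the following toy sketch (not hibernate's actual code; every name here is invented) splices its string argument directly into a query, so any caller that passes unsanitized input becomes an injection vector.

```java
// Toy model of string-concatenated query construction. An input containing a
// quote character escapes the intended string literal and rewrites the query.
public class FindSketch {
    static String buildQuery(String userInput) {
        return "select * from posts where title = '" + userInput + "'";
    }

    public static void main(String[] args) {
        // benign call: the input stays inside the literal
        System.out.println(buildQuery("hello"));
        // attacker-controlled input escapes the literal and adds a tautology
        System.out.println(buildQuery("x' or '1'='1"));
        // prints: select * from posts where title = 'x' or '1'='1'
    }
}
```

Parameterized queries, which keep data out of the query text entirely, avoid this class of error.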
Our findings highlight the importance of securing commonly used software components in order to protect their users. #### 5.1.4. Cross-Site Tracing Attacks Analysis of several other applications exposed a previously unknown vulnerability in core J2EE libraries, which are used by thousands of Java applications. This vulnerability relates to the TRACE method defined in the HTTP protocol. TRACE is used to echo the contents of an HTTP request back to the client for debugging purposes. However, the contents of user-provided headers are sent back verbatim, thus enabling cross-site scripting attacks. In fact, this variation of cross-site scripting, caused by a weakness in the HTTP protocol specification, was discovered earlier, while the fact that it existed in J2EE was not previously known. Since this behavior is mandated by the HTTP protocol, there is no easy way to address this problem at the source level. Common recommendations for avoiding cross-site tracing include disabling TRACE functionality on the server or disabling client-side scripting. ### 5.2. Analysis Features and False Positives The variation of our analysis that uses both context sensitivity and improved object naming achieves very precise results, as measured by the number of false positives. To analyze the significance of each analysis feature, we examined the number of false positives as well as the number of tainted objects reported by each variation of the analysis. Like false positives, tainted objects provide a valuable metric of analysis precision: as the analysis becomes more precise, the number of objects believed to be tainted decreases. Context sensitivity combined with better object naming achieves a very low number of false positives. For snipsnap, the number of false positives was reduced substantially compared to the context-insensitive analysis variation with no naming improvements.
Similarly, excluding the small benchmark jboard, the most precise variation on average reported fewer tainted objects than the least precise one. To achieve a low false-positive rate, both context sensitivity and better object naming are essential. The number of false positives remains high for most benchmarks when only one of these analysis features is used. One way to understand the importance of context sensitivity is that the right choice of object names in pointer analysis allows context sensitivity to yield precise results. Although it is widely accepted in the compiler community that special handling of containers is necessary for precision, improved object naming alone is generally not sufficient to eliminate all the false positives. The false positives reported by the most precise variation of our analysis were located in snipsnap and were caused by insufficient precision of the default allocation-site-based object naming scheme. The default naming caused an allocation site in snipsnap to be conservatively considered tainted, because a tainted object could propagate to that allocation site. The allocation site in question is located within StringWriter.toString(), a JDK method similar to String.toUpperCase() that returns a tainted String only if the underlying StringWriter is constructed from a tainted string. Our analysis conservatively concluded that the return result of this method may be tainted, causing a vulnerability to be reported where none can occur at runtime. We should mention that all the false positives in snipsnap are eliminated by generating a new object name at each call site of StringWriter.toString(), which is accomplished with a one-line change to the pointer analysis specification.
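The per-call-site refinement described above amounts to switching the naming function used for such methods. A minimal sketch of the two naming schemes, with invented names and call-site labels:

```java
// Default scheme: one name for the allocation site inside the method, merging
// all callers. Refined scheme: the call site is appended to the name, so taint
// at one call site no longer pollutes the results of every other call site.
import java.util.HashSet;
import java.util.List;

public class ObjectNaming {
    static String defaultName(String method) {
        return method + "#alloc";                 // same name for all callers
    }

    static String perCallSiteName(String method, String callSite) {
        return method + "#alloc@" + callSite;     // one name per call site
    }

    public static void main(String[] args) {
        var coarse = new HashSet<>(List.of(
            defaultName("StringWriter.toString"),
            defaultName("StringWriter.toString")));
        System.out.println(coarse.size());        // 1: both calls conflated

        var fine = new HashSet<>(List.of(
            perCallSiteName("StringWriter.toString", "L10"),
            perCallSiteName("StringWriter.toString", "L20")));
        System.out.println(fine.size());          // 2: calls kept distinct
    }
}
```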
## 6. CONCLUSIONS In this paper we showed how a general class of security errors in Java applications can be formulated as instances of the general tainted object propagation problem, which consists of finding all sink objects derivable from source objects via a given set of derivation rules. We developed a precise and scalable analysis for this problem based on a precise context-sensitive pointer alias analysis, and described extensions to the handling of strings and containers that further improve precision. Our approach finds all vulnerabilities matching the specification within the statically analyzed code. Note, however, that errors may be missed if the user-provided specification is incomplete. We addressed a variety of widespread vulnerabilities, including HTTP splitting attacks, SQL injection, cross-site scripting (XSS), and other categories of vulnerabilities, as tainted object propagation problems. Our experimental results showed that our analysis is an effective practical tool for finding security vulnerabilities. Most of the security errors we reported were confirmed as exploitable vulnerabilities by their maintainers, resulting in more than a dozen code fixes. REFERENCES [15] J. Whaley and M. S. Lam. Cloning-based context-sensitive pointer alias analysis using binary decision diagrams. Proceedings of the ACM SIGPLAN 2004 Conference on Programming Language Design and Implementation, 2004. [16] W. R. Bush. A static analyzer for finding dynamic programming errors. Software - Practice and Experience (SPE).
{"Source-Url": "http://ijarcsse.com/Before_August_2017/docs/papers/Volume_3/9_September2013/V3I9-0349.pdf", "len_cl100k_base": 8942, "olmocr-version": "0.1.49", "pdf-total-pages": 11, "total-fallback-pages": 0, "total-input-tokens": 43558, "total-output-tokens": 11459, "length": "2e13", "weborganizer": {"__label__adult": 0.0004112720489501953, "__label__art_design": 0.00033855438232421875, "__label__crime_law": 0.0013408660888671875, "__label__education_jobs": 0.000507354736328125, "__label__entertainment": 6.300210952758789e-05, "__label__fashion_beauty": 0.00016188621520996094, "__label__finance_business": 0.00019121170043945312, "__label__food_dining": 0.0002777576446533203, "__label__games": 0.0008711814880371094, "__label__hardware": 0.0013408660888671875, "__label__health": 0.0005025863647460938, "__label__history": 0.00021719932556152344, "__label__home_hobbies": 0.00010269880294799803, "__label__industrial": 0.0003991127014160156, "__label__literature": 0.00022327899932861328, "__label__politics": 0.00023674964904785156, "__label__religion": 0.0003418922424316406, "__label__science_tech": 0.0263214111328125, "__label__social_life": 7.641315460205078e-05, "__label__software": 0.01186370849609375, "__label__software_dev": 0.95361328125, "__label__sports_fitness": 0.0002732276916503906, "__label__transportation": 0.00035953521728515625, "__label__travel": 0.00015652179718017578}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 53151, 0.04778]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 53151, 0.62539]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 53151, 0.8833]], "google_gemma-3-12b-it_contains_pii": [[0, 4163, false], [4163, 8966, null], [8966, 13315, null], [13315, 18570, null], [18570, 24813, null], [24813, 30005, null], [30005, 35224, null], [35224, 40632, null], [40632, 45475, null], [45475, 50491, null], [50491, 
53151, null]], "google_gemma-3-12b-it_is_public_document": [[0, 4163, true], [4163, 8966, null], [8966, 13315, null], [13315, 18570, null], [18570, 24813, null], [24813, 30005, null], [30005, 35224, null], [35224, 40632, null], [40632, 45475, null], [45475, 50491, null], [50491, 53151, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 53151, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 53151, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 53151, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 53151, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 53151, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 53151, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 53151, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 53151, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 53151, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 53151, null]], "pdf_page_numbers": [[0, 4163, 1], [4163, 8966, 2], [8966, 13315, 3], [13315, 18570, 4], [18570, 24813, 5], [24813, 30005, 6], [30005, 35224, 7], [35224, 40632, 8], [40632, 45475, 9], [45475, 50491, 10], [50491, 53151, 11]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 53151, 0.0]]}
olmocr_science_pdfs
2024-11-26
2024-11-26
6ceceaedc375fb833ae6d026e80bf648b2bdb3e8
Abstract—Passive learning techniques infer graph models of the behavior of a system from large trace logs. The research community has been dedicating great effort to making passive learning techniques more scalable and ready for use by industry. However, there is still a lack of empirical knowledge on the usefulness and applicability of such techniques in large-scale real systems. To that aim, we conducted action research over nine months in a large payment company. Throughout this period, we iteratively applied passive learning techniques with the goal of revealing useful information to the development team. In each iteration, we discussed the findings and challenges with the company's expert developer, and we improved our tools accordingly. In this paper, we present evidence that passive learning can indeed support development teams, a set of lessons we learned during our experience, a proposed guide to facilitate its adoption, and current research challenges. Keywords—passive learning, experience report, dfasat. I. INTRODUCTION The use of log data to analyze the real behavior of a software system in production can provide useful insights to software development teams, such as conformance checking or anomaly detection. However, performing such analysis on a large scale can be challenging: the log entries that are of interest (i.e., log entries pointing towards some anomaly in the system) may be hidden among all the logs that are of less interest. Even if one finds a log entry pointing towards an anomaly, it might still be unclear how often this anomaly occurs, and whether there are related problems. Clearly, learning the behavior of a system by analyzing each execution trace by hand is simply impossible in large systems. Thus, the use of an automated approach becomes a necessity. The goal of passive learning techniques is to infer graph models of the behavior of a system from large trace logs [34].
Such graph models could then be inspected for different reasons: model checking, error finding, et cetera. Researchers have been working on approaches that would enable passive learning to be used even at large scale, such as selecting representative subsets of log data from large log files [9], and reducing the size of the generated graph by merging similar states [6], [5], [14]. Indeed, a common belief is that these techniques can help companies to identify, among other things, whether the behavior of a system in production is correct, or whether the behavior in a test environment matches what happens in production. However, there is a lack of empirical knowledge on how such techniques would behave inside the software development life cycle of a large company and how successful such techniques would be in finding errors. To that aim, we conducted a research project at a large company in the payment industry. Adyen is a technology company that provides businesses with a single solution to accept payments anywhere in the world. The only provider of a modern end-to-end infrastructure connecting directly to more than 250 payment methods, the company delivers frictionless payments across online, mobile, and in-store channels. With offices all around the world, the company serves more than 4,500 businesses [2]. Each payment produces log entries in multiple different systems. Clearly, processing this volume of consumer transactions results in a huge amount of log data. We exclusively focus on logs from the point-of-sale (POS) solution, which is developed in-house. In a nutshell, the solution consists of an embedded device (hardware manufactured by another vendor) used in-store by merchants to safely collect the shopper's credit card and to perform the transaction together with the credit card scheme. The logs from these devices indicate what happened on the device during a transaction; they are submitted to Adyen's servers after the transaction is completed.
In general, those logs consist of 15 to 25 lines that all contain a timestamp and an event. Examples of information that developers usually extract from these logs are “why a transaction was refused for a certain merchant”, and “why some merchant performs many cancellations in a day”. We spent nine months performing action research [25], [10] and introducing passive learning techniques in a large-scale system. In this paper, we present our experience report. More specifically, we first present five real examples where passive learning was crucial in providing the team with errors and useful insights. Thereafter, we present the lessons that we learned throughout this period. We then finish the paper by providing a guide for companies to apply passive learning. The main contributions of this paper are: 1) Concrete evidence of the usefulness of passive learning techniques in a large industry system (Section IV). 2) Lessons learned derived after nine months of action research focused on applying passive learning in a company (Section VI). 3) A guide to facilitate the adoption of passive learning by other companies (Section VII). 4) A list of research challenges that should be tackled by researchers (Section VIII). II. BACKGROUND: PASSIVE LEARNING Passive learning tools infer state machines from log data [34]. From an input sample of event sequences (logs), they construct an automaton model (state machine) that can produce those sequences. One of the main challenges faced by such techniques is to produce the smallest possible state machine. However, the problem is NP-hard [13], and inapproximable [22]. Thus, several techniques have been proposed for solving it in practice. The techniques that we use in this research make use of two different techniques: a greedy state-merging method [21], [15] and the k-tails algorithm [6]. State-merging methods start by representing the input sample as a large tree-shaped automaton, called a prefix tree. 
Every state in this model corresponds to a unique input prefix from the input sample. They then iteratively combine (merge) state pairs by moving all input and output transitions from the second state to the first, and subsequently merging the targets of non-deterministic choices. Only consistent states are merged. When the data is labeled, consistency means that the resulting automaton still assigns the correct label to every sequence from the input sample as in the Blue-Fringe algorithm [15]. When the data is unlabeled, the consistency check is typically replaced with a statistical test whether the probabilities of future event sequences are similar in every pair of merged states as in ALERGIA [8]. State-merging ends when no more consistent merges can be performed. The result is a small automaton that is consistent with the input sample. The k-tails algorithm also looks at states and future sequences, but only up to a given depth k. Thus, for k = 1, it requires only that the immediate events occurring after the combined (merged) states result in the same label (or have similar distributions). For k = 2, this holds for futures of length 2, et cetera. The obvious advantage of limiting the consistency test to fixed length futures is that the learning problem can be solved more quickly. The pioneering work that introduced this algorithm even contained a formulation of the problem in constraint programming [6], solving the problem exactly for small k. Decades later a similar formulation was proposed for the full problem (with k = ∞) as a satisfiability problem [14]. The disadvantage of using a small k however is that the resulting automaton can be an overgeneralization of the (software) process that generated the data. We briefly list our selection of three techniques that come with open source implementations: Synoptic, InvariMint, and DFASAT. There are more tools available that do similar jobs, but that do not fit our datasets. 
For instance, CSight (short for concurrent insight) [4] analyzes logs from a distributed system. As we do not deal with distributed systems, in the sense that all logs are individual and sequential, we do not discuss that tool. Walkinshaw et al. [35] introduce MINT (an EFSM inference tool). MINT considers data incorporated in execution traces during inference, as events might be related to certain measurements. The events in the transaction logs in our dataset do not incorporate such data values, thus we do not investigate this tool. Ohmann et al. [20] do something similar for resource usage in Perfume, such as memory usage or execution times. However, due to the nature of the company, we simply cannot make use of any web-based tool for analyzing their log data. Synoptic. Beschastnikh et al. [5] present Synoptic, a tool that aims to make state machine inference from log files easy, mainly for system debugging purposes. The tool starts by parsing the execution traces out of those files by using (user-specified) regular expressions that indicate the format of the log; this results in a trace graph. After mining invariants from this graph, it first merges all nodes, and then splits them again so that the newly obtained graph satisfies the invariants. Thereafter, it tries to find the smallest possible automaton by again merging nodes until the invariants are violated. Synoptic models the log events as the nodes of this automaton. Although slightly unusual for state machines, this makes little difference from a representational point of view, since the log labels can occur in multiple states. The learning algorithm is non-traditional in the sense that it learns from invariants instead of the data sample directly. These invariants are geared towards finding patterns that occur frequently in software development. InvariMint. The authors of Synoptic also created InvariMint [3].
InvariMint aims at improving the understandability of inference algorithms by describing an approach to model inference algorithms declaratively. Unlike Synoptic, InvariMint models log events on edges, and the nodes are empty (a model with hidden state). It ships with a few different algorithms following the declarative approach, among which Synoptic’s algorithm and k-tails [6]. DFASAT. DFASAT is a novel tool based on the work of Heule and Verwer [14]. At its core lies a greedy merging algorithm, based on Blue-Fringe [15]. DFASAT takes traces (in a fixed format) as input, which contain the different events as well as the trace type (i.e., accepting or rejecting). It outputs an edge-labeled automaton, just like InvariMint. A key technique used by DFASAT is hiding infrequent paths (i.e., ignoring rare traces and directing them to sink nodes). In contrast, Synoptic and InvariMint use and show all flows in the automaton. In our experience, hiding these paths results in models that are easier to understand. One key property of DFASAT is its flexibility to model different heuristics and model types. We used the overlap driven heuristic, which is based on ALERGIA [8] and was used in DFASAT during the Stamina competition [34]. **III. RESEARCH METHODOLOGY** The goal of this study is to evaluate the application of passive learning techniques in a large system, and to use this knowledge both to support companies that would benefit from such techniques and to provide researchers with a future research agenda on the topic. To that aim, we spent nine months introducing passive learning techniques, learning from the results, discussing with Adyen’s expert developers, and re-iterating. We position this study as action research [25], [10]. According to Reason and Bradbury [25], action research is a participatory and democratic process that seeks to bring together action and reflection, theory and practice, in participation with others, in the pursuit of practical solutions.
Over these nine months, we performed several work iterations. In each iteration, we discussed possible improvements to the use of passive learning at the company with the expert developer, worked on the improvements, and analyzed the new results. On many occasions, different members of the software development team (composed of 20 developers) were also involved in the discussion. We adapted Lewin’s [16] three steps on how to perform action research, which we describe in the following: 1) **Planning.** We had a weekly meeting with the company’s expert developer, who has 15 years of experience as a software developer in this field. These meetings were divided into two phases. In the first phase, we presented the results of our previous iteration, both from the company’s point of view (i.e., presenting what we found in their data) and from the research point of view (i.e., presenting what we learned about passive learning). Commonly, we presented large printed graphs generated by the tools and asked the expert to help us interpret them. Thereafter, we discussed the next steps, again from the two perspectives. From the company’s perspective, in many cases, we focused on better understanding a possible problem we had found; from the research perspective, we focused on how to make the techniques better and more accurate. 2) **Action.** Most of the time, executing what was planned meant improving or tuning the existing passive learning tools. In particular, after a few experiments, we chose DFASAT as the tool to customize. All our changes and improvements are available as open source [36]. 3) **Results.** In our case, results mean revealing some new, useful information to the development team; something that was hidden in the log data before. In many cases, the developers responsible for the system investigated our findings more deeply in their source code bases, and improved their systems accordingly.
We made sure to take notes about our meetings with the expert as well as about all our lessons learned. At the end of our journey, we grouped everything we learned and observed into the five main sections of this paper: in Section IV, we present the real cases in which passive learning provided new information to the team; in Section V, we present the improvements we made to one of the current state-of-the-art passive learning tools; in Section VI, we discuss the lessons we learned while applying passive learning; in Section VII, we present a guide to facilitate the adoption of passive learning; and finally, in Section VIII, we discuss research challenges that should be tackled by researchers. **IV. EVIDENCE OF THE USEFULNESS OF PASSIVE LEARNING** In this section, we present five different real case examples in which the use of passive learning was instrumental in revealing useful information to the software development team: 1) Discovering a bug in the testing environment (Section IV-A). 2) Finding non-conformant behavior when compared to the official specification (Section IV-B). 3) Revealing undesired behavior in the system (Section IV-C). 4) Comparing the same state machine in different contexts, e.g., payments over different card brands (Section IV-D). 5) Identifying slow transitions in the system (Section IV-E). **A. Unexpected Behavior Caught In A Testing Environment** We applied passive learning on logs from the testing process, where a new firmware version was being tested. Specifically, these tests focused on swipe transactions, where PIN entry (as opposed to providing a signature) is required. The resulting graph is shown in Figure 1. From this graph, one of our test automation engineers was able to discover a bug that he had not detected earlier: of the 11 transactions that reach PIN_ENTERED, only 10 transactions continued to PRINT_RECEIPT. One transaction proceeded with ASK_SIGNATURE instead; this is indicated by the red arrow in the graph.
This should not have happened for this selection of test cases. **Impact.** One of the developers confirmed that this was indeed a bug in the firmware of the terminal. The issue was fixed in the next firmware version. **B. (Non-)conformance with the Specification** The company expert wanted to verify the behavior of the system against a specification. To investigate the usability of passive learning for this, we took the EMV specification [11] (developed by EMVCo, a payment authority), and compared the inferred graph to the related part of the specification. Essentially, this specification consists of a list of steps the system needs to execute sequentially. During the (manual) comparison, we noticed a different order of events in the logs, i.e., two steps were switched. Provided that the desired order of events is known, this is easy to see in the graph: if the correct order is first A, then B, there are edges on which B occurs before A instead. We learned from the specification that this reordering was actually allowed, so in this case, it was not a bug. However, it illustrates how passive learning could be used for conformance checking. One might argue that this could also be determined using a single log file. However, passive learning can help to verify that the order is correct in all log files. An important remark here is that, unfortunately, not all eleven steps are listed in the logs. Thus, in order to truly verify a correct order of events, those logs need to be adjusted. **Impact.** Based on the available information, we concluded that the software indeed matches the specification (i.e., all steps are performed in an allowed order). Thus, there was no impact, other than increased confidence in the system. **C. Revealing Undesired Behavior** Certain calls to the online platform should never fail.
However, when we inferred a model from the production logs of one merchant, the company expert immediately learned that some terminals sometimes fail to make such important calls to the platform. This can be identified by analyzing the flow in the graph, as such failures mainly result in declined or cancelled transactions at some point. The relevant cut-out of the graph is shown in Figure 2. Note the blue color of the node that follows `validate_-1`, which shows that from there, most transactions end up in a cancelled state; this in itself can also be an indicator of misbehavior. **Impact.** In a later firmware version, a retry mechanism was added to lower the probability of `validate_-1` (and other connectivity-related issues); for this newer version, we were able to confirm that the frequency indeed decreased significantly. **D. Behavioral Difference Between Two Different Card Types** We took a large transaction log dataset from one merchant on production, with the goal of understanding whether there was different behavior between two card brands. We then generated two state machines, one for each card brand. From the two graphs (subgraphs of both are shown in Figure 3), we could almost immediately discover several differences. For example, if we look at the cancel and error rates for this particular part of both graphs, type B ends up in more than twice as many cancels and errors as type A. There are also behavioral differences (albeit minimal ones), as type B shows an edge that does not exist in type A and vice versa. **Impact.** As the company does not have any direct influence on the card brands, this finding did not have any impact on the firmware. However, the information was shared with the merchant(s) to whom this might be of concern, as an explanation for lower authorization rates. **E. Identify Slow Transitions** Some parts of the system can take more time to execute than others.
In a state machine, this is represented by a slow transition between two states. Highlighting edges with a long duration makes it easier to identify which transitions need more time. This is specifically useful for finding time-related bottlenecks in the system. We identified a few bottlenecks after performing more than 200 benchmark tests on one test robot (repeatedly executing two or three happy-flow test cases). Figure 4 shows the same graph as Figure 2, with the timings added to the edges. We discussed earlier that `validate_-1` is undesired, as it is a potential indicator of failed calls to the platform. By just looking at the timings, one might also conclude that `validate_-1` is unwanted, as an average duration of 22.5 seconds for one step of a payment is simply too long. Ideally, a payment completes as fast as possible, so that the shopper can move on. **Impact.** Those bottlenecks were resolved by the developers in a later version, improving the total time needed for a payment (in some cases, the gain was more than 4 seconds). **V. MODIFICATIONS TO DFASAT** During our 9-month study, we made several modifications to DFASAT that improved its applicability. DFASAT allows the user to create custom heuristics and state machine types by adding a single file to the code and recompiling. We used the overlap driven heuristic as a base class, and added a new heuristic using 150 lines of code, and another 150 for new visualization routines. Our modifications are available as open source; we list the four most influential ones below.

Fig. 2: Example of revealed undesired behavior in the system.

**Adding trace types.** To the best of our knowledge, all tools that infer state machines only have the notion of accepting and rejecting traces (if they even distinguish between the two). However, in a system with multiple final state types, this does not make sense, as there might be multiple final states that are (un)desired. For example, should a final state ‘Cancelled’ be classified as accepting or as rejecting? We therefore treat each final state as its own type.

**New sinks and colors.** Besides extending the consistency checks to take these new types into account, we also implemented new sinks based on the final state types. Whenever a state is reached only by traces of a certain type, it is replaced by the corresponding sink node. In the resulting dot file, we color the sinks differently for the different types, making visual distinction easier. Similarly, we color each node according to the most frequent type. Figures 2, 3, and 4 show the result of these modifications.

**Additional consistency check.** During the first months of our study, we almost continuously learned models in order to fine-tune the parameter settings.
We noticed, however, that DFASAT sometimes performed merges that we consider to be wrong. It can create self-loops when a logged event does not influence the future behavior. For instance, given inputs $abcc$ and $acc$, it may conclude that the behavior after $a$ and $ab$ is similar, and merge the states reached by $a$ and $ab$. This introduces a self-loop with label $b$ on the state reached by $a$. Although this may be correct in theory, visually it gives the wrong impression that multiple $b$ events can occur after an $a$ event. We added a check that avoids creating such loops.

**Adding more data.** We added more information to the dot files: relative frequencies and event durations. The relative frequencies are computed by dividing the number of traces on an edge by the total number of traces in the graph. Using the relative frequency, we modify the width of edges, such that those that occur more often in the data are drawn thicker. Furthermore, during preprocessing we calculate the time needed to take a transition, by subtracting the timestamp of the previous event from that of each event (assuming log lines are stored chronologically). We then compute for each edge how long it takes on average and visualize this in the dot file.

**VI. LESSONS LEARNED**

In this section, we present four lessons we learned during our nine months:

1) Different tools present different results (Section VI-A).
2) Tools need customization before being applied in real settings (Section VI-B).
3) Take the context into consideration (Section VI-C).
4) Developers want to mostly focus on finding bugs (Section VI-D).

We experimented with the three tools on four different log datasets, all originating in production terminals of a large merchant: a relatively small set of 15 minutes, a slightly larger set of one hour, a set of one day, and a large set of one week.
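Before turning to the tool comparison, the per-edge duration computation described under “adding more data” can be sketched as follows. This is a minimal illustration with a hypothetical timestamp format and event names, not DFASAT’s actual preprocessing code:

```python
from datetime import datetime

FMT = "%Y-%m-%d %H:%M:%S"  # assumed timestamp format, purely illustrative

def edge_durations(trace):
    """trace: chronological list of (timestamp_string, event) pairs.
    Returns {(prev_event, event): [durations in seconds]} for one trace."""
    durations = {}
    for (t_prev, e_prev), (t_cur, e_cur) in zip(trace, trace[1:]):
        dt = (datetime.strptime(t_cur, FMT)
              - datetime.strptime(t_prev, FMT)).total_seconds()
        durations.setdefault((e_prev, e_cur), []).append(dt)
    return durations

def average_durations(traces):
    """Average per-edge duration over many traces, as visualized on the edges."""
    merged = {}
    for trace in traces:
        for edge, ds in edge_durations(trace).items():
            merged.setdefault(edge, []).extend(ds)
    return {edge: sum(ds) / len(ds) for edge, ds in merged.items()}
```

The relative edge frequencies are computed analogously, by counting traces per edge and dividing by the total number of traces.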
As the company patches software versions relatively regularly, and we focus on one particular software version (to eliminate inconsistencies in logging between the different versions), one week of logs is sufficient, especially for identifying recent issues. Data that is too specific, such as transaction-specific identifiers, transaction amounts, and card brands, was stripped from each dataset. The numerical details of the datasets are listed in Table I. We consider each dataset both as a full set and as a set of unique traces, thus we have eight datasets in total. As the tools infer behavior from the logs, we do not expect to see significant differences between the unique set and the full set (other than the trace counts on the edges), as the full set only holds more traces describing identical behavior. We are, however, curious how introducing more identical traces affects the performance of the tools. We use the default values, except for the following: for Synoptic, we switch the edge labels to display absolute counts. For InvariMint, we pick $k$-tails with $k = 1$ and enable minimization of intersections. For DFASAT, we pick the overlap driven heuristic with non-randomized greedy preprocessing and state count $t = 10$; we disable the use of the SAT solver. Using the various datasets (transformed into a format that the tools can read), we analyze the performance of the three tools. For each dataset, we run each tool ten times to eliminate fluctuations. **Runtime performance.** The average runtimes of the ten executions are shown in Table II and visualized in Figure 5. We conclude that DFASAT significantly outperforms the other tools in terms of runtime. We conjecture two reasons for this: on the one hand, its heuristic is more efficient and greedy; on the other hand, DFASAT is implemented in C++ as opposed to Java. All reported results include the time it takes to import the dataset and to process the output.
For all tools, this includes the time Graphviz\(^1\) (open source graph visualization software) needs to process and output the resulting graph in PNG format, as Synoptic and InvariMint have this functionality embedded. \(^1\)http://www.graphviz.org **A. Lesson 1: Different tools present different results** As the tools allow for custom configurations that are likely to influence their performance and output, we wanted to understand how each tool behaves on the different datasets. One important finding is that Synoptic was not always able to complete for the largest datasets (all logs for one day and one week) without running out of memory. However, if we only consider the unique logs, Synoptic is able to produce a model without running out of memory. The other tools always complete for all datasets. **Output complexity.** We restrict ourselves to a few graph characteristics, namely the number of nodes ($N$), edges ($E$), and cycles ($C$) a graph contains. Furthermore, we compute the cyclomatic complexity ($CC$), a commonly used complexity metric in computer science, as $E - N + 2P$ [19]. In this case $P = 1$, as the graph is always one connected component. Although complexity is not an unambiguous metric, it does allow us to compare the graphs numerically. Furthermore, a higher graph complexity usually makes understanding such graphs more difficult, whereas we want to analyze the graphs by hand. In Table III and in Figure 6, we show a comparison of the number of nodes and edges for each of the datasets from the different tools. The exact numbers listed in Table III also include the number of cycles and the cyclomatic complexities. From this table, we can draw several interesting observations. For instance, all tools show increasing numbers as the dataset gets larger. Synoptic has a large number of nodes and edges, but InvariMint has twice as many cycles.
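The cyclomatic complexity values in Table III follow directly from the node and edge counts; for a graph with a single connected component ($P = 1$) the computation is one line:

```python
def cyclomatic_complexity(n_nodes, n_edges, n_components=1):
    """McCabe's cyclomatic complexity: CC = E - N + 2P [19]."""
    return n_edges - n_nodes + 2 * n_components

# E.g., Synoptic on the 15-minute set: 45 edges, 35 nodes -> CC = 12.
```

Applying this to the table's rows (e.g., InvariMint on one week: 179 − 51 + 2 = 130) reproduces the listed CC column.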
The results from DFASAT are clearly influenced by the uniqueness of traces, as the numbers are much larger for the sets of all traces. We can relate this to the significance parameters: for example, in a unique set, a certain state might occur only once and might therefore not be considered. However, in the related full set, that same state might occur multiple times, so that it actually is considered and thus influences the graph. **Developer perceptions.** We showed the developers one graph from each of the tools (in varying order), where each graph originates from the full one-hour set presented in Table I. Based on our experience, one hour of logs is enough for these tools to come up with a graph of reasonable size. Besides introducing the developers to the tools, this allows us to perform comparisons between the tools. We asked for their opinion on understandability, possible improvements, and suitable purposes for the tools (i.e., for which purposes the tools can be used). We sent the survey to 10 of our developers. Of the ten respondents, six work as developers on the payment system itself. Two of them develop a .NET library that allows for integration with the system, one similarly develops an iOS integration, and one is the test automation engineer who tests this particular system. On average, each of them has been employed at this company for roughly one year, and seven of them indicate that they use the transaction logs we have used here on a daily basis. We see that DFASAT’s results were considered the easiest for developers to understand. Most of the surveyed developers indicate that Synoptic’s diagrams are too complex/dense. However, they do think Synoptic might become more useful when the dataset is split, or when the graphs are used to zoom in on specific areas. For DFASAT, the sinks raise questions, as they are not clearly defined, but the graphs themselves are much easier to understand.
These can clearly be seen as points for improvement in the tools. Some other recurring remarks concern the visualization itself. As the tools rely on Graphviz, the visualization is rather minimalistic, tends to have crossing arrows, and does not allow for any interaction (such as collapse/expand, highlighting flows, et cetera). Furthermore, multiple developers suggest the introduction of colors, which can help to visually separate groups of traces. Where we decided to switch to absolute numbers in Synoptic, one of the developers suggested the use of relative numbers (for DFASAT), to get a better understanding of how often a certain line is logged. **B. Lesson 2: Tools need customization before being applied in real settings** Based on our experiences, we argue that passive learning can be used for analyzing log data, especially if the system produces a large amount of logs that always follow a similar format. However, our study shows that none of the tools is ready out of the box for large industrial adoption. To reach their full potential, the existing tools probably need to be adjusted to suit the particular use case. This is not necessarily very complicated, but it might require some programming knowledge and some knowledge about the field. Based on our findings, DFASAT is the most promising tool in terms of speed, complexity, and understandability. However, it seems to be focused more on academic interests than on industrial application, making it hard to get the most out of it. On the other hand, Synoptic seems to be the most targeted at industrial applications (it does not require any preprocessing, for example), but for our log data it unfortunately suffers from performance issues, and its outputs become complex as the dataset grows. Regarding data input, Synoptic (and InvariMint) can already be applied to any log format (as long as there is a regular expression that can describe it), but DFASAT requires some preprocessing.
For industrial adoption of such a tool, this preprocessing should be integrated into, or be part of, the toolchain. The fact that DFASAT explicitly allows for customization works in two directions: on the one hand, it allowed us to make the graphs more useful; on the other hand, for industrial use, this is probably undesirable. The source code is not yet well documented, nor does the tool provide clear errors if something fails. Someone without background knowledge in this field probably cannot implement such customizations efficiently. Another important remark to be made is that, from a business perspective, the “happy” path (i.e., the most common flow in the graph) is generally the most important. However, from a development perspective, the uncommon paths are even more important, as those might reveal in which cases the system behaves anomalously, and thus may indicate a bug. This distinction should be kept in mind while using the tools. **C. Lesson 3: Take the context into consideration** One can provide a full set of logs to an inference tool to obtain the “full model”. However, this model might not show certain details, as it is a general overview. For example, if we take the logs from one application with two different configurations, it might be the case that configuration A has no errors in a certain path, whereas configuration B has many errors on the same path. From this overview graph, one can only conclude that ‘errors occur’ in this path, but by applying contextual splitting, it is possible to expose these details. We define contextual splitting as splitting a dataset based on some values residing in the data or related to the data. The key analysis that this contextual split makes possible is comparing paths between different graphs from the same context. For example, is the error frequency in graph A similar to the error frequency in graph B? Or, from certain points in the graph, are the paths similar or is the behavior different?
**Implementation.** During preprocessing, we already strip some information that is too specific. However, some of this information is very suitable for contextual splitting. Examples of such information are different merchants (i.e., companies that use the payment gateway) or different acquirers (i.e., credit card companies or banks). Thus, after selecting the information to split on, our preprocessor provides different files to be analyzed by the passive learner. The outcome is basically a (potentially) smaller graph, which we can then compare with the others. **D. Lesson 4: Developers want to mostly focus on finding bugs** In the same survey, we asked the developers to rank the eight purposes below, as we expect that the tools can (and should) be useful from these perspectives. Furthermore, we want to identify the likelihood of developers using the tools for these purposes.

- To understand the system from a high-level view;
- To understand a specific part of the system;
- To understand the interaction between the user and the system;
- To find unexpected paths (i.e., bugs) in the system;
- To share knowledge about the system’s behavior among developers;
- As documentation about the system;
- To compare different versions of the system;
- To verify the system against a specification.

In Figure 7, we show an overview of the distribution of the listed purposes. From this, we see that one purpose clearly comes out as “most likely”: to find unexpected paths (i.e., bugs) in the system. Most of the developers rank ‘understanding the interaction between the user and the system’ and ‘comparing different versions of the system’ second. According to them, the graphs are apparently not very useful as documentation of the system. As our list is probably not exhaustive, we also asked them to come up with other types of analyses. Their suggestions vary, in the sense that there are no real similarities between the responses.
We therefore selected the three suggestions that differ the most and that we find the most promising: 1) Use these graphs for detecting possible paths that are actually not being taken, i.e., if the system supports specific features, why are those not being used? 2) Relate these graphs to the code paths that are responsible for the different paths. 3) Use this for real-time monitoring of the system, e.g., if the distribution of the taken paths shifts, or if a new path occurs, this might indicate an anomaly. **TABLE III: Complexity comparison between Synoptic, InvariMint, and DFASAT.** The columns list the number of nodes (N), the number of edges (E), the number of cycles (C) and the cyclomatic complexity (CC). <table> <thead> <tr> <th colspan="2" rowspan="2"></th> <th colspan="4">Synoptic</th> <th colspan="4">InvariMint</th> <th colspan="4">DFASAT</th> </tr> <tr> <th>N</th><th>E</th><th>C</th><th>CC</th> <th>N</th><th>E</th><th>C</th><th>CC</th> <th>N</th><th>E</th><th>C</th><th>CC</th> </tr> </thead> <tbody> <tr> <td rowspan="2">15 minutes</td><td>all logs</td> <td>35</td><td>45</td><td>0</td><td>12</td> <td>24</td><td>35</td><td>0</td><td>13</td> <td>27</td><td>30</td><td>0</td><td>5</td> </tr> <tr> <td>unique logs</td> <td>35</td><td>45</td><td>0</td><td>12</td> <td>24</td><td>35</td><td>0</td><td>13</td> <td>4</td><td>5</td><td>2</td><td>3</td> </tr> <tr> <td rowspan="2">one hour</td><td>all logs</td> <td>72</td><td>103</td><td>0</td><td>33</td> <td>31</td><td>62</td><td>2</td><td>33</td> <td>37</td><td>42</td><td>5</td><td>7</td> </tr> <tr> <td>unique logs</td> <td>72</td><td>103</td><td>0</td><td>33</td> <td>31</td><td>62</td><td>2</td><td>33</td> <td>16</td><td>17</td><td>2</td><td>3</td> </tr> <tr> <td rowspan="2">one day</td><td>all logs</td> <td>130</td><td>223</td><td>8</td><td>95</td> <td>42</td><td>119</td><td>9</td><td>79</td> <td>20</td><td>20</td><td>0</td><td>2</td> </tr> <tr> <td>unique logs</td> <td>130</td><td>223</td><td>8</td><td>95</td> <td>42</td><td>119</td><td>9</td><td>79</td> <td>30</td><td>91</td><td>6</td><td>23</td> </tr> <tr> <td rowspan="2">one week</td><td>all logs</td> <td>225</td><td>415</td><td>14</td><td>192</td> <td>51</td><td>179</td><td>28</td><td>130</td> <td>93</td><td>142</td><td>23</td><td>51</td> </tr> <tr> <td>unique logs</td> <td>225</td><td>415</td><td>14</td><td>192</td> <td>51</td><td>179</td><td>28</td><td>130</td> <td>34</td><td>49</td><td>19</td><td>17</td> </tr> </tbody> </table> **VII.
A GUIDE TO ADOPT PASSIVE LEARNING** We applied the passive learning tools on the logs of a single company. The results were positive enough that we conjecture that these tools and techniques can be applied in other domains as well, as long as the system and its logs possess several necessary characteristics and it is possible to follow certain strategies: **The system should provide the full state of a single operation.** In our system, each set of log lines (i.e., every 20 lines) represents a single transaction. Thus, we are able to see the details of a specific transaction, from its beginning to its end. To make these passive learning tools work, it is important that each log file at least follows a similar order of events. If the logs are neither sequential nor consistent, the graphs might be complex or even useless for analysis. **Manipulate/Transform the logs.** Some of the tools require a specific format, whereas others can read logs in any format. Nonetheless, be ready to apply some transformations on the input logs to improve the results of the tools. Strip information that is too specific, such as identifiers. Note that some of the information that gets stripped might be useful for splitting by context; use this to compare different graphs in the same context. If the logs originate from different software versions, splitting by version can be a good starting point. **Identify the different final states in the system.** In general, passive learning tools only support accepting and (in some cases) rejecting traces. However, as we have seen, real systems can have more than just these two state types. Taking all the different types into account adds significant value to the graphs. However, this will probably require some changes in the selected tool; depending on the tool, this can be relatively easy or difficult.
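As an illustration of the stripping and contextual-splitting strategies above, the following sketch groups events per transaction, splits by card brand as the context, and drops the transaction identifiers. The log line layout and field names (`txn`, `brand`, `event`) are hypothetical, not the company's actual format:

```python
import re
from collections import defaultdict

# Hypothetical log line: "<timestamp> txn=<id> brand=<brand> event=<EVENT>"
LINE = re.compile(r"txn=(?P<txn>\S+) brand=(?P<brand>\S+) event=(?P<event>\S+)")

def split_and_strip(lines):
    """Group events per transaction, split by card brand (the context),
    and strip the transaction identifier itself."""
    by_context = defaultdict(lambda: defaultdict(list))
    for line in lines:
        m = LINE.search(line)
        if m:
            by_context[m["brand"]][m["txn"]].append(m["event"])
    # Drop the txn ids: keep only the event sequences per context.
    return {brand: list(traces.values()) for brand, traces in by_context.items()}
```

Each resulting per-context list of event sequences can then be fed to the inference tool separately, yielding one graph per context to compare.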
**Pick the tool that fits the logs and suits the purpose.** There are many different tools, all of which infer in a slightly different way and produce different graphs. For example, if the number of unique events is small, Synoptic might produce useful graphs. Or, when the logs incorporate measurements, MINT [35] could be a good fit. For our logs, DFASAT was the best fit due to its extensibility and its fast performance. **Be ready to implement new features in the tools.** Unfortunately, there are no out-of-the-box tools that work immediately for any case. DFASAT is even designed to require some adjustments in order to produce the best results. Furthermore, keep in mind that most of the tools are developed in an academic setting, and thus in general lack essential documentation. **VIII. RESEARCH CHALLENGES** During our study, we encountered many points where passive learning tools should be improved in order to be adopted by industry. Out of the box, passive learning tools provide the means to learn models from data, but developers often find it hard to interpret and use the learned models. Models with smaller graph visualizations are easier to understand. For instance, DFASAT’s use of sink states is appreciated by developers. However, developers also mentioned that they did not understand the meaning of sink states, making the models again hard to interpret. Why does DFASAT ignore possibly important errors by putting them in low-frequency sink states? This is not easy to explain: it is actually good that the state-merging algorithm does not merge these potentially important error states with other, similar states, as such a merge could lose track of the important error. This (mis)interpretability of passive learning tools highlights an important challenge for future research. Most passive learning tools are developed from a researcher’s perspective, and improvements are usually made to the type of models being learned.
In this paper we show that very simple state machine models (see, e.g., Figure 1) are already very useful in practice. In essence, there is a mismatch between what the tool developers optimize (building better models) and what the developers want (to find bugs). We believe bad models can also be effective at finding bugs when presented in the right way. In our experience, the key open research challenges for tool adoption are: 1) The development of an interactive visualization of learned state machines that is useful for developers to locate bugs. 2) Studying how to integrate what developers know (such as code) with the learned models and algorithms in order to improve understandability. **IX. RELATED WORK** **Process mining.** Similarly to passive learning, process mining [30], [28] is a field that focuses mainly on business processes, and not so much on software systems specifically. In general, process mining uses so-called 'event logs' for this. Significant contributions in this area are implemented in the ProM framework [31], a large tool that contains several plug-ins for performing process mining and related analyses. Process mining has been applied in a variety of sectors and organizations, such as municipalities, hospitals and banks. Van der Aalst et al. [29] did a successful case study on applying process mining at the Dutch National Public Works Department (Rijkswaterstaat). Similarly, Mans et al. [18] applied process mining at a Dutch hospital; their paper focuses on the applicability of process mining in a real scenario. They were able to mine the complex, unstructured process that they had in place. Furthermore, they indicate that they managed to derive understandable models. **Comparing inference techniques.** There is a fair amount of existing work on comparing the algorithms and heuristics used for different inference techniques. For example, Lo and Khoo [17] introduce QUARK, a framework for empirically evaluating automaton-based specification miners.
They use precision (i.e., how exact/correct is the machine?), recall (i.e., how many of the expected flows are represented by the inferred machine?) and probability similarity (i.e., how accurate are the frequencies of the different flows?) as metrics. Walkinshaw et al. [33] show how evaluating the accuracy of the inferred machines using precision and recall makes it easier to indicate whether an inferred machine is under-generalised or over-generalised. Pradel et al. [23] present a framework to evaluate how accurate mined specifications of API usage constraints are. Busany et al. [7] address the scalability problem that different algorithms that infer models from log data can have. They introduce a statistical approach to perform the analysis on a subset of the data, but with some statistical guarantees regarding the validity of the result. **Active learning.** As opposed to passive learning, active learning actively queries the system under learning to obtain its responses to different inputs; the inferred machines are then verified against that system. In recent work, active learning is commonly used for reverse engineering [32], and to verify an implementation against some specification [1], [12]. For example, Ruiter [26] used LearnLib [24] (a tool for active learning) for various EMV-related analysis projects. For one of those projects, they were able to compare the models of two hardware tokens for internet banking, and to discover some undesired behavior in the older (unpatched) device. In addition, Smeenk et al. [27] infer the behavior of an embedded control device in a printer, and discover several deadlocks in that system. **X. CONCLUSION** We report on an industrial study of passive learning tools performed at Adyen. Our study indicates that passive learning is very useful in industry. Most importantly, it can be used to discover bugs and undesired behavior in Adyen's point-of-sale devices.
Although our study resulted in considerable impact on point-of-sale development, we identified several shortcomings in all tested passive learning tools that limit their usability in practice. Most importantly, developers found it hard to fully understand the models returned by the tools. Although we developed modifications to DFASAT that try to overcome some of these shortcomings, they are only a small step towards improving the user experience of passive learning tools. There is much more work to be done: for example, the inclusion of developers' knowledge in the algorithm and its output, links to the code responsible for certain errors to make debugging easier, improved visualizations, et cetera. Based on developer interviews, we identified two key open research challenges for the adoption of passive learning techniques by industry. In the near future, we expect to see more industrial studies of passive learning, hopefully making the gap between the tools and industry smaller and smaller. REFERENCES
Format Unraveled. Richard Bonichon, Pierre Weis. 28èmes Journées Francophones des Langages Applicatifs, Jan 2017, Gourette, France. HAL Id: hal-01503081, https://hal.archives-ouvertes.fr/hal-01503081, submitted on 6 Apr 2017. Abstract Pretty-printing can be described as finding a good-looking solution to typeset data according to a set of formatting conventions. Oppen [6] pioneered the field with an algorithmic solution to pretty-printing, using the notions of boxes and break hints. The Format module is a direct descendant of this work: it is unfortunately often misunderstood or even misused. The first goal of this article is to enhance the available documentation about Format by explaining its basic and advanced features but also its relationship and differences with Oppen's seminal work. The second goal is to investigate the links that Format has with the document-based pretty-printing tradition fostered by the lazy programming community [3,4,9,10]. 1. Introduction Reading and printing data are usual parts of day-to-day programming. As a witness to the truth of this statement, OCaml has two modules concerned with reading data (Pervasives, Scanf) and even more — three — with printing (Pervasives, Printf, Format). Usually, we want to obtain a certain regularity in this output, to have it formatted.
A formatted output can be made to look more or less pretty. The definition of prettiness as a value is a rather philosophical matter [7]; nonetheless this is the goal of pretty-printing. That is, a pretty-printing module should provide functionality to help output structured data in a good-looking way. In OCaml, this can be achieved via the standard library with one of the two modules dedicated to the formatted output of data. On the one hand, there is the Printf module, which can be roughly described as an extended look-alike of the C family of printf functions. On the other hand, there is the Format module. At first sight, it seems rather similar to a fusion of Printf and Pervasives. But it comes with extra advanced capabilities which are often misused or misunderstood. The first goal of this article is to document extensively what the Format module offers, how it works (and why it works that way), and how to use it. Other programming languages, such as Python or Java, often have a format function or class. However, these are usually not akin to OCaml's Format module, and more like Printf. Indeed, their names probably come from the fact that they are based upon format strings. Pretty-printing a la Format seems to be more common in functional languages. The Format module is indeed based on early work done by Derek Oppen [6]. Haskell, for example, has received a good amount of attention, first through the works of Hughes [3] and Wadler [10]. Their work relies on the exploration of algebraic properties of Oppen-like pretty-printing and traditionally defines pretty-printing combinators. This line of work reaches the same level of efficiency as Oppen's algorithm only with Swierstra and Chitil [9]; their functional pearl describes a combinator-based pretty-printing algorithm that has the same space (w.r.t. the line width) and time (w.r.t. the length of the stream) complexities as Oppen's while retaining a lazy functional flavor.
Later, Kiselyov, Peyton-Jones and Sabry [4] show how to program an elegant incremental pretty-printer using yield. The contributions of this paper are the following: • it revisits, complements and extends the standard documentation available for Format; • it explains the differences between Oppen's original algorithm and what Format offers; • it discusses how Format differs from document-based pretty-printers advocated by the algebraic pretty-printing tradition and investigates how it can be made similar to them. 2. A brief history of Format: a quest for abstraction The Format module has a long history of development, starting from the early implementation of Oppen's algorithm in Caml circa 1985 to the full-fledged module we describe in this article. The first implementation arose from the compiler's internal need to print error messages and computed values. So far so good: the pretty-printer was able to decently display types and values on the terminal, when Caml was only available as an interactive system. The first step toward abstraction was to encapsulate the pretty-printer as a module and export it to user land. Then the problem arose of not mixing compiler messages and warnings with the output of the user's program. It was time to add separate printing functions for stderr, stdout and general output channels. The idea of abstracting the low-level output device from the pretty-printing engine was born. To allow parallel pretty-printing to several files and output channels, the entire pretty-printer had to be abstracted. Hence the Format.formatter data structure: a formatter encapsulates a complete pretty-printing engine with all its state and specific parameters into a value that can be manipulated in programs. Around 1990, Caml-Light added basic format strings to properly typecheck the printf function and provide a safe use of format strings.
The introduction of a Format-specific version of the printf family of functions gave rise to the addition of specifications for boxes and break hints management directly into format strings. Polymorphic printing with explicit formatter arguments was then made available via conversion %a (see Section 5). The quest for abstraction went on with semantic tags (Section 6), printing with continuations, and recently output abstraction with symbolic printing (https://github.com/ocaml/ocaml/pull/615). The future is rich with further endeavors (see Section 10). 3. Format basics Format can write on anything that can receive characters, such as strings, buffers, channels or streams. In this section, for the sake of simplicity, we focus on basic primitives writing on the terminal (stdout). Format's basic primitives can be divided into two sets: primitives to print elementary values and primitives for indentation and splitting lines (Sections 3.1 and 3.2). Printing elementary values in Format is similar to printing those values with the fundamental Pervasives module. One can print characters, strings, integers, floats and booleans with print_char, print_string, print_int, print_float and print_bool. The print_newline primitive in Format prints a newline character like the corresponding `Pervasives` function, but its impact on the pretty-printing engine is major and should not be underestimated (see Section 7). ### 3.1. Break hints A break hint is an explicit annotation to tell the pretty-printing engine where it can split the line. A break hint also indicates the amount of spaces to add to the current indentation when splitting the line. Break hints for the pretty-printing engine can be given with the `print_space`, `print_cut` and `print_break` functions. The first function outputs a space break hint: it outputs a typographical space if there is no need to split the line or it splits the line according to the box discipline without adding indentation.
The second one outputs a cut break hint: it does nothing if there is no need to split the line or it splits the line according to the box discipline (no indentation added). The last one outputs a full break hint: it has two parameters `nspaces` and `offset`. It outputs `nspaces` typographical spaces if there is no need to split the line or it splits the line according to the box discipline, adding `offset` spaces to the current indentation value. Those integer parameters can be negative: a negative `nspaces` is treated as 0, while a negative `offset` reduces the indentation of the next line. Note that space and cut break hints are convenient shortcuts for specific full break hints. ### 3.2. Boxes A pretty-printing box, or simply a box, is the fundamental device which delimits a region with a coherent discipline of line-splitting and indentation. There are five line-splitting disciplines corresponding to five types of boxes, with different effects on the output. Those types are h, v, hv, hov and b. h stands for horizontal, v for vertical, hv for horizontal/vertical, hov for horizontal-or-vertical and b for basic. Each box type is respectively opened with `open_hbox: unit -> unit`, `open_vbox: int -> unit`, `open_hvbox: int -> unit`, `open_hovbox: int -> unit` and `open_box: int -> unit`. When lines can be split, boxes have an extra indentation argument that specifies the amount of extra spaces added to the current indentation of the block when splitting lines. Let us now detail these boxes (Figure 1 shows a comparative look at their behaviors). 
```ocaml
let pp_int_list open_box l =
  let rec pp = function
    | [] -> ()
    | [x] -> print_int x
    | x :: xs -> print_int x; print_space (); pp xs
  in
  open_box 0; pp l; close_box ()

let pp_int_list_h = pp_int_list (fun _ -> open_hbox ())
and pp_int_list_v = pp_int_list open_vbox
and pp_int_list_hv = pp_int_list open_hvbox
and pp_int_list_hov = pp_int_list open_hovbox
and pp_int_list_b = pp_int_list open_box;;

set_margin 8
```
Figure 1: Comparing box splitting discipline (margin 8, except margin 10 for h) Horizontal boxes A horizontal box or h-box groups contents to be printed on a single line, thus hiding the column limits of the pretty-printing engine. One idiosyncrasy of h-boxes: if the size of a horizontal box is bigger than the margin size left on the output device, its whole contents is printed on the next line. Vertical boxes A vertical box or v-box groups contents whose elements must each be printed on a separate line. Horizontal/vertical boxes A horizontal/vertical box or hv-box has two mutually exclusive behaviors: if the box fits on a single line, the box is said to be fitting and behaves as a horizontal box; otherwise, the box is said to be non-fitting and behaves as a vertical box. Horizontal-or-vertical boxes A horizontal-or-vertical box or hov-box is a compacting box: it outputs its contents on the same line while there is enough room left on the line. Then, the next break hint splits the line and the output goes on. A text output in a horizontal-or-vertical box with all spaces used as break hints is similar to a left-justified paragraph in a text processor. Basic boxes A basic box or b-box is a compacting box similar to the horizontal-or-vertical box with a different way to handle break hints: if splitting the line reduces the current indentation, a break hint splits the line, even if there is still enough room left on the current line. Comparing compacting boxes: b-box versus hov-box Figure 1 shows that hov-boxes and b-boxes behave the same in simple cases.
However, printing complex material with nested boxes brings out the difference. Figure 2 prints the same list of integers with the same pretty-printing function as Figure 1. In addition, Figure 2 uses a global compacting box to properly pretty-print the list between brackets. Both parts of Figure 2 run the exact same code except for the enclosing box: a hov-box for Figure 2a and a b-box for Figure 2b. The right margin is set to 8, thus there is enough room to print the closing bracket on the second line. In the case of the hov-box, there is no need to split the line at the cut break hint before the closing bracket. In the b-box case, the cut break hint splits the line, since splitting the line reduces the current indentation, and the closing bracket is displayed on a new line. This behavior aligns opening and closing delimiters, thus emphasizing the list structure. This small example is too simple to show the true benefit of the b-box behavior. A more complex example, for instance printing a tuple of lists of records, would be more telling. If such a value is printed within a hov-box, all the closing parentheses, brackets and braces appear at the end of line, possibly all in a row on the last line: the hov-box minimizes the number of lines of the output. This is less readable than using a b-box, which may add extra lines to emphasize the box structure, printing each closing character on a new line, properly indented to match its opening sibling. In short, the b-box visually enhances the structure of the value, which is why a b-box is also known as a structural compacting box.
```
open_hovbox 0;
print_string "[";
pp_int_list_hov l;
print_cut ();
print_string "]";
close_box ();;
```
(a) Printing a list inside a hov-box
```
open_box 0;
print_string "[";
pp_int_list_hov l;
print_cut ();
print_string "]";
close_box ();;
```
(b) Printing a list inside a b-box Figure 2: Behavior comparisons: b-box vs. hov-box (margin 8) 3.3.
Remarks Before concluding this section, we would like to provide a bit of context by discussing the reasons why Format's primitives and behaviors are profoundly different from those of common text processors, and to share some perspective with respect to what has been added to Oppen's algorithm thus far. 3.3.1. Pretty-printing versus text processing The break treatment in Format is the reverse of usual text processing software, where a normal space is breakable and you need to indicate hard (non-breaking) spaces. This salient difference is on purpose, due to the somewhat opposite designs and goals of text processing and pretty-printing software. In text processing, the input is structured via paragraphs, sections and subsections. Paragraphs are free-flowing streams of words separated by spaces and punctuation signs. The job of the text processor is to respect and emphasize section markers and properly split paragraphs; clearly, spaces in paragraphs should default to breakable, while spaces in section titles are certainly unbreakable. The adoption of such spacing conventions leads to almost no breakable annotations for spaces. In text processing, all breakable spaces behave the same: they all output a typographical space or open a new line starting at the margin. Furthermore, splitting a paragraph after one word or the next is not a dramatic decision: a document typeset without following best practice remains perfectly readable and understandable. By contrast, in pretty-printing, the primary input is a computed value of some structured data. The job of the pretty-printer is to help the programmer properly split the lines to respect and emphasize the internal structure of the value. Here, splitting a line and indenting the next one is of utmost importance to highlight this internal structure. Hence, the pretty-printer provides ways to carefully fix the indentation; in particular, Format boxes and break hints carry an argument to indicate the indentation of new lines.
In pretty-printing, break hints fix the indentation of lines, so each break is specific. Furthermore, splitting a line at this break hint or at the next one is a dramatic decision that could wreak havoc on the final document, to the point that it becomes unreadable and difficult or even impossible to understand. This is precisely the case for some programming languages where indentation is significant, such as Python and Haskell. As a final fundamental contrasting difference, the text in text processing is mostly hand-written and contains hand-written spaces, whereas the text in pretty-printing is mostly machine-generated by hand-written programs that compose small pieces of text separated by machine-generated break hints. 3.3.2. Oppen's algorithm Oppen's algorithm [6] is at the core of Format's pretty-printing engine and also the basis for the algebraic studies of the lazy community. This section sums up its main components and insights. In his article, Oppen introduces the notions of box and break hint (called blanks). Inside a box, blanks can be consistent, corresponding to a Format hv-box, or inconsistent, corresponding to a Format hov-box. Each blank has a length and an offset, just as in Format. The algorithm is based on the interplay between two functions: print and scan. The scan function consumes the stream to be pretty-printed, while the print function effectively prints the material: a string is always printed; if a box is opened, its indentation is pushed on a stack; if one is closed, it is popped; if a blank is received, it is printed if it fits on the line, otherwise the line is split and indented according to this blank and the current box (on the top of the stack). The scan function appends tokens to a buffer: to each string and open-box token, it associates its length; to each blank, it associates the length of the blank plus the length of the next block, in order to check whether the coming block can be printed. The core of the algorithm is very similar to Format internals.
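The consistent/inconsistent distinction can be observed directly with Format's hv- and hov-boxes; the following small sketch (ours, not from the article) prints the same material under both disciplines:

```ocaml
(* Sketch: Oppen's consistent blanks correspond to hv-boxes,
   inconsistent blanks to hov-boxes. *)
let () =
  Format.set_margin 10;
  (* hv-box: the contents do not fit on one line, so every hint splits. *)
  Format.printf "@[<hv>aaa@ bbb@ ccc@ ddd@]@.";
  (* hov-box: hints split only when the line is full. *)
  Format.printf "@[<hov>aaa@ bbb@ ccc@ ddd@]@."
```

With margin 10, the hv-box prints each element on its own line, while the hov-box fills lines greedily ("aaa bbb" then "ccc ddd").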
Format retains the same linear complexity as Oppen’s proposal. Furthermore, Format adds the two sub-components of the hv-box, the h-box and the v-box, as well as the b-box. It also supports fully typed format strings, semantic tags and, last but not least, abstraction. 4. Format strings In OCaml, there exists a basic notion of format string value with a corresponding format string type. A format string value is a concise way of specifying a sequence of value arguments, in particular the type and shape of each argument of the sequence. Since OCaml language constructions have to be statically typechecked, arguments of input/output procedures should be specified so that their types can be verified. Format string values are the natural polymorphic way to specify all the arguments of advanced input/output functions: indeed, format string values specify any sequence of values to be read using module `Scanf` or printed using modules `Format` or `Printf`. 4.1. Syntax of format strings The syntax of format strings is identical to the syntax of OCaml basic strings, namely a sequence of characters between double quotes. However, sequence of characters inside format strings must obey a specific and constrained syntax to describe types and shapes of arguments. Following the C tradition, argument specifications are introduced by special marker `%`, followed by a letter giving the type of the argument. For instance, `%i` indicates an integer argument and specifies type `int` for this argument. Still following the C tradition, argument specifications are called conversions. OCaml format strings support specific conversions for basic types, such as `string` with `%s`, `float` with `%f`, `bool` with `%b`, `char` with `%c`, and so on. Apart from types, argument shapes may be specified via several means. 
One can indicate an alternate conversion: for instance all conversions `%d`, `%x`, `%X`, and `%o` specify an integer argument, but each of those conversions fixes a different notation for the integer. Indeed, `%d` prints (or reads) decimal digits, `%x` or `%X` hexadecimal digits, and `%o` octal digits. Similarly, both `%s` and `%S` specify a string value, but conversion `%S` specifies a string delimited with double quotes and using the OCaml lexical conventions to escape characters. Also, one can add optional size and precision specifications by extra characters after the conversion marker (for instance `%4.12g`), as well as padding and alignment annotations, which are absent from the basic functionalities described in Section 3. Format strings can also contain material unrelated to argument specifications: the formatting indications do not specify the type or shape of arguments but the presentation of arguments. The presentation indicates how arguments should appear in a document (i.e. in a sequence of characters). That is, a formatting indication states how to display an argument when printing, or how to read an argument when scanning. In format strings, such a formatting indication is introduced by the special marker `@`, followed by a sequence of letters specifying the indication. Note that formatting indications do not interfere with the typing of format strings. Interpretation of formatting indications may also be module-specific: some formatting indications for reading are not meaningful for printing and vice versa. There is a complete set of formatting indications to drive the Format pretty-printing engine: opening and closing formatting boxes, emitting break hints, even flushing the pretty-printing engine to terminate a pretty-printing routine. For instance "@[" opens a box and "@]" closes the last opened box. Similarly, formatting indication "@ " emits a space break hint and "@," emits a cut break hint.
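As a minimal sketch of these indications (using only the box and hint markers just introduced):

```ocaml
let () =
  (* "@ " prints a typographical space when the line needs no splitting;
     "@," prints nothing in that case. *)
  Format.printf "@[(%d,@ %d)@]@." 1 2;  (* prints (1, 2) *)
  Format.printf "@[(%d,@,%d)@]@." 1 2   (* prints (1,2)  *)
```

With a narrow margin, both hints would instead split the line according to the enclosing box discipline.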
When a formatting indication needs an argument, it has to be enclosed between characters '<' and '>'; for instance, adding a box kind argument to the box opening formatting indication gives "@[<h>", "@[<v>", "@[<hv>", "@[<hov>", and "@[<b>" to open the corresponding boxes (defined in Section 3.2). Figure 3 shows how to reproduce Figure 1 with boxes in format strings. If another additional argument is necessary, simply add it after a space: "@[<v 2>" opens a vertical box with 2 as indentation increment. Similarly, a full break hint is introduced by "@;" and needs two integer arguments: it is written as "@;<1 2>". The last set of characters in format strings are plain characters, that is any character not preceded by the conversion marker % nor by the formatting indication marker @. This is regular text included in a format string to be output or read as verbatim material. Note that markers are considered plain characters if preceded by a % character; write %% or %@ to obtain a plain % or a plain @ character. Do not be confused by the specific usage of format strings: they are first-class citizens of the language. Hence, a format string can be returned as a result, passed as an argument or manipulated as any other value. For instance, the predefined infix operation ^^ implements the concatenation of format strings: fmt1 ^^ fmt2 is equivalent to format string fmt1 followed by format string fmt2. The function Pervasives.string_of_format gives the string representation of any format string. Conversely, Pervasives.format_of_string returns the format string value corresponding to a string with known characters (a string literal). To convert a statically unknown string, for example a string read from a file, to a format string, use Scanf.format_from_string (see Section 4.3). Caveat: pattern matching on format strings is not yet available, but comparing their string representations may help. 4.2.
Typechecking format strings The typing of format strings is specific, complex, and highly polymorphic. Indeed, the general type constructor able to accommodate all format string peculiarities needs 6 different type variables: this holds the record for the most polymorphic datatype of the entire OCaml library. Format strings have a general and highly polymorphic type ('a, 'b, 'c, 'd, 'e, 'f) format6. Let's give more meaningful names to those type variables, renaming them respectively as 'functional_type, 'low_level_device, 'poly_printer_result, 'poly_reader_functional_type, 'poly_reader_result, 'result_type. • 'b ('low_level_device) is the type of the low-level device for the format string: an input device for scanf-like functions and an output device for printf-like functions. • 'f ('result_type) is the result type of the format string: the result type of the receiver for scanf-like functions and the result type of printf-like functions. • 'a ('functional_type) is 'argument_sequence -> 'result_type, where 'argument_sequence is the type of the sequence of arguments to print or of values to read. For the Scanf family of functions, 'functional_type is also the type of the receiver function. • 'c ('poly_printer_result) is the result type of the polymorphic pretty-printers required by %a conversions in the format string (hence a polymorphic pretty-printer printing values of type 't has type 'low_level_device -> 't -> 'poly_printer_result). Conversion %a is detailed in Section 5. • 'd ('poly_reader_functional_type) is 'poly_reader_sequence -> 'poly_reader_result, where 'poly_reader_sequence is the type of the sequence of polymorphic readers required by all the %r conversions in the format string. • 'e ('poly_reader_result) is the result type of 'poly_reader_functional_type (hence a polymorphic reader reading values of type 't has type 'low_level_device -> 't). 4.3.
### 4.3. Typing primitive functions on format strings

The function `string_of_format` maps any format string to its corresponding string representation. Hence, its type scheme is naturally ('a, 'b, 'c, 'd, 'e, 'f) format6 -> string. On the other hand, the type of `format_of_string` is

('a, 'b, 'c, 'd, 'e, 'f) format6 -> ('a, 'b, 'c, 'd, 'e, 'f) format6

This is surprising in more than one way. First, because the input type of the function is not string! Second, because that type is an instance of the type scheme of the identity function (namely, the type of identity restricted to format strings). So, in the first place, how can `format_of_string` be applied to a value of type string when its source type is _ format6? However, the function indeed converts a string to a format string, as in

```
# format_of_string "%d";;
- : (int -> 'a, 'b, 'c, 'd, 'e, 'f) format6 = "%d"
```

There is some black magic at work here. It lies in the typechecking of string constants. In the presence of a string constant expression, the typechecker follows a pragmatic rule: if the expression is expected to be a format string, then its contents are analyzed to discover its format6 type; otherwise, it gets type string. For instance:

```
# ("%d" : _ format6);;
- : (int -> 'a, 'b, 'c, 'd, 'e, 'f) format6 = "%d"
```

On the other hand, if the string constant is bound to an identifier s, then it gets type string and cannot be passed to `format_of_string` anymore:

```
# let s = "%d" in format_of_string s;;
Error: This expression has type string but an expression was expected of type
         ('a, 'b, 'c, 'd, 'e, 'f) format6
```

The documentation clearly states it: `format_of_string` converts a string literal to a format string. In fact, `format_of_string` simply checks that a string constant is a valid format string. If you need to convert any string, not only a literal one, you need `Scanf.format_from_string`, which can read any string and convert it using a format string pattern.
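A small sketch of that conversion: we build the string "%d/%d" at run time (so it is not a literal and cannot be typed as a format string directly), convert it with `Scanf.format_from_string` against a literal pattern, and use the result with `Scanf.sscanf`. The function name `sum_of_pair` is ours, for illustration only.

```ocaml
let sum_of_pair s =
  (* A dynamically built string: the typechecker cannot analyze it. *)
  let dynamic = String.concat "" ["%d"; "/"; "%d"] in
  (* Convert it, checking it against the literal pattern "%d/%d". *)
  let fmt = Scanf.format_from_string dynamic "%d/%d" in
  Scanf.sscanf s fmt (fun a b -> a + b)
```

For example, `sum_of_pair "3/4"` reads the two integers and returns their sum.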
`Scanf.format_from_string` has type string -> ('a, 'b, 'c, 'd, 'e, 'f) format6 -> ('a, 'b, 'c, 'd, 'e, 'f) format6. The first string argument is simply the string to be converted, but the second argument is more intriguing: it is the model of the expected format string result, a static witness for the type of the expected format string result.

```
# let s = "Price = %.2g" in Scanf.format_from_string s "%f";;
- : (float -> 'a, 'b, 'c, 'd, 'e, 'f) format6 = "Price = %.2g"
```

`Scanf.format_from_string` indeed verifies that the given string can be assigned the type of the second argument, the format string pattern.

## 5. Polymorphic printing

*Format*'s killer feature trio is *fprintf*, *formatter*, and *%a*: this is the way to polymorphic and compositional pretty-printing. A *formatter* is the abstraction of a complete pretty-printing engine that can be specialized to various tasks: the low-level output device, the parameters for margins, and the treatment of various semantic aspects of the pretty-printing engine can all be encapsulated into a *formatter*. For instance, use *formatter_of_out_channel* to get a formatter that outputs to a given *out_channel*, or *formatter_of_buffer* to get a formatter that outputs to an extensible string buffer.

A routine with an explicit *formatter* argument is completely generic with respect to the pretty-printing engine and is called a *pretty-printer*. According to its *formatter* argument, the routine can write to *any* low-level output device; more importantly, it can behave according to any high-level pretty-printing abstraction that can be defined as a *formatter*. The function *Format.fprintf* is such a generic pretty-printer and in fact the most general one in *Format*. *fprintf* takes a *Format.formatter* as first argument (in the name *fprintf*, *f* stands for *formatter*). This way, *fprintf* subsumes the entire *printf* family: choosing the *formatter* argument turns *fprintf* into a specific function.
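A minimal sketch of that specialization, using the standard `formatter_of_buffer`: the same `fprintf` call is redirected to an extensible string buffer simply by choosing the formatter. The function name `render` is ours, for illustration only.

```ocaml
let render () =
  let buf = Buffer.create 64 in
  (* A formatter whose low-level output device is the buffer. *)
  let ppf = Format.formatter_of_buffer buf in
  (* "@?" flushes the pretty-printing engine into the buffer. *)
  Format.fprintf ppf "x = %d@?" 42;
  Buffer.contents buf
```

`render ()` returns the string `"x = 42"`; replacing the formatter with `Format.std_formatter` would print the same text to standard output instead.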
For instance, *printf*, *eprintf*, and *sprintf* are equivalent to *Format.fprintf* applied to *std_formatter*, *err_formatter*, and *str_formatter*, respectively.

The polymorphic conversion specification *%a* is a specific addition to OCaml *format strings*. Intuitively, *%a* means "use the following function to convert the next argument". So in fact *%a* specifies two arguments, a function *f* and a value *x*, such that *f* can print value *x*. More precisely, *f* must print *x* on the *low-level device* specified by the *format string* that includes the *%a* conversion. Hence, if *fmt* has type (_, 'low_level_device, _, _, _, _) format6 and *x* has type 't, then *f* must have type 'low_level_device -> 't -> .... In short, *f* must be a pretty-printer. Conversion *%a* is particularly noteworthy because, as the type indicates, the conversion is *polymorphic*. Furthermore, since *%a* also abstracts a function, it allows composition of pretty-printers. In short, *%a* is the truly functional *format string* conversion!

### Conversion *%a* and *fprintf* at work

To illustrate pretty-printer composition, we write a pretty-printer for a simple expression algebraic datatype, then a polymorphic pretty-printer for pairs of values.

```ocaml
let pp_int ppf = fprintf ppf "%d"

let pp_pair pp_x pp_y ppf (x, y) =
  fprintf ppf "@[(%a,@ %a)@]" pp_x x pp_y y

let pp_int_pair = pp_pair pp_int pp_int

let rec pp_expr ppf = function
  | Int n -> fprintf ppf "%i" n
  | Add (e1, e2) ->
    fprintf ppf "(@[%a +@ %a@])" pp_expression e1 pp_expression e2
and pp_expression ppf = fprintf ppf "@[%a@]" pp_expr
```

Figure 4: *Format.fprintf* at work

The pretty-printer for simple integer expressions with addition uses a format string such as "(%a + %a)" to write additive expressions. The version given in Figure 4 adds break hints and ensures proper boxing through two mutually recursive pretty-printers.
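The expression datatype assumed by Figure 4 is not shown in this excerpt; the declaration below is our minimal assumption, together with a simplified single-printer variant of `pp_expr` that can be run directly.

```ocaml
(* Assumed datatype for the expressions of Figure 4. *)
type expr = Int of int | Add of expr * expr

(* A simplified, self-contained variant of Figure 4's printer:
   one recursive pretty-printer instead of two mutually recursive ones. *)
let rec pp_expr ppf = function
  | Int n -> Format.fprintf ppf "%i" n
  | Add (e1, e2) ->
    Format.fprintf ppf "(@[%a +@ %a@])" pp_expr e1 pp_expr e2

let example = Format.asprintf "%a" pp_expr (Add (Int 1, Add (Int 2, Int 3)))
```

While the output fits on one line, the "@ " hints print plain spaces, so `example` is `"(1 + (2 + 3))"`; on over-long expressions the boxes would split and indent the lines instead.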
As we can see, the composition of pretty-printers via the *%a* conversion has one peculiarity: the *formatter* argument is implicitly applied to the pretty-printing function argument. The polymorphic pair pretty-printer uses two *%a* conversions to print each element of the pair; hence, it needs to abstract two pretty-printers and a formatter. It then uses a *format string* like "(%a, %a)". Adding formatting indications to the *format string*, we get the *pp_pair* function of Figure 4. To define a pretty-printer for specific pairs, simply follow the usual functional programming way and apply *pp_pair* to two pretty-printers, as in *pp_int_pair*.

## 6. Semantic tags

Format offers another extension to Oppen's original proposal: the ability to interpret specific pretty-printing hints called semantic tags. In format strings, a tagged section is delimited by "@{<t> ... @}" for tag t. The interpretation of those tags is purely user-driven, as the programmer must supply appropriate tag handling functions. These come in two flavors, marking and printing, invoked when tags are respectively opened and closed. Tag printing functions are intended to emit formatting instructions (open a box, put a break, etc.) while tag marking functions simply emit a 0-length string marker associated to the tag. A basic use of tag marking functions is, for example, to print opening and closing markers in HTML. As tag markers are considered of length 0, they do not interfere with line splitting or indentation. Also note the order of invocation of tag handling functions: when a tag is opened, print_open_tag is called first, then mark_open_tag; when a tag is closed, mark_close_tag is called first, then print_close_tag.

We illustrate the use of semantic tags with two examples. The first example uses tags to optionally enable color printing for terminal outputs. The second produces two different outputs from the same tagged content. Both cases are handled almost seamlessly with semantic tags.
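As a minimal sketch of tag marking, the snippet below installs marking functions that wrap each tagged section in HTML-like markers. Note an assumption: it uses the current (OCaml >= 4.08) semantic-tag API, where the string tags discussed in this section appear as `Format.String_tag`; the text's `mark_open_tag`/`mark_close_tag` names belong to the older string-only API.

```ocaml
let tagged () =
  let buf = Buffer.create 16 in
  let ppf = Format.formatter_of_buffer buf in
  let fns = Format.pp_get_formatter_stag_functions ppf () in
  (* Marking functions emit 0-length markers around tagged sections. *)
  Format.pp_set_formatter_stag_functions ppf
    { fns with
      mark_open_stag =
        (function Format.String_tag t -> "<" ^ t ^ ">" | _ -> "");
      mark_close_stag =
        (function Format.String_tag t -> "</" ^ t ^ ">" | _ -> "") };
  (* Marking is off by default; enable it for this formatter. *)
  Format.pp_set_mark_tags ppf true;
  Format.fprintf ppf "@{<em>hello@}@?";
  Buffer.contents buf
```

Here `tagged ()` returns `"<em>hello</em>"`; since the markers have display length 0, they do not affect line splitting decisions.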
### Colors

The first example consists of coloring the output, for example for a logging module. Tags can be turned on or off depending on the output device, and the interpretation of how to color the output could even be device dependent. This example has two modes: when the output goes to a terminal, tags emit ANSI color escape sequences; otherwise they are left uninterpreted. The latter is better if the output formatter is a device where color escape sequences have no special meaning. In this example (see Figure 5), we restrict ourselves to three foreground colors: yellow, purple, and cyan, whose corresponding escape sequences are 33, 35, and 36. All attributes are off by default (hence the additional 0), except for yellow, which is bold (represented by the value 1). The mark_close_tag function always emits a "reset to default" sequence.

### Different outputs for the same tags

Tags provide a means to produce different concrete outputs from the same initial data. In this case, tags can be seen as a way to embed simple node annotations into the output. One could annotate the pretty-printer of any datatype with simple tags and process these tags differently according to the desired output (you could thus "serialize" a type through tags). The example produces two simple outputs, one in HTML and one in Emacs org format, from the same initial set of tags. The code is shown in Figure 6.

Note that the print_open_tag and print_close_tag functions do not have a formatter argument: left alone, they would always print to Format.std_formatter. Thus, it is necessary to bind them to the proper formatter whenever using them. For HTML, we want each closing </ul> tag to be indented the same as its opening companion. This is the primary use of a basic box. Then, we want all <li> list items to be vertically aligned inside the list. This desired behavior mixes boxes and printing, thus it can only be defined in tag printing functions and not in tag marking functions.
Note that we print <ul> tags as 0-length items. In effect, this mimics what marking functions do. In org, lists are simple vertically aligned paragraphs, preceded by a - sign. The usual notion of paragraph is handled by a hov-box in the print_open_tag and print_close_tag functions.

```ocaml
let str_to_esc_seq color_name =
  match String.lowercase color_name with
  | "cyan" -> Some "0;36"
  | "purple" -> Some "0;35"
  | "yellow" -> Some "1;33"
  | _ -> None

let color_tag_funs = {
  mark_open_tag =
    (fun tag_string ->
       match str_to_esc_seq tag_string with
       | None -> ""
       | Some eseq -> sprintf "\027[%sm" eseq);
  mark_close_tag = (fun _ -> "\027[0m");
  print_open_tag = (fun _ -> ());
  print_close_tag = (fun _ -> ());
}

let pp_colorized ppf fmt =
  pp_set_formatter_tag_functions ppf color_tag_funs;
  let mark_tags = pp_get_mark_tags ppf () in
  pp_set_mark_tags ppf true;
  kfprintf (fun ppf -> pp_set_mark_tags ppf mark_tags) ppf fmt
;;

open Format

let fmt = format_of_string
    "@[<v 0>Default@;@{<cyan>Cyan@}@;@{<yellow>Bold Yellow@}@;\
     @{<purple>Purple@}@;@{<uninterpreted>Default@}@]@."
```
Figure 5: Colored terminal output with semantic tags

```ocaml
let html_tag_functions ppf =
  let mark_open_tag s = if s <> "ul" then "<" ^ s ^ ">" else ""
  and print_open_tag = function
    | "ul" -> fprintf ppf "@[<b>@<0>%s@[<v 2>" "<ul>"
    | "li" -> fprintf ppf "@ @[<hov 0>"
    | "p" -> fprintf ppf "@[<hov 0>"
    | _ -> ()
  and print_close_tag = function
    | "ul" -> fprintf ppf "@]@ @<0>%s@]" "</ul>"
    | "li" -> fprintf ppf "@]"
    | "p" -> fprintf ppf "@]"
    | _ -> ()
  and mark_close_tag s = if s <> "ul" then "</" ^ s ^ ">" else ""
  in
  { mark_open_tag; mark_close_tag; print_open_tag; print_close_tag }

let pp_html ppf = dedicated_pp (html_tag_functions ppf) ppf
;;

let org_tag_functions ppf =
  let mark_open_tag _ = ""
  and print_open_tag = function
    | "ul" -> fprintf ppf "@[<v>"
    | "li" -> fprintf ppf "- @[<hov>"
    | _ -> ()
  and print_close_tag = function
    | "ul" -> fprintf ppf "@]"
    | "li" -> fprintf ppf "@]@ "
    | _ -> ()
  and mark_close_tag _ = ""
  in
  { mark_open_tag; mark_close_tag; print_open_tag; print_close_tag }

let pp_org ppf = dedicated_pp (org_tag_functions ppf) ppf
;;
```

```
<p>This paragraph precedes a list:</p>
<ul>
  <li>This first item might be too long</li>
  <li>Second item</li>
</ul>
```

Figure 6: Tag interpretation for different outputs

## 7. Guidelines for using Format

Proper use of Format requires a certain discipline to maximize its help. Here are some guidelines.

Guideline 1 (Boxing rules). Before using Format, thou shalt know thy boxes. In particular:

1. If you do not open a box, there is no guarantee and no semantics.
2. When the pretty-printer is reset, it empties all its stacks and queues and, as of today, opens a *b-box* with offset zero. This has changed in the past and could change again in any release.
3. So, a box is open by default. But, as you cannot assume which one, you shall always open one.

Guideline 2. Format helps those who help Format. Do not hesitate to add break hints or open new boxes.
This helps to avoid various symptoms, such as overly long lines or contents spread over many small lines vertically aligned at the right margin. Remember: the cost of opening boxes and adding break hints is dwarfed by the cost of outputting the content.

Guideline 3 (Flushing discipline). It is mandatory to flush the pretty-printing engine at the end of pretty-printing, to print all the material waiting in the pretty-printing engine data structures for good rendering. You shall not flush the pretty-printing engine at random, whether using "@." (print_newline) or "@?" (print_flush), because flushing automatically closes all boxes and tags. This breaks the box splitting discipline.

Guideline 4 (Newline). The formatting indications for newline "@\n", or flush and newline "@.", are delicate to use. Adding a newline which is not computed by the pretty-printing engine is risky at best, for it breaks the box splitting discipline. If you need to split lines, simply open a v-box and output normal break hints: inside a v-box, each break hint will print a newline as desired, but all open boxes will stay active and the document rendering will continue normally. As an extra benefit, inside a v-box line splitting occurs without a low-level device flush, thus usually improving efficiency.

Guideline 5 (Use fprintf and %a). Function fprintf takes a Format.formatter first argument. Via %a conversions, fprintf can compose pretty-printers in a generic and natural way.

Guideline 6 (Abstract the formatter). To make your routines generic and compatible with %a, promote them to pretty-printers by adding an explicit Format.formatter argument.

## 8. Document generation with Format

There are mainly two traditions when it comes to pretty-printing. In the functional world, a document-based approach has garnered much attention. The other tradition comes directly from Oppen's seminal article and has led to the Format module in OCaml. This section discusses these two different approaches.
Spoiler: this has a lot to do with the fundamental lazy/strict difference. We will also show how one can extend Format to create a document.

Document-based pretty-printing has been championed by Hughes [3] and Wadler [10], thereby promoting the ability to work at an algebraic level. In this setting, a document can be abstracted as either a string (Text), a potential line break with indentation (Line), the concatenation of two documents (Concat), or a group, that is, a unit whose line breaks are interpreted consistently. That is why a group is translated to a Format hv-box. The document type is defined in Figure 2.

In a lazy setting (call by name/need), values may be suspensions that have not yet been completely computed, but will be computed as much as desired, "on demand". For instance, appending lists can cost almost nothing, since the elements of the resulting list will be constructed and consumed as necessary by the function using the result (call by need features a kind of "pipeline effect" for free!). In a strict setting (call by value), data must be completely built before usage: in the case of list concatenation, it means that the entire resulting list is built before its first element is made available.

This could partly explain why building a complete document before printing is in some sense a conceptual notion in a lazy setting: the parts of the document will be built on demand, while printing the document. No extraneous data structure is built before printing; the values to be printed are computed and used to drive the pretty-printing routines. In a strict setting, these values are completely computed and built anyway, so there is no extra cost in pretty-printing them. Hence, in a strict setting, building documents could be much more expensive, since the final document data structure is entirely built before printing starts.
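The document algebra described above can be sketched as a plain OCaml datatype; constructor and box names here follow the `eval_doc` code of Figure 7, and `flat` is our own illustrative helper, not part of any library.

```ocaml
(* Box kinds; Figure 7's eval_doc only uses hv-boxes with an offset. *)
type box = Hv of int

(* The document algebra: text, potential line break, concatenation,
   and grouping (consistent interpretation of the breaks inside). *)
type document =
  | Text of string
  | Line of int
  | Concat of document * document
  | Group of box * document

(* Flat rendering, ignoring grouping: a Line of width n that is not
   split prints n spaces. *)
let rec flat = function
  | Text s -> s
  | Line n -> String.make n ' '
  | Concat (d1, d2) -> flat d1 ^ flat d2
  | Group (_, d) -> flat d
```

For example, `flat (Group (Hv 2, Concat (Text "a", Concat (Line 1, Text "b"))))` renders as `"a b"`; a real evaluator such as Figure 7's `eval_doc` would instead let the pretty-printing engine decide whether each `Line` splits.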
Using modern computers, document construction could be fast enough to be tolerable, or amount to a fraction of the total cost of pretty-printing. Actually, in the blog post announcing Pprint,³ an OCaml library providing combinators for building and printing documents, Pottier similarly notes:

> One limitation of the library is that the document must be entirely built in memory before it is printed. So far, we have used the library in small-to-medium scale applications, and this has not been a problem. In principle, one could work around this limitation by adding a new document constructor whose argument is a suspended document computation.

As said before, document building is driven by values. In Format, values drive the pretty-printing engine without the need for a document. In a way, Format's pretty-printing engine simply avoids the construction of documents, or uses a virtual construction of the document that it prints before even building it! Building a document instead of pretty-printing could simply be a debugging option of the pretty-printing engine: it could be useful to check the semantics of pretty-printers by looking at their pretty-printing meaning, instead of painfully guessing it from the pretty-printing engine output.

This suggests adding to the pretty-printing engine a formatter that would build a document instead of printing its virtual representation. Such an approach is presented in Figure 7. The basic primitives of Format are redefined to simply emit building elements into an abstract_document stack. In a sense, this stack is a primitive document at this point. In order to approximate the basic notions of group, line, and text, the stack needs to be post-processed via eval_abstract_doc to generate a value of type document. eval_doc closes the loop: it pretty-prints a document. This seat-of-the-pants implementation should be refined in the Format spirit: we would need to add a specific type of pretty-printing formatter that would output such a document.
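The first half of Figure 7 (the stack and the redefined primitives) falls outside this excerpt; the sketch below is our own reconstruction of what such an emitting layer might look like. The element constructors mirror those matched by the post-processing code (AText, AOpenBox, ACloseBox, ABreak); the `abox` type and the primitive names are our hypothetical choices, not Figure 7's actual definitions.

```ocaml
(* Hypothetical element type for the abstract_document stack. *)
type abox = AHv of int
type abstract_item =
  | AText of string
  | AOpenBox of abox
  | ACloseBox
  | ABreak of int

(* Redefined primitives: instead of printing, push an item onto the
   stack, most recent first. *)
let stack : abstract_item list ref = ref []
let emit i = stack := i :: !stack
let pp_string s = emit (AText s)
let pp_open_hvbox n = emit (AOpenBox (AHv n))
let pp_close_box () = emit ACloseBox
let pp_break n = emit (ABreak n)

(* A small pretty-printing run recorded as a stack. *)
let () =
  pp_open_hvbox 2; pp_string "a"; pp_break 1; pp_string "b";
  pp_close_box ()
```

Reversing the stack recovers the emission order, which is exactly what Figure 7's `loop (List.rev stack)` post-processing consumes.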
True, it sometimes feels like we are trying to artificially fit a square peg in a round hole. Also, the initial languages are not totally equivalent in terms of expressiveness; even Wadler's and Hughes's approaches have this problem. Yet, bridging the gap in the opposite direction, from combinators to Format/Oppen, is not trivial: the main problem is time efficiency. In their works, Chitil and Swierstra [1,9] arrive at an optimally bounded solution. Actually, Chitil's [1] solution starts from deriving a construction similar to the abstract_document of Figure 7.

## 9. Related work

Pretty-printing with combinators has garnered interest, notably in the lazy community. There has been a continuous trend toward, objectively, more efficiency as well as more algebraic considerations, and also, more subjectively, prettier outputs. However, it all began with Oppen [6] and work in LISP [11].

³ http://gallium.inria.fr/blog/first-release-of-pprint/

```ocaml
      let content, next = accu_until_close l in
      Concat (Group (b, content), loop next)
    | ABreak n :: l -> Concat (Line n, loop l)
    | ACloseBox :: _ -> failwith "No open box to close"
  and accu_until_close = function
    | [] -> failwith "No closing of open box"
    | ACloseBox :: l -> Text "", l
    | AText s :: l ->
      let content, next = accu_until_close l in
      Concat (Text s, content), next
    | AOpenBox b :: l ->
      let content, next = accu_until_close l in
      let content', next = accu_until_close next in
      Concat (Group (b, content), content'), next
    | ABreak n :: l ->
      let content, next = accu_until_close l in
      Concat (Line n, content), next
  in
  loop (List.rev stack)

let rec eval_doc ppf = function
  | Text s -> fprintf ppf "%s" s
  | Concat (d1, d2) -> fprintf ppf "%a%a" eval_doc d1 eval_doc d2
  | Group (Hv n, d) -> fprintf ppf "@[<hv %d>%a@]" n eval_doc d
  | Line n -> pp_print_break ppf n 0
```

Figure 7: Document generation in Format

Leijen has implemented Wadler's ideas in Haskell.
One drawback of Wadler's proposal is that its complexity is non-linear. This, together with the fact that earlier proposals were "not pretty enough", led Bernardy to offer another Haskell-based library built on the same algebraic principles. Chitil offers a very thorough summary of the existing proposals before giving a more efficient solution. Swierstra and Chitil seem to have the final word, for now, with their combinator-based functional pretty-printing algorithm, which retains the linear space and time complexities of Oppen's algorithm. Like Oppen's, it does not need the full document before starting to print text. This joint article extends both authors' previous works. In particular, Chitil provides benchmarks showing the relative efficiencies of his proposal, Hughes's, and Wadler's. Some years later, Kiselyov, Peyton-Jones and Sabry described a very elegant use of yield to implement an incremental linear pretty-printer.

Such document-based implementations have also been made for OCaml. For example, Pottier (with Pprint), Tayanovsky, and Lindig have implemented pretty-printer combinators inspired by Wadler's work. Pottier's module actually implements Leijen's ideas in the OCaml world. Outside the functional programming communities, there are various implementations of Oppen's algorithm. For example, Giese has implemented Oppen's algorithm in Java. Wadler's algorithm has implementations in languages like Rust [8] or JavaScript/Node. To the best of our knowledge, there is no standard library support as in OCaml, nor such deep development as in Haskell.

[8] https://github.com/epsilonz/pretty.rs

## 10. Conclusion

We have provided an extended introduction to Format, covering its basic and more advanced features, and we have also situated it in the more general pretty-printing landscape. Our hope is that this increases the understanding and the use of this module of the OCaml standard library.
Techniques based on Oppen's algorithm do have some limitations, since the initial goal was to strike a balance between expressiveness and efficiency. For example, it is impossible for the offset of Oppen boxes to depend on what is going to be printed in the future. The Format module exhibits the very same limitations. Document-based pretty-printing can have that feature: indeed, it may have access to the whole document before deciding what to do, since the document must be built. In this case, examining the whole document may render the output prettier.

By and large, we are convinced that Format is already a good solution to the pretty-printing problems of the working programmer. Yet, it can be improved in a number of ways. We are considering adding fully abstract printing (document production without I/O side effects), printing of polymorphic data structures (not to be confused with the now available polymorphic printing of monomorphic values), and tables (column-formatted outputs). We are currently experimenting with these subjects. We would also like to make these extensions work for all modules using format strings (Printf, Scanf). We are hopeful that this will provide new and interesting ways to handle formatted data in OCaml.

## References
A software tool for interactive generation, representation, and systematical storage of transfer functions for 3D medical images

M. Alper Selver a,∗, Felix Fischer b, Mehmet Kuntalp a, Walter Hillen b

a Dokuz Eylul University, Electrical and Electronics Engineering Department, Kaynaklar Campus, Buca, Izmir 35160, Turkey
b FH-Juelich, University of Applied Sciences, Medical Informatics Laboratory, D-52428 Juelich, Germany

ARTICLE INFO

Article history: Received 6 March 2006; Received in revised form 8 January 2007; Accepted 14 March 2007

Keywords: DICOM; Histogram; Java; Transfer functions; Visualization

ABSTRACT

As a tool that assigns optical parameters (i.e., color and transparency) used in interactive visualization, transfer functions have very important effects on the quality of volume rendered medical images. However, finding accurate transfer functions is a difficult, tedious, and time consuming task because of the variety of all possibilities. To address this problem, a software module, which can easily be plugged into any visualization program, is developed based on the specific expectations of medical experts. Its design includes both a new user interface to ease the interactive generation of volume rendered medical images and a volumetric histogram based method for the initial generation of transfer functions. In addition, a novel file system has been implemented to represent 3D medical images using transfer functions based on the DICOM standard. For evaluation of the system by various medical experts, the software was installed into a DICOM viewer. Based on the feedback obtained from the medical experts, several improvements were made, especially to increase the flexibility of the program. The final version of the implemented system shortens the transfer function design process and is applicable to various application areas.

© 2007 Elsevier Ireland Ltd. All rights reserved.
1. Introduction

The goal of medical visualization is to produce clear and informative pictures of the important structures in a dataset. Volume visualization is currently in use as a tool to help in diagnosis, surgery, radiological treatment planning, and anatomical education. Therefore, several research activities addressing the limitations of current visualization systems aim to come up with new techniques which will carry volume visualization from research and teaching hospitals to routine clinical work. Volume rendering [1,2] is an important technique since it displays 3D images directly from the original dataset and provides "on-the-fly" combinations of selected image transformations such as opacity and color. The only interactive part during the generation of volume rendered medical images is the transfer function (TF) specification; therefore, it is important to design effective and user friendly tools for handling this parameter [3]. Unfortunately, finding good TFs is a very difficult task because of the availability of various possibilities. Since this flexibility cannot be kept in strict bounds, finding an appropriate TF for a meaningful and intelligible volume rendering is very hard.

Current approaches for TF specification can be divided into three groups: manual, data centric, and image centric techniques. The Manual approach addresses the need for expert intervention in generating the final image. It states that data exploration is an essential element of creating the TF if the images are to fulfill the observer's expectations and to be considered efficient.

∗ Corresponding author. Tel.: +90 232 4127176. E-mail addresses: alper.selver@eee.deu.edu.tr (M. Alper Selver), fischer@fh-aachen.de (F. Fischer), mehmet.kuntalp@eee.deu.edu.tr (M. Kuntalp), hillen@fh-aachen.de (W. Hillen).

0169-2607/$ – see front matter © 2007 Elsevier Ireland Ltd. All rights reserved. doi:10.1016/j.cmpb.2007.03.008
It is based on the idea that methods which generate images without human interaction would produce nice but ineffective images, since they do not consider the specific needs of the users [3]. The Data-Centric approaches are based on measuring the dataset properties. Bajaj et al. [4] have used isovalue determination to find the contours that are hidden inside one another and show that the isosurface may have more than one component if the isocurve display is associated with a contour tree. Kindlmann and Durkin [5] have assumed that the features of interest in the data are the boundary regions between areas of homogeneous material. They used edge detection concepts from the computer vision area to define the values associated with these boundaries as opaque. Other data centric techniques use topology analysis [6], stochastic properties of datasets [7], and multidimensional data analysis, i.e., the creation of a 3D histogram of data values versus their first and second derivatives [5,8]. The Image-Centric approaches, on the other hand, are based on evaluating TFs on the basis of the images they produce, where the user can select from among all rendered images presented [9–13].

Currently, neither Data nor Image-Centric techniques are being used in daily clinical work by medical experts because both approaches have some important drawbacks. First of all, TF specification for medical volume visualization is a very subjective task, and experts always want to interact with volume data easily and quickly. Since automatic and semi-automatic methods (Data and Image-Centric approaches) cannot take advantage of user intuition, they leave the user with limited control. In other words, exploring the entire parameter space is no longer possible when using these techniques. Moreover, no proper user interface has been developed for interaction with the statistical or metric information provided by Data-Centric techniques, limiting their use by medical experts.
Image-Centric techniques effectively change the user’s search from an abstract mathematical one to a visually guided one, but these techniques offer no user interaction. This strategy requires substantial user testing (which has not yet been done) because relieving the user from the data exploration process may be counterproductive. Finally, as potentially hundreds of different renderings have to be made, they rely on fast rendering hardware to reach their full potential, which in turn increases costs and vendor dependency. Because of these disadvantages, the Manual approach is still the technique in use during daily clinical work. However, the Manual approach itself is time consuming and difficult, requiring user experience. When there is no prior knowledge about the dataset being visualized, it is hard and tedious to design a TF manually. In this study, a TF editor (TFE) has been developed with a semi-automatic initial TF design method and with several interaction techniques and functions to cover the drawbacks of the Manual approach and of other existing programs. A semi-automatic histogram based method is developed to shorten the design process by creating an initial TF. The advantage of the developed method is that it does not limit the user’s control of the parameter space while creating a good starting point. Moreover, several functions are added to increase the user interaction in the design process. These functions are developed to cover the drawbacks of current TF specification programs. One of these drawbacks is that existing graphical user interface (GUI) designs are not flexible enough [14]. No efficient GUI has been designed which provides information about the effect of changing a parameter (i.e. color variation) without applying it to the dataset [15]. Another drawback is that a method for initial TF generation to shorten the design process is not available. Rarely, a limited number of predefined TFs are presented.
Moreover, there is no systematic storage system for the optimized TF files; only an image-based history tool was implemented [16]. Also, the Digital Imaging and Communications in Medicine (DICOM) standard [17] has never been used to store the TF files. Finally, existing programs are not web enabled and are not properly developed for client–server based applications, thus limiting their use in teleradiology. The article is organized as follows: programming properties, the plug-in system, and Web based execution are explained in Section 2. The user interface design, the browser to store and access the TF files, and the TF editing area are presented in Section 3. Section 4 explains the novel systematic storage system that uses the DICOM standard for patient specific storage of 3D images via TF files. The histogram based TF generation algorithm, which makes an initial design of a TF for a new dataset prior to TF optimization, is established in Section 5. Section 6 discusses the optimization of the design by taking into account the feedback from the users. Final discussions and future plans are given in Section 7.

2. Programming

Plug-ins can be defined as pieces of software that can communicate and interact with a host application to provide additional functionality. In this context, the TFE is mainly developed to provide a flexible and highly interactive TF specification interface to existing visualization programs. It is designed as a software module that can be plugged into any 3D visualization program that supports a Java interface. The “plugging” process is controlled with a simple procedure which consists of the creation of an instance of the TFE, followed by the call of the necessary methods (Fig. 1).
A new instance of the TFE can be created either as an independent frame, which can be adjusted to any size and can be located anywhere on the screen, or as a fixed panel which is embedded inside the GUI of the host visualization program.

```java
public void showTFEditor() {
    // TFE initialization
    TFE tfe = new TFE(seriessPanel.this, DIRECTORY_SETTINGS_TFE);
    tfe.setDataArray(seriess.getSeriesHistogram(0));
    tfe.setDataRangeMin(seriess.getHistogramMinX(0));
    tfe.setDataRangeMax(seriess.getHistogramMaxX(0));
    tfe.setMaximum(seriess.getHistogramMaxY(0));
    tfe.setSeriesInstanceUID(seriess.getSeriesInstanceUID(0));
    panelId.setPredefinedTransferFunctions(tfe.getPresets(seriess.getSeriesInstanceUID(0)));
    tfe.calculateDefaultTransferFunction();
    tfe.updateTransferFunction(seriessPanel.this.transformFunctionEditor.getActiveTF());
}
```

Fig. 1 – Java code for creating an instance of the TFE.

This selection is controlled by the first parameter of the constructor method. The second parameter is used to determine the path to locate the predefined TF files. Once the new instance of the TFE is created, the ‘setDataArray’ method must be called to send the volumetric histogram data as the input. The other compulsory method to be called is the ‘setSeriesInstanceUID’ method (explained in detail in Section 4). The rest of the input methods (i.e. ‘setDataRangeMin’, ‘setDataRangeMax’, and ‘setMaximum’) are optional, and the TFE makes the necessary calculations from the histogram data if they are not called. The communication between the host visualization program and the TFE is also established with similar methods for sending the output data (i.e. the TF information that the host program receives). The output of the TFE uses two hashtables, one of which consists of the coordinates of the nodes that create the TF versus node colors, while the other one consists of the coordinates of the nodes versus their opacity values.
The Java class which sends the output information extends the native “Observable” class. Whenever the TF information is requested (by pressing the “apply TF” button), the host program receives the TF information by calling the ‘getTransferFunction()’ method, which in turn calls the native notifyObservers() method of Java and sends the hashtables to the host visualization program. The hashtables are designed in a format that can directly be used in the Visualization Toolkit (VTK) [18], which is an open source and widely used visualization software package. If necessary, the table formats can easily be changed to satisfy the requirements of any other software. To support these changes and future improvements of the developed software, standard Javadoc documentation is also prepared. The software architecture of the TFE supports the simultaneous creation of multiple instances. The main advantage of this property is that if the host visualization program supports multiple visualizations in parallel, an instance of the TFE can be created for interaction with each visualization study. For medical imaging, platform-independent tools, which can easily be transferred and used on multiple platforms, are necessary because of the heterogeneous environments at medical centers. Since a major claim of the TFE is its ability to be ‘plugged’ into any visualization program, the implementation should be as independent of the operating system (OS) as possible. Application of the platform-independent programming language Java enables the creation of plug-in tools which can easily extend the basic functions of the systems. Therefore, Java is used in the implementation stage of the TFE. In this way the TFE runs on almost any OS supporting Java. When 3D image preprocessing is conducted on an advanced workstation or server, it can considerably reduce the time necessary to achieve the same results on low-cost computers.
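As a hedged illustration of the output path described earlier in this section — an editor that extends Observable and, when the TF is applied, pushes the two hashtables (node coordinates versus colors and node coordinates versus opacities) to the host — the pattern could look like the following sketch. The class and method names here are illustrative, not the TFE’s actual API:

```java
import java.util.Hashtable;
import java.util.Observable;

// Illustrative sketch (not the TFE's actual class): an Observable editor
// that notifies the host program with two hashtables when "apply TF" is pressed.
class TfOutputSketch extends Observable {
    // node x-coordinate -> RGB color of the node
    final Hashtable<Integer, int[]> nodeColors = new Hashtable<>();
    // node x-coordinate -> opacity in [0, 1]
    final Hashtable<Integer, Double> nodeOpacities = new Hashtable<>();

    void setNode(int x, int[] rgb, double opacity) {
        nodeColors.put(x, rgb);
        nodeOpacities.put(x, opacity);
    }

    // Called when the user presses the "apply TF" button.
    void applyTransferFunction() {
        setChanged();                                      // mark the TF as modified
        notifyObservers(new Object[] { nodeColors, nodeOpacities });
    }
}
```

A host program would register a `java.util.Observer` on such an object and translate the two tables into its own color and opacity transfer functions (e.g. for VTK).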
With the recent advances in network and Internet technology, client–server based 3D image processing systems have become more efficient and popular. However, there is a demand for achieving maximum interactivity, even on the low-cost client side. From this point of view, another advantage of Java is that it can also be used for distributed applications within the global network, primarily for “on-the-fly” extension of the functionality of popular web browsers (i.e. Internet Explorer, Netscape Navigator, etc.). So, the applet version of the TFE, which is executed dynamically by a Java enabled web browser without a need for reinstallation, can be used for the client side of such a system. For example, when the main visualization program is running on a server, the users can use the Internet or any network protocol to execute the TFE and interact with 3D images. Being Java based, the TFE requires at least Java Virtual Machine (JVM) 1.4, which needs approximately 100 MB. However, if a JVM is not installed, a local installation package of 40 MB which contains the necessary Java library files is sufficient to execute the TFE. When the TFE is running, it requires 30 MB of main memory. The format of the TFE software package is a compressed Java archive (jar) file with a size of 500 KB. The TF files, which are used to store TF information (i.e. shape, color, and opacity), have a considerably smaller size (1 KB at most).

3. Implementation of the GUI

During the implementation of the GUI, the requirements for TF specification in medical environments were determined by getting feedback from the users (i.e. medical experts) [19]. The following constraints are considered in the design: (1) knowledge and experience level of the medical experts, (2) physical environment of the medical experts, (3) working style of the medical experts, (4) tasks to be performed by the system, and (5) problems that medical experts would like the system to solve.
As a task analysis, a detailed list of functional specifications and user interface limitations has been prepared. Moreover, interviews with experts were conducted before and during the design process. Due to the different display preferences of the medical experts (i.e. ordinary, multiple, or wide screen monitors), the TFE is designed so that the user can adjust it to any size that is found appropriate for interaction. The TFE’s user interface consists of six main regions (Fig. 2): the Title Bar, the Menu Bar, the Toolbar, the Browser, the Transfer Function Editing Panel (TF Panel), and the Status Bar. The Title Bar provides the name and directory path information of the active TF file which is displayed on the TF Panel. The Menu Bar provides pull down menus for accessing file system operations such as opening, closing, saving, and deleting TF files and folders (File Menu), for displaying labels, grids, texture, and the logarithmic histogram (View Menu), and for providing information about using the Browser and the TF Panel (Help Menu). The Toolbar displays pictorial representations of the different functionalities of the Browser and the TF Panel. The Browser panel provides interactive access to the file system. The TF Panel helps in the manipulation and design of the TFs; the histogram of the volume data is also displayed on this panel. The Status Bar gives information about the last changes made on the TF Panel. The user can change the size of these GUI elements relative to each other (i.e. increasing the TF Panel area by decreasing the Browser area), which helps to focus on the area that is being used at that moment. Moreover, the Toolbar can be removed from the TFE or can be taken outside of the TFE for ease of use.

3.1. **TF Panel and TF Specification**

The TF Panel is designed to allow the user to easily manipulate TFs in terms of adding and removing nodes or changing their opacities and colors. A TF consists of two different control points: color and opacity nodes.
Color nodes (circle shape) have color and opacity values. Opacity nodes (square shape) have only opacity values. Thus, a color node can be used to change both the color and opacity variations of voxels, while an opacity node can only be used to change the opacity variation. Manipulating the TF in terms of adding or removing nodes can easily be done by using a pop-up menu that appears by right clicking over the histogram. By dragging the nodes, the user can change node positions and determine the shape of a TF. The most important feature of the TF Panel is the efficient representation of the color and opacity variation of the voxels via an easy-to-understand and easy-to-use interface. Previously in [15], color and opacity variations were represented on different graphs, which leads to confusion due to the control of two functions instead of one. A disadvantage is that the user cannot see directly whether a color will be visible or not (because of the unknown opacity value). In addition, the editing area for each graph is restricted due to the use of two different graphs. In the TFE, color and opacity change is represented on the same graph by using a colorbar and by filling the shapes (circle or square) of the nodes with their own colors. When the user changes the opacity value of a node (by dragging it in the y direction), the visibility of that node’s color changes, i.e. it becomes more opaque for larger y values and more transparent for smaller y values. This approach gives the user the ability to change opacity and color values with visual feedback. However, it is not sufficient because the user is still unaware of the overall color variation. So, a colorbar is designed and placed at the bottom of the TF Panel (Fig. 2). This colorbar shows which color corresponds to which Hounsfield (HU) value and the visibility of that color. If a color is opaque, it can be seen clearly on the colorbar.
As the color of an HU value changes to transparent, the background texture of the colorbar becomes more visible, warning the user of the reduced visibility of the corresponding color. By using the colorbar, the user can see the effects of changing the opacity and color of a node on the visualized image without performing the time consuming procedure of applying the TF to the dataset. At the background of the TF Panel, a histogram plot is given to inform the user about the intensity of HU values in the volume. It should be noted that the histogram does not help the user to see which part of the volume image will be affected by that color; instead, it gives important information on how many pixels will be affected by a color/opacity change.

3.2. **The Toolbar and its Functions**

To cover the shortcomings of the manual approach, several functions and options have been developed based on the experiences of medical experts. The Toolbar provides easy access to these functions, some of which are explained below. Some standard shapes (i.e. Step, Ramp, Triangle, and Pulse) are the basic forms of the TFs. Since starting with one of these shapes significantly reduces the design time, a toolbar option is provided to allow the user to insert one of these four default TFs. When many of the tissues lie in a very narrow range of HU values (i.e. soft tissues such as white matter, gray matter, and CSF), it can be hard to put the nodes of a TF at the exact desired positions. Thus, another toolbar option provides a dialog box with which the user can change the HU or opacity value of a node directly by filling the corresponding fields with numerical values. Another option allows the user to adjust the range of the histogram by using the pointers shown in Fig. 2. This is especially useful when dragging a node to an exact position is difficult due to the need to deal with a range that consists of several nodes (Fig. 3).
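The colorbar’s visibility cue described earlier in this section can be understood as alpha-blending each node’s color over the background texture, so that low opacities let the texture show through. A minimal sketch follows; the blend formula and names are assumptions for illustration, not taken from the TFE’s source:

```java
// Hypothetical sketch (not the TFE's actual code): model the colorbar's
// visibility cue by alpha-blending a node color over a background color.
class ColorbarBlendSketch {
    static int[] blendOverBackground(int[] rgb, double opacity, int[] backgroundRgb) {
        int[] out = new int[3];
        for (int c = 0; c < 3; c++) {
            // standard "over" blend: result = a * color + (1 - a) * background
            out[c] = (int) Math.round(opacity * rgb[c] + (1.0 - opacity) * backgroundRgb[c]);
        }
        return out;
    }
}
```

With opacity 1.0 the node color is shown unchanged; as opacity approaches 0.0 the result converges to the background texture color, which is exactly the warning effect described above.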
Usually the users want to change a part of the TF that consists of several nodes without changing their relative positions to each other. The “Region of Interest (ROI)” option gives the opportunity of selecting the nodes in a ROI and moving/scaling them (Fig. 4). A TF can be applied to a volume by using the “Apply TF” button. The synchronous mode, which is designed for powerful computers, applies the TF automatically whenever the user changes a parameter, such as dragging/inserting/removing a node or changing the color/opacity of a node. Another option allows the user to see the histogram and TFs in logarithmic scale. This option is useful in two cases: (1) when the histogram has strong peaks that suppress the visibility of the rest, the logarithmic view allows the user to see the histogram in detail; (2) when rendering datasets with relatively large regions of uniform density, the resulting images are most sensitive to detailed changes in the TF when the opacity is set nearly to zero [20]. Editing the TF within the lowest 5–10% of the range mostly results in the most visible differences in the rendered images, allowing the user to differentiate finer structures in the data. Larger values correspond to renderings that appear nearly opaque, thus obscuring large portions of the present finer structures. However, for smaller or thinner regions, it is sometimes necessary to use higher opacity values to make the region appear in the rendering. So, a better view can be obtained by scaling the vertical axis of the histogram graph while the horizontal axis scale is fixed. If the transfer function opacities are scaled logarithmically, the result is a graph in which the regions of intensity are better distributed without pushing any of them off the graph (Fig. 5). Consequently, it is easier for a user to precisely control the intensity of a region in the volume. Finally, the undo button can be used to undo a manipulation step when necessary.

3.3. **The TF file system and navigation with the browser**

As previously discussed, TF specification is a very time consuming and tedious task. During the TF specification process, a user iteratively explores a very large space of TF parameters. Moreover, the time to optimize a TF depends strongly on the experience of the user. Therefore, saving an optimized TF file (for a patient or a study) and using it later for similar cases can save time and increase efficiency. Also, the TF files which were previously found useful guide the users in finding an appropriate TF. To store TF information, TF files are created, which can be processed (i.e. opened, closed, copied, renamed, and deleted) directly by using the OS or the menu bar. When the TFE is first installed, a directory with the name ‘TF’ is created under the user’s working area. Preset TF files, which were previously designed for CT and MR, are located under this folder. The user can use these default TF files or can create new ones and save them. Without any navigation system, it would be hard to search for previously saved TF files as the number of stored files increases over time. Moreover, the user should be able to see and reach the presets and previously saved TF files in parallel with editing. Taking these facts into account, a new browser is designed to access the file system and store the TF files systematically (Fig. 6). All TF file manipulations as well as OS operations can be done using the browser. If the OS operations are used for editing the files, the browser can be updated by using the browser-refresh button. Also, at each execution of the TFE, the dynamic tree structure of the browser checks the changes in the directory and updates the browser automatically. The file access system provided by the browser helps the user with the systematic storage of TF files and, with this opportunity, the browser can also be used as a history tool. This is explained in more detail in Section 4.
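As an illustration of the browser’s refresh step — rebuilding its tree from the on-disk ‘TF’ directory after OS-level edits — a recursive scan might look like the following sketch. The ‘.tf’ file extension and the method name are assumptions, not details from the paper:

```java
import java.io.File;
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: recursively collect TF files under a root directory
// so a browser tree view could be rebuilt after external (OS-level) changes.
class TfDirectoryScanSketch {
    static List<String> scanTfDirectory(File root) {
        List<String> found = new ArrayList<>();
        File[] entries = root.listFiles();
        if (entries == null) return found;             // not a directory
        for (File e : entries) {
            if (e.isDirectory()) {
                found.addAll(scanTfDirectory(e));      // descend into subfolders
            } else if (e.getName().endsWith(".tf")) {  // assumed TF file extension
                found.add(e.getPath());
            }
        }
        return found;
    }
}
```

A refresh button (or a check at startup, as the TFE does) would simply re-run such a scan and rebuild the tree from the result.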
Moreover, since every logged in user automatically uses the directory that belongs to his/her account, he/she can store his/her personal TF files. Even when all users use the same account, they can create their own directories and store the TF files.

4. The representation and storage system

As mentioned in Section 3.3, once a good TF is found for a study, it forms a good starting point for the same type of studies and can be optimized with little effort. Therefore, the storage and representation of previously optimized TF files is an important factor that can shorten the design process. Inspired by the Gray Scale Softcopy Presentation State (GSPS) for 2D images, TF files are used to store and represent 3D images. GSPS objects are separate series in a DICOM study in which source images are located only via references, i.e. they are not copied. This way, source images always remain unchanged and multiple presentation states can be applied to the same image. GSPS stores all relevant information about the visual representation of a DICOM image in a separate DICOM object. To describe how an image should be presented on a softcopy display, GSPS objects precisely define all necessary image processing steps. As GSPS objects make it possible to store and distribute the presentation of a 2D image between softcopy devices efficiently, the same method can also be applied to 3D images via appropriate TF files. There are two types of TF files for storage: “Global” and “Patient specific”. “Global” TF files (Fig. 6) are valid for all studies of the same kind (i.e. abdomen, head, and brain). The user can give any name to these TF files and can save them under any folder. “Patient specific” TF files (Fig. 6) are specially designed to be valid for only one DICOM series, which also means for only one patient.
For ‘Patient specific’ saving, the main visualization program must, in addition to the histogram data, send the Series Instance Unique Identifier (SI-UID) (see Appendix A) of the visualized DICOM series to the TFE. This can be done by using the ‘setSeriesInstanceUID()’ method (Fig. 1) when creating an instance of the TFE. If the user selects ‘Patient specific’ saving, this SI-UID number is saved into the TF file. When the same DICOM series is visualized the next time, the TFE searches the TF directory for the folder with the same SI-UID number. If such a folder is found, the previously saved “Patient specific” TF files in that folder appear in yellow at the bottom left corner of the 3D screen and can be applied to the volume data with a single mouse click (Fig. 7). This storage system approach has three important advantages: (1) Adjustments performed on 3D medical data during diagnosis can be stored as TF files instead of saving the larger 3D images to PACS, which would in fact only be a duplication. When a 3D image is reconstructed, this object can be used to apply a previously saved TF. (2) With this system, the user does not have to spend time reproducing the 3D images which have previously been found and used. Moreover, the system automatically warns the user by showing the previously saved ‘Patient specific’ TF files with a yellow label next to the 3D image. (3) TF files are very small in size; therefore, they are inherently well suited to 3D teleradiology applications. For instance, the image series can be transferred once at the beginning of the teleradiology session and only the TF files need to be exchanged online.

5. TF initialization

Automatic generation of an initial TF design is a very critical step, especially when dealing with a new dataset. For instance, an optimized TF for a 3D image obtained from an abdominal CT series is also useful for another series of abdominal CT images.
At least, the existing TF provides a very good starting point and can be optimized with little effort. However, if a new dataset is being visualized, it is difficult to start the design of a TF without any initial basis. Therefore, a semi-automatic TF generation algorithm is implemented in the TFE. The algorithm is histogram based and uses expert knowledge.

Fig. 7 – Patient based storage of 3D images via TF files for the same dataset: (a) the TF optimized for heart and bones and (b) the TF optimized for lungs and airway trees. The labels at the bottom left warn the user about the TFs that were previously designed for the dataset.

In medical volume visualization, one advantage is that the structures to be visualized are known to exist in a specified range of gray values (i.e. HU values in a CT series). For example, in an abdominal CT series, the structures of interest are mostly the kidneys, aorta, and liver, but not the skin or fat. It is known that these tissues lie in a (at least roughly) known range of HU values. The developed algorithm uses this knowledge. First, the user enters the HU range (in CT) or gray value range (in other modalities) for the tissues of interest and then selects a color for each tissue of interest. Next, the histogram is smoothed with an averaging filter (Fig. 8a) and peaks are found by detecting the positive to negative crossings of the first derivative of the volume histogram. If a peak is inside the HU or gray value range of a tissue, then the range between the first negative and positive crossings before and after the peak of the derivative of the volume histogram is assigned to that tissue. The assigned tissue is represented using a trapezoid containing one color and three opacity nodes. When the assigned ranges of two tissues overlap, the last opacity node of the first tissue is placed at the intersection point with an opacity value of 0.3. If a peak is close to the specified range(s), it might suppress other peaks.
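The smoothing and derivative-based peak search just described can be sketched as follows; the window size, boundary handling, and names are illustrative assumptions rather than the TFE’s actual implementation:

```java
import java.util.ArrayList;
import java.util.List;

// Hedged sketch of the histogram-based peak search described above: smooth
// the volume histogram with an averaging filter, then report peaks at
// positive-to-negative sign changes of the first derivative.
class PeakSearchSketch {
    static int[] findPeaks(double[] histogram, int window) {
        int n = histogram.length;
        double[] smoothed = new double[n];
        for (int i = 0; i < n; i++) {               // moving-average smoothing
            double sum = 0.0;
            int count = 0;
            for (int j = Math.max(0, i - window); j <= Math.min(n - 1, i + window); j++) {
                sum += histogram[j];
                count++;
            }
            smoothed[i] = sum / count;
        }
        List<Integer> peaks = new ArrayList<>();
        for (int i = 1; i < n - 1; i++) {
            double left = smoothed[i] - smoothed[i - 1];     // derivative before i
            double right = smoothed[i + 1] - smoothed[i];    // derivative after i
            if (left > 0 && right <= 0) peaks.add(i);        // + to - crossing
        }
        int[] result = new int[peaks.size()];
        for (int i = 0; i < result.length; i++) result[i] = peaks.get(i);
        return result;
    }
}
```

A peak found inside a tissue’s specified HU or gray value range would then be turned into the trapezoid of one color node and three opacity nodes described above.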
Therefore, if no peak is found in a specified range, a Gaussian, centered at the peak and with a variance equal to the peak variance, is fitted to the histogram (Fig. 8b). Then the difference between the histogram and the fitted Gaussian is calculated. The same peak search is then applied to the residue signal, and this goes on until a tissue is assigned within the selected range (Fig. 8c). Initially, the color variation of the initial TF is set starting from the color of the first tissue up to the color of the last tissue. Then the opacity values of all nodes are fixed to 0.3. Of course, the user can change the color variation and opacity values of the nodes to optimize the TF. The results for a sample dataset are presented in Fig. 9. The visualized images are taken from an abdominal CT image series. The tissues of interest are selected to be the kidneys, bones, and aorta. The initial TF and corresponding rendering results can be seen in Fig. 9a and b. Fig. 9c shows the TF optimized by an expert and Fig. 9d shows the results of rendering. Although there is a clear difference between the initial TF and the optimized TF, the initial TF design provides a good starting point for the expert. The proposed method can be used for images derived from all modalities. However, it is most effective on CT volumes because of the well known HU values of the tissues. In other modalities, the knowledge and experience of the expert are critical, since the gray value range of a tissue should be known to properly design the initial TF. By using this approach, several TF files for different datasets, including CT Abdomen, CT Lung, CT Aorta, CT Neck, CT Head, and MR Brain, have been prepared and optimized (Fig. 6). These TF files provide a good basis of initial TFs and are thus included in the software of the presented system.

6. User evaluation

For testing the TFE, it is plugged into exploreDICOM [21], which is a dedicated medical image viewer.
Twelve medical experts were asked to test the TFE and fill in an evaluation form. These experts are affiliated with the Dokuz Eylul University Medicine Faculty Radiology Department, where a complete digital radiology system environment (i.e. Radiology Information System (RIS), Picture Archiving and Communicating System (PACS), diagnostic/clinical workstations, and web viewers) has been in use for several years. 3D visualization programs are routinely used for diagnosis and treatment planning in the department. Medical experts who work on images of the brain and abdominal regions, in particular, use the 3D technology more frequently. The experts joining this study have been using 3D visualization for 1 to 7 years with a frequency of 5–12 times a month. Twelve questions were asked under four headings, and grading was from 1 to 5, where 1 is the best. The views of the experts and the average evaluation value (AVV) for each heading are as follows: 1. **GUI and ease of use**: The experts find the TFE GUI elements easy to use and understand (AVV: 2). 2. **TF Panel and specification properties**: The menu options are found to be sufficient. However, it is pointed out by the experts that there is a strong need for an information panel which interactively shows the HU values of the tissues. An example of this information panel may be a “mouse listener” which shows the HU/gray value of the pixel that the mouse cursor points at. Such information is indicated to be necessary to give the user a coarse idea of where to locate the high opacities to visualize the tissue of interest. The visual feedback and manipulation properties of the TF are found to be acceptable (AVV: 2). 3. **The Browser and the file system**: The usability of the Browser, the TF file format, and its properties such as the history tool are found to be very helpful.
Default presets for different studies and modality types are found to be properly designed; however, small adjustments are still required when the imaging modality is changed (AVV: 1). 4. **Patient based storage and representation**: The approach is found to be very useful. Nevertheless, although the patient based storage of the TF files is found to be very effective, the experts prefer a system that stores TF files in the PACS, not on the computer where the software is running (AVV: 2). These evaluation forms show that the initial results are very promising for the TFE. Feedback from medical experts shows that the TFE is very useful for interacting with visualizations and for producing informative images, especially for CT and angiographic datasets. It is pointed out by the experts that it would be more efficient to use the TFE after a segmentation process which eliminates unnecessary information from the data, especially in MR series where soft tissues overlap. For instance, after segmenting the liver from the other abdominal organs, it is possible to classify liver tissue, vessels, and tumors by assigning different colors and opacities using transfer functions [22]. Another test was made to measure the time needed by both inexperienced and experienced users to define an accurate TF. For this test, three experienced and six inexperienced users were presented with the following three different studies: CT Skull (easy to classify), CT Abdomen (hard to classify), and MR Brain (very hard to classify). These studies were selected by the experts due to their different levels of complexity. The organs of interest, the time needed to classify the studies for the first time with and without the proposed TF generation method, and the time needed to classify the same type of studies after the first time are presented in Table 1. The results are calculated by taking the average time of the group members in each category.

7. Results and discussion

A transfer function editor (TFE) has been developed for medical volume visualization. Limitations of TF specification, especially the spatial information drawback, have been overcome by taking into account HU values and expert knowledge, and by using a histogram based method. A user friendly and easy to use GUI has been implemented for fast adaptation to the software. A file type has been implemented that can be used for OS and teleradiology applications and for the storage of 3D medical images using the DICOM standard. A history tool has been constructed to access these files. In addition, the extended TF design functionalities of the developed software ease the TF design process. The results show that the proposed method decreases the time needed to find an accurate TF by almost half in CT studies. It saves less time in MR studies because there is no standard measure like the HU values used in CT studies. Therefore, the users give approximate gray value ranges for the tissues as the a priori information. It can also be observed from the results that once a proper TF is found for a study, recalling it by using the TFE for the same type of studies reduces the optimization time significantly. The flexible and platform independent design of the TFE allows users with different display preferences, distinct experience levels, and different hardware platforms to use the program easily. The new method of initial TF generation shortens the design process and provides more guidance than including only presets, as current editors do. The DICOM based TF file and 3D image storage system is novel and less time consuming than other image-based history tools. In conclusion, the developed TFE is found to be helpful and usable in clinical 3D visualization analysis.
It is currently in use in the DEU Radiology Department, especially for classification tasks.

Table 1: Times needed by inexperienced and experienced users to define an accurate TF

<table> <thead> <tr> <th rowspan="2"></th> <th colspan="3">CT skull (bones, skin)</th> <th colspan="3">CT abdomen (kidneys, aorta, liver)</th> <th colspan="3">MR Brain (white matter, gray matter, CSF)</th> </tr> <tr> <th>First time without proposed method (min)</th> <th>First time with proposed method (min)</th> <th>After the first time (min)</th> <th>First time without proposed method (min)</th> <th>First time with proposed method (min)</th> <th>After the first time (min)</th> <th>First time without proposed method (min)</th> <th>First time with proposed method (min)</th> <th>After the first time (min)</th> </tr> </thead> <tbody> <tr> <td>Experienced user</td> <td>&lt; 5</td> <td>&lt; 25</td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> </tr> <tr> <td>Inexperienced user</td> <td>&lt; 10</td> <td>&lt; 22</td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> </tr> <tr> <td></td> <td>&lt; 15</td> <td>&lt; 22</td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> </tr> </tbody> </table>

The presented system can further be improved by adding more features aimed at guiding the clinician without limiting user interaction.

Acknowledgements

The authors would like to thank Prof. Dr. Oğuz Dicle and the Dokuz Eylül University Radiology Department for their contributions to this study. The authors would also like to thank the reviewers for their valuable critiques toward the improvement of this paper.

Appendix A

DICOM Information Object Definitions (IODs), which specify the attributes for each object, are descriptions of the entities to which the standard refers. These entities include objects such as Patient and Study. An instance of an IOD refers to an actual object of the corresponding class.
Attributes of an IOD are divided into three classes, which specify whether or not their presence is required in every instance of an IOD: some attributes are compulsory, some are desired, and some are optional. Unique Identifiers (UIDs), which are long numeric strings, are used to identify instances and definitions. Each IOD and attribute has its own UID. In the present study, the Series Instance UID, which is a compulsory attribute, is used as the identifier.
To Mock or Not To Mock? An Empirical Study on Mocking Practices

Davide Spadini∗†, Maurício Aniche†, Magiel Bruntink∗, Alberto Bacchelli†
∗Software Improvement Group †Delft University of Technology
{d.spadini, m.bruntink}@sig.eu {d.spadini, m.f.aniche, a.bacchelli}@tudelft.nl

DOI: 10.1109/MSR.2017.61. Published in Proceedings - 2017 IEEE/ACM 14th International Conference on Mining Software Repositories, MSR 2017 (peer-reviewed version).

Abstract—When writing automated unit tests, developers often deal with software artifacts that have several dependencies. In these cases, one has the possibility of either instantiating the dependencies or using mock objects to simulate the dependencies’ expected behavior. Even though recent quantitative studies showed that mock objects are widely used in OSS projects, scientific knowledge is still lacking on how and why practitioners use mocks. Such knowledge is fundamental to guide further research on this widespread practice and to inform the design of tools and processes to improve it. The objective of this paper is to increase our understanding of which test dependencies developers (do not) mock and why, as well as what challenges developers face with this practice.
To this aim, we create MOCKEXTRACTOR, a tool to mine the usage of mock objects in testing code, and employ it to collect data from three OSS projects and one industrial system. Sampling from these data, we manually analyze how more than 2,000 test dependencies are treated. Subsequently, we discuss our findings with developers from these systems, identifying practices, rationales, and challenges. These results are supported by a structured survey with more than 100 professionals. The study reveals that the usage of mocks is highly dependent on the responsibility and the architectural concern of the class. Developers report that they frequently mock dependencies that make testing difficult and prefer not to mock classes that encapsulate domain concepts/rules of the system. Among the key challenges, developers report that keeping the behavior of the mock compatible with the behavior of the original class is hard and that mocking increases the coupling between the test and the production code.

I. INTRODUCTION

In software testing, it is common that the software artifact under test depends on other units [36]. Therefore, when testing a unit (e.g., a class in object-oriented programming), developers often need to decide whether to test the unit and all its dependencies together (similar to integration testing) or to simulate these dependencies and test that unit in isolation. By testing all dependencies together, developers gain realism: the test will more likely reflect the behavior in production [41]. However, some dependencies, such as databases and web services, may (1) slow the execution of the test [31], (2) be costly to properly set up for testing [37], and (3) require testers to have full control over such external dependencies [18]. By simulating its dependencies, developers gain focus: the test will cover only the specific unit and the expected interactions with its dependencies; moreover, inefficiencies of testing dependencies are mitigated.
To support the simulation of dependencies, mocking frameworks have been developed (e.g., Mockito [7], EasyMock [2], and JMock [3] for Java, Mock [5] and Mocker [6] for Python), which provide APIs for creating mock (i.e., simulated) objects, setting return values of methods in the mock objects, and checking interactions between the component under test and the mock objects. Past research has reported that software projects are using mocking frameworks widely [21] [32] and has provided initial evidence that using a mock object can ease the process of unit testing [29]. However, empirical knowledge is still lacking on how and why practitioners use mocks. To scientifically evaluate mocking and its effects, as well as to help practitioners in their software testing phase, one has to first understand and quantify developers’ practices and perspectives. In fact, this allows both to focus future research on the most relevant aspects of mocking and on real developers’ needs, as well as to effectively guide the design of tools and processes. To fill this gap of knowledge, the goal of this paper is to empirically understand how and why developers apply mock objects in their test suites. To this aim, we analyzed more than 2,000 test dependencies from three OSS projects and one industrial system. We then interviewed developers from these systems to understand why some dependencies were mocked and others were not. We challenged and supported our findings by surveying 105 developers from software testing communities. Finally, we discussed our findings with a main developer from the most used Java mocking framework. The main contributions of this paper are: 1) A categorization of the most often mocked and not mocked dependencies, based on a quantitative analysis on three OSS systems and one industrial system (RQ1). 2) An empirical understanding of why and when developers mock, after interviewing developers of analyzed systems and surveying 105 developers (RQ2). 
3) The main challenges faced by developers when making use of mock objects in the test suites, also extracted from the interviews and surveys (RQ3). 4) An open source tool, namely MOCKEXTRACTOR, that is able to extract the set of mocked and non mocked dependencies in a given Java test suite. The tool is available in our on-line appendix [12] and on GitHub.

II. BACKGROUND

“Once,” said the Mock Turtle at last, with a deep sigh, “I was a real Turtle.” — Alice In Wonderland, Lewis Carroll

A. Mock objects

Mock objects are used to replace real software dependencies by simulating their relevant features [28]. Typically, methods of mock objects are designed to return some desired values given specific input values. Listing 1 shows an example usage of Mockito, one of the most popular mocking libraries in Java [32]. We now explain each code block of the example: 1) At the beginning, one must define the class that should be mocked by Mockito. In our example, LinkedList is being mocked (line 2). The returned object (mockedList) is now a mock: it can respond to all existing methods in the LinkedList class. 2) As a second step, we provide a new behaviour to the newly instantiated mock. In the example, we inform the mock to return the string ‘first’ when mockedList.get(0) is invoked (line 5) and to throw a RuntimeException on mockedList.get(1) (line 7). 3) The mock is now ready to be used. In lines 10 and 11 the mock will answer method invocations with the values provided in step 2.

```java
 1. // 1: Mocking LinkedList
 2. LinkedList mockedList = mock(LinkedList.class);
 3.
 4. // 2: Instructing the mock object behaviour
 5. when(mockedList.get(0)).thenReturn("first");
 6.
 7. when(mockedList.get(1)).thenThrow(new RuntimeException());
 8.
 9. // 3: Invoking methods in the mock
10. System.out.println(mockedList.get(0));
11. System.out.println(mockedList.get(1));
```

Listing 1: Example of an object being mocked

Overall, whenever developers do not want to rely on the real implementation of a dependency (e.g., to isolate a unit test), they can simulate it and define the expected behavior using the aforementioned approach.

B. Motivating example

Sonarqube is a popular open source system that provides continuous code inspection [10]. In January of 2017, Sonarqube contained over 5,500 classes, 700k lines of code, and 2,034 test units. Among all test units, 652 make use of mock objects, mocking a total of 1,411 unique dependencies. Let us consider the class IssueChangeDao as an example. This class is responsible for accessing the database regarding changes in issues (changes and issues are business entities of the system). To that end, this class uses MyBatis [8], a Java library for accessing databases. There are four test units that use IssueChangeDao. The dependency is mocked in two of them; in the other two, the test creates a concrete instance of the database (to access the database during the test execution). Why do developers mock the dependency in some cases and not mock it in other cases? Indeed, this is a key question motivating this work. After manually analyzing these tests, we observed that:
- In Test 1, the class is concretely instantiated as this test unit performs an integration test with one of their web services. As the test exercises the web service, a database needs to be active.
- In Test 2, the class is also concretely instantiated as IssueChangeDao is the class under test.
- In both Test 3 and Test 4, test units focus on testing two different classes that use IssueChangeDao as part of their job.

This single example shows us that developers may have different reasons to mock or not mock a class.
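The realism-versus-focus trade-off behind this example can be made concrete with a small, self-contained Java sketch. The interface and class names below are invented for illustration (they are not Sonarqube's actual code), and a hand-rolled stub stands in for a Mockito mock so the example runs with the standard library alone.

```java
import java.util.*;

// Hypothetical stand-in for a DAO-style dependency
interface IssueStore {
    List<String> changesFor(String issueId);
}

// The "real" implementation would need a live database
class RealIssueStore implements IssueStore {
    public List<String> changesFor(String issueId) {
        throw new IllegalStateException("needs a live database");
    }
}

// Class under test: depends on IssueStore, not on a concrete database
class IssueReport {
    private final IssueStore store;
    IssueReport(IssueStore store) { this.store = store; }
    String summary(String issueId) {
        return issueId + ": " + store.changesFor(issueId).size() + " change(s)";
    }
}

public class MockTradeoff {
    public static void main(String[] args) {
        // Simulated dependency: gains focus, avoids the database entirely
        IssueStore stub = id -> Arrays.asList("opened", "assigned");
        System.out.println(new IssueReport(stub).summary("ISSUE-1")); // prints ISSUE-1: 2 change(s)
    }
}
```

Using RealIssueStore instead would exercise the production path (realism) but requires the external resource to be available, which is exactly the cost the paper discusses.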
In the remainder of this paper, we investigate patterns of how developers mock by analyzing the use of mocks in software systems, and we investigate their rationale by interviewing and surveying practitioners on their mocking practices.

III. RESEARCH METHODOLOGY

The goal of our study is to understand how and why developers apply mock objects in their test suites. To that end, we conduct quantitative and qualitative research focusing on four software systems and address the following questions: **RQ1:** What test dependencies do developers mock? When writing an automated test for a given class, developers can either mock or use a concrete instance of its dependencies. Different authors [28], [17] affirm that mock objects can be used when a class depends upon some infrastructure (e.g., file system, caching). We aim to identify what dependencies developers mock by means of manual analysis of source code from different systems. **RQ2:** Why do developers decide to (not) mock specific dependencies? We aim to find an explanation for the findings of the previous RQ. We interview developers from the analyzed systems and ask for an explanation of why some dependencies are mocked while others are not. Furthermore, we survey software developers with the goal of challenging the findings from the interviews. **RQ3:** Which are the main challenges experienced with testing using mocks? Understanding challenges sheds light on important aspects on which researchers and practitioners can effectively focus next. Therefore, we investigate the main challenges developers face using mocks by means of interviews and surveys.

A. Sample selection

We focus on projects that routinely use mock objects. We analyze projects that make use of Mockito, the most popular framework in Java with OSS projects [32].
We select three open source software projects (i.e., Sonarqube [10], Spring [11], VRaptor [13]) and a software system from an industrial organization we previously collaborated with (Alura [1]); Table I details their size. In the following, we describe their suitability to our investigation. Spring Framework. Spring provides an extensive infrastructural support for Java developers; its core serves as a base for many other offered services, such as dependency injection and transaction management. SonarQube. SonarQube is a quality management platform that continuously measures the quality of source code and delivers reports to its developers. VRaptor. VRaptor is an MVC framework that provides an easy way to integrate Java EE capabilities (such as CDI) and to develop REST webservices. Alura. Alura is a proprietary web e-learning system used by thousands of students and teachers in Brazil; it is a database-centric system developed in Java. B. Data Collection and Analysis The research method we used to answer our research questions follows a mixed qualitative and quantitative approach, which we depict in Figure 1: (1) We automatically collected all mocked and non-mocked dependencies in the test units of the analyzed systems, (2) we manually analyzed a sample of these dependencies with the goal of understanding their architectural concerns as well as their implementation, (3) we grouped these architectural concerns into categories, which enabled us to compare mocked and non mocked dependencies among these categories, (4) we interviewed developers from the studied systems to understand our findings, and (5) we enhanced our results in an on-line survey with 105 respondents. 1. Data collection. To obtain data on mocking practices, we first collected all the dependencies in the test units of our systems performing static analysis on their test code. 
To this aim, we created the tool MOCKEXTRACTOR [38], which implements the algorithm below: 1) We detect all test classes in the software system. As done in past literature (e.g., Zaidman et al. [43]), we consider a class to be a test when its name ends with ‘Test’ or ‘Tests’. 2) For each test class, we extract the (possibly extensive) list of all its dependencies. Examples of dependencies are the class under test itself, its required dependencies, and utility classes (e.g., lists and test helpers). 3) We mark each dependency as ‘mocked’ or ‘not mocked’. Mockito provides two APIs for creating a mock from a given class: (1) by making use of the @Mock annotation in a class field or (2) by invoking Mockito.mock() inside the test method. (Mockito can also generate spies, which are out of the scope of this paper. More information can be found in Mockito’s documentation: http://bit.ly/2kjtEif6.) Every time one of the two options is found in the code, we identify the type of the class that is mocked. The class is then marked as ‘mocked’ in that test unit. If a dependency appears more than once in the test unit, we consider it ‘mocked’. A dependency may be considered ‘mocked’ in one test unit, but ‘not mocked’ in another. 4) We mark dependencies as ‘not mocked’ by subtracting the mocked dependencies from the set of all dependencies.

2. Manual analysis. To answer what test dependencies developers mock, we analyzed the previously extracted mocked and non mocked dependencies. The goal of the analysis is to understand the main concern of the class in the architecture of the software system (e.g., a class is responsible for representing a business entity, or a class is responsible for persisting into the database). Defining the architectural concern of a class is not an easy task to automate, since it is context-specific [12], thus we decided to perform a manual analysis.
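The detection heuristics in steps 1 and 3 of the MOCKEXTRACTOR algorithm above can be roughly sketched as follows. This regex-based version is only an approximation of the tool's static analysis, written here for illustration; the method names and patterns are our own simplification, not the tool's implementation.

```java
import java.util.*;
import java.util.regex.*;

// Simplified sketch of MOCKEXTRACTOR-style detection (NOT the real tool):
// finds test classes by naming convention and mocked types by Mockito idioms.
public class MockScan {
    // Step 1: a class is a test when its name ends with 'Test' or 'Tests'
    static boolean isTestClass(String className) {
        return className.endsWith("Test") || className.endsWith("Tests");
    }

    // Step 3: mark types mocked via mock(X.class) or an @Mock field
    static Set<String> mockedTypes(String source) {
        Set<String> mocked = new LinkedHashSet<>();
        Matcher call = Pattern.compile("\\bmock\\(\\s*(\\w+)\\.class\\s*\\)")
                              .matcher(source);
        while (call.find()) mocked.add(call.group(1));
        Matcher field = Pattern.compile("@Mock\\s+(?:private\\s+|protected\\s+|public\\s+)?(\\w+)")
                               .matcher(source);
        while (field.find()) mocked.add(field.group(1));
        return mocked;
    }

    public static void main(String[] args) {
        String src = "@Mock private IssueChangeDao dao;\n"
                   + "LinkedList list = mock(LinkedList.class);";
        System.out.println(isTestClass("IssueChangeDaoTest")); // true
        System.out.println(mockedTypes(src)); // [LinkedList, IssueChangeDao]
    }
}
```

The 'not mocked' set of step 4 would then be the full dependency list minus this set.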
The first two authors of the paper conducted this analysis after having studied the architecture of the four systems. Due to the size of the total number of mocked and non mocked dependencies (~38,000), we analyzed a random sample. The sample is created with a confidence level of 95% and an error (E) of 5%, i.e., if in the sample a specific dependency is mocked f% of the times, we are 95% confident that it will be mocked f% ± 5% in the entire test suite. Since the projects belong to different areas and results can be completely different from each other, we created a sample for each project. We produced four samples, one belonging to each project. This gave us fine-grained information to investigate mock practices within each project. In Table I we show the final number of analyzed dependencies (844 + 1,334 = 2,178 dependencies). The manual analysis procedure was as follows:
- Each researcher was in charge of two projects. The selection was done by convenience: the second author was already familiar with the internal structure of VRaptor and Alura.
- All dependencies in the sample were listed in a spreadsheet to which both researchers had access. Each row contained information about the test unit in which the dependency was found and a boolean indicating whether that dependency was mocked.
- For each dependency in the sample, the researcher manually inspected the source code of the class. To fully understand the class’s architectural concern, researchers were allowed to navigate through any other relevant piece of code.
- After understanding the concern of that class, the researcher filled the “Category” column with what best describes the class’s concern. No categories were defined up-front. In case of doubt, the researcher first read the test unit code; if that was not enough, he then talked with the other researcher.
- At the end of each day, the researchers discussed together their main findings and some specific cases.

The full process took seven full days. The total number of categories was 116. We then started the second phase of the manual analysis, focused on merging categories.

3. Categorization. To group similar categories we used a technique similar to card sort [35]: (1) each category represented a card, (2) the first two authors analyzed the cards applying open (i.e., without predefined groups) card sort, (3) the researcher who created the category explained the reasons behind it and discussed a possible generalization (making the discussion more concrete by showing the source code of the class was allowed during the discussion), (4) similar categories were then grouped into a final, higher-level category, and (5) at the end the authors gave a name to each final category. After following this procedure for all the 116 categories, we obtained a total of 7 categories that describe the concerns of classes. The large difference between 116 and 7 is the result of most concerns being grouped into two categories: ‘Domain object’ and ‘External dependencies’. The former classes always represented some business logic of the system and had no external dependencies. The full list of the 116 categories is available in our on-line appendix [12].

![Fig. 1: The mixed approach research method applied.](image-url)

4. Interviews. We used the results from the previous RQ as an input to the data collection procedure of RQ2. We designed an interview whose goal was to understand why developers did mock some roles and did not mock other roles. The interview was semi-structured and was conducted by the first two authors of this paper. For each finding in the previous RQ, we made sure that the interviewee described why they do (or do not) mock that particular category, what the perceived advantages and disadvantages are, and any exceptions to this rule. Our full interview protocol is available in the appendix [12]. We conducted 3 interviews with active, prolific developers from 3 projects (unfortunately no developer from Sonarqube was available for an interview). Table II shows the interviewees’ details.

TABLE II: Profile of the interviewees
<table> <thead> <tr> <th>Project</th> <th>ID</th> <th>Role in the project</th> <th>Years of programming experience</th> </tr> </thead> <tbody> <tr> <td>Spring Framework</td> <td>D1</td> <td>Lead Developer</td> <td>25</td> </tr> <tr> <td>VRaptor</td> <td>D2</td> <td>Developer</td> <td>10</td> </tr> <tr> <td>Alura</td> <td>D3</td> <td>Lead Developer</td> <td>5</td> </tr> </tbody> </table>

We started each interview by asking general questions about mocking practices. More specifically, we were interested in understanding why and what classes they commonly mock. Afterwards, we focused on the results gathered by answering the previous RQ. We presented the interviewee with two tables: one containing the results of all projects (Figure 2) and another one containing only the results of the interviewee’s project. For each category, we presented the findings and solicited an interpretation (e.g., by explaining why it happens in their specific project and by comparing with what we saw in other projects). From a high-level perspective, we asked: 1) Can you explain this difference? Please, think about your experience with this project in particular. 2) We observe that your numbers are different when compared to other projects. In your opinion, why does it happen?
3) In your experience, when should one mock a <<category>>? Why? 4) In your experience, when should one not mock a <<category>>? Why? 5) Are there exceptions? 6) Do you know if your rules are also followed by the other developers in your project? Throughout the interview, one of the researchers was in charge of summarizing the answers. Before finalizing the interview, we revisited the answers with the interviewee to validate our interpretation of their opinions. Finally, we asked questions about challenges with mocking. Interviews were conducted via Skype and fully recorded. Each of them was manually transcribed by the researchers. With the full transcriptions, we performed card sorting [39], [20] to identify the main themes. As a complement to the research question, whenever feasible, we also validated interviewees’ perceptions by measuring them in their own software systems.

5. Survey. To challenge and expand the concepts that emerged during the previous phases, we conducted a survey. All questions were derived from the results of the previous RQs. The survey had four main parts. In the first part, we asked participants about their experience in software development and mocking. The second part asked participants how often they make use of mock objects in each of the categories found during the manual analysis. The third part asked participants how often they mock classes in specific situations, such as when the class is too complex or coupled. The fourth part focused on asking participants about challenges with mocking. Except for this last question, which was open-ended and optional, all the other questions were closed-ended and participants had to choose on a 5-point Likert scale. The survey was initially designed in English. We compiled a Brazilian Portuguese translation to reach a broader, more diverse population.
Before deploying the survey, we first performed a pilot of both versions with four participants; we improved our survey based on their feedback (the changes were all related to phrasing). We then shared our survey via Twitter (the authors tweeted in their respective accounts), among our contacts, and in developers’ mailing lists. The survey ran for one week. We analyzed the open questions by also performing card sorting. The full survey can be found in our on-line appendix [12]. We received a total of 105 answers from both the Brazilian Portuguese and English surveys. 21% of the respondents have between 1 and 5 years of experience, 64% between 6 and 15, and 15% have more than 15 years of experience. The most used programming language is Java (24%), the second is JavaScript (19%), and the third one is C# (18%). The mocking framework most used by the respondents is Mockito (33%), followed by Moq (19%) and Powermock (5%).

C. Threats to Validity

Our methodology may pose some threats to the validity of the results we report in Section IV. We discuss them here. 1) Construct validity: Threats to construct validity concern our research instruments. We developed and used MOCKEXTRACTOR to collect dependencies that are mocked in a test unit by means of static code analysis. As with any static code analysis tool, MOCKEXTRACTOR is not able to capture dynamic behavior (e.g., mock instances that are generated in helper classes and passed to the test unit). In these cases, the dependency would have been considered “non mocked”. We mitigate this issue by (1) making use of large random samples in our manual analysis, and (2) manually inspecting the results of MOCKEXTRACTOR in 100 test units, in which we observed that such cases never occurred, giving us confidence in the reliability of our data set. As each class was manually analyzed by only a single researcher and there could be divergent opinions despite the aforementioned discussion, we measured their agreement.
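The agreement figure reported in this kind of cross-validation is simple percent agreement (matching labels over total items). A minimal sketch of that computation, with made-up category labels, is:

```java
import java.util.*;

// Percent agreement between two raters over the same list of items.
// Labels below are illustrative, not the study's actual ratings.
public class Agreement {
    static double percentAgreement(List<String> rater1, List<String> rater2) {
        int match = 0;
        for (int i = 0; i < rater1.size(); i++) {
            if (rater1.get(i).equals(rater2.get(i))) match++;
        }
        return 100.0 * match / rater1.size();
    }

    public static void main(String[] args) {
        List<String> a = Arrays.asList("Domain", "Database", "Web Service", "Domain");
        List<String> b = Arrays.asList("Domain", "Database", "External", "Domain");
        System.out.println(percentAgreement(a, b)); // 75.0
    }
}
```

Percent agreement does not correct for chance (unlike Cohen's kappa), but it matches the 89% figure the authors report.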
Each researcher analyzed 25 instances that were made by the other researcher in both of his two projects, totaling 100 validated instances as seen in Figure 1, Point 2. The final agreement on the 7 categories was 89%. 2) Internal validity: Threats to internal validity concern factors we did not consider that could affect the variables and the relations being investigated. In our study, we interview developers from the studied software to understand why certain dependencies are mocked and not mocked. Clearly, a single developer does not know all the implementation decisions in a software system. We mitigate this issue by (1) showing the data collected in RQ1 first and (2) not asking questions about the overall categories that we manually coined in RQ1. In addition, their opinions may also be influenced by other factors, such as current literature on mocking (which could may have led them to social desirability bias [33]) or other projects that they participate in. To mitigate this issue, we constantly reminded interviewees that we were discussing the mocking practices specifically of their project. At the end of the interview, we asked them to freely talk about their ideas on mocking in general. 3) External validity: Threats to external validity concern the generalization of results. Our sample contains four Java systems (one of them closed source), which is small compared to the overall population of software systems that make use of mocking. We reduce this issue by collecting the opinion of 105 developers from a variety of projects about our findings. Further research in different projects in different programming languages should be conducted. Furthermore, we do not know the nature of the population that responded to our survey, hence it might suffer from a self- selection bias. 
We cannot calculate the response rate of our survey; however, from the responses we see a general diversity in terms of software development experience that appears to match our target population. IV. RESULTS In this section, we present the results of our research questions aimed at understanding how and why developers apply mock objects in their test suites, as well as which challenges they face in this context. RQ1. What test dependencies do developers mock? As we show in Table I, we analyzed 4,419 test units, of which 1,122 (25.39%) contain at least one mock object. Of the 38,313 dependencies collected from all test units, 35,745 (93.29%) are not mocked while 2,568 (6.71%) are mocked. As the same dependency may appear more than once in our dataset (i.e., a class can appear in multiple test units), we calculated the unique dependencies in our dataset. We obtained a total of 11,824 not mocked and 938 mocked unique dependencies. Interestingly, the intersection of these two sets reveals that 650 dependencies (70% of all dependencies mocked at least once) were both mocked and not mocked in the test suite. In Figure 2, we show how often each role is mocked in our sample in each of the seven categories found during our manual analysis. One may note that “databases” and “web services” could also fit in the “external dependency” category; we separate these two categories as they appeared more frequently than other types of external dependencies. In the following, we explain each category: - **Domain object**: Classes that contain the (business) rules of the system. Most of these classes depend on other domain objects, and they do not depend on any external resources. The definition of this category fits well with the definitions of the Domain Object [15] and Domain Logic [16] architectural layers. Examples are entities, services, and utility classes. - **Database**: Classes that interact with an external database.
These classes can be either an external library (such as the Java SQL, JDBC, Hibernate, or ElasticSearch APIs) or a class that depends on such external libraries (e.g., an implementation of the Data Access Object [16] pattern). - **Native Java libraries**: Libraries that are part of Java itself. Examples are classes from Java I/O and Java Util (Date, Calendar). - **Web Service**: Classes that perform some HTTP action. As with the database category, this dependency can be either an external library (such as Java HTTP) or a class that depends on such a library. - **External dependency**: Libraries (or classes that make use of libraries) that are external to the current project. Examples are the Jetty and Ruby runtimes, JSON parsing libraries (such as GSON), e-mail libraries, etc. - **Test support**: Classes that support testing itself. Examples are fake domain objects, test data builders, and web services for tests. ![Fig. 2: How often each architectural role is mocked and not mocked in analyzed systems (N = 2,178)](image) - **Unresolved**: Dependencies that we were not able to resolve, for example, classes belonging to a sub-module of the project for which the source code is not available. Numbers are quite similar when we look at each project separately. Exceptions are databases (Alura and SonarQube mock ~60% of database dependencies, while Spring mocks 94%) and domain objects (while other projects mock them ~30% of the time, SonarQube mocks 47%). We present the numbers for each project in our online appendix [12]. We observe that Web Services and Databases are the most mocked dependencies. On the other hand, there is no clear trend for Domain objects: the numbers show that 36% of them are mocked. Even though these findings are aligned with the technical literature [28], [23], further investigation is necessary to understand the real rationale behind the results. In contrast, Test support and Java libraries are almost never mocked.
The former is unsurprising, since the category includes fake classes or classes that are created to support the test itself. RQ2. Why do developers decide to (not) mock specific dependencies? In this section, we summarize the answers obtained during our interviews and surveys. We refer to the interviewees by their ID in Table II. **Mocks are often used when the concrete implementation is not simple.** All interviewees agree that certain dependencies are easier to mock than to use through their concrete implementation. They mentioned that classes that are highly coupled, complex to set up, contain complex code, perform a slow task, or depend on external resources (e.g., databases, web services, or external libraries) are candidates to be mocked. D2 gives a concrete example: “It is simpler to set up a in-memory list with elements than inserting data into the database.” Interviewees affirmed that whenever they can completely control the input and output of a class, they prefer to instantiate the concrete implementation of the class rather than mocking it. As D1 stated: “if given an input [the production class] will always return a single output, we do not mock it.” In Figure 3, we see that survey respondents also often mock dependencies with such characteristics: 48% of respondents said they always or almost always mock classes that are highly coupled, and 45.5% when the class is difficult to set up. Contrary to our interviewees, survey respondents report mocking less often when it comes to slow or complex classes (50.4% and 34.5% of respondents affirm to never or almost never mock in such situations, respectively). **Mocks are not used when the focus of the test is the integration.** Interviewees explained that they do not use mocks when they want to test the integration with an external dependency itself (e.g., a class that integrates with a database). In these cases they prefer to perform a real interaction between the unit under test and the external dependency.
D1 said: “if we mock [the integration], then we wouldn’t know if it actually works. [...] I do not mock when I want to test the database itself; I wanna make sure that my SQL works. Other than that, we mock.” This is also confirmed in our survey (Figure 3), as our respondents also almost never mock the class under test. The opposite scenario is when developers want to **unit test a class that depends on a class that deals with external resources** (e.g., a class that integrates with a database). In this case, developers want to test a single unit without the influence of the external dependencies, so they evaluate whether they should mock those dependencies. D2 said: “in unit testing, when the unit I wanna test uses classes that integrate with the external environment, we do not want to test if the integration works, but if our current unit works, [...] so we mock the dependencies.” **Interfaces are mocked rather than one of their specific implementations.** Interviewees agree that they often mock interfaces. They explain that an interface can have several implementations and they prefer to use a mock so as not to rely on a specific one. D1 said: “when I test operations with side effects [sending an email, doing a HTTP Request] I create an interface that represents the side effect and [instead of using a specific implementation] I mock directly the interface.” **Domain objects are usually not mocked.** According to the interviewees, domain objects are often plain old Java objects, commonly composed of a set of attributes, getters, and setters. These classes also commonly do not deal with external resources. Thus, these classes tend to be easy to instantiate and set up. However, if a domain object is complex (i.e., contains complicated business logic or is not easy to set up), developers may mock it.
Interviewee D2 says: “[if class A depends on the domain object B] I’d probably have a BTest testing B, so this is a green light for me to know that I don’t need to test B again.” All interviewees also mention that the same rule applies if the domain object is highly coupled. **Figure 4** shows that answers about mocking **Domain objects** vary. Interestingly, there is a slight trend towards not mocking them, in line with our findings during the interviews and in RQ1. **Native Java objects and libraries are usually not mocked.** According to D1, native Java objects are data holders (e.g., String and List) that are easy to instantiate with the desired value; thus, there is no need for mocking. D1 points out that some native classes cannot even be mocked, as they can be final (e.g., String). D2 discussed the question from a different perspective: According to him, developers can trust the provided libraries, even though they are “external”; thus, there is no need for mocking. Both D1 and D2 made an exception for the Java I/O library: According to them, dealing with files can also be complex, and thus they prefer to mock. D3, on the other hand, affirms that in their software they commonly do not mock I/O, as they favor integration testing. These findings match our data from RQ1, where we see that **Native Java Libraries** are almost never mocked. Respondents had a similar perception: 82% of them affirm to never or almost never mock such dependencies. **Database, web services, and external dependencies are slow, complex to set up, and are good candidates to be mocked.** According to the interviewees, that is why mocks should be applied to such dependencies. D2 said: “Our database integration tests take 40 minutes to execute, it is too much”. These reasons also match the technical literature [28], [23].
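The interviewees’ distinction can be made concrete with a small sketch. The example below uses Python’s `unittest.mock` (analogous in spirit to the Mockito framework the study’s subjects use in Java); `InvoiceService` and its repository are hypothetical classes invented for illustration, not taken from the studied systems.

```python
from unittest.mock import Mock

# Hypothetical domain class whose unit test should not touch a real database.
class InvoiceService:
    def __init__(self, repository):
        self.repository = repository  # e.g., a DAO wrapping slow SQL access

    def total_due(self, customer_id):
        invoices = self.repository.find_open_invoices(customer_id)
        return sum(inv["amount"] for inv in invoices)

# Unit test: the repository (a database dependency) is replaced by a mock,
# so the test exercises only InvoiceService's own logic, quickly and in memory.
repo = Mock()
repo.find_open_invoices.return_value = [{"amount": 10.0}, {"amount": 32.5}]

service = InvoiceService(repo)
assert service.total_due(42) == 42.5
repo.find_open_invoices.assert_called_once_with(42)
```

An integration test of the repository itself, by contrast, would use the real database rather than a mock, matching D1’s remark about wanting to “make sure that my SQL works.”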
All participants have a similar opinion when it comes to other kinds of external dependencies/libraries, such as CDI or a serialization library: When the focus of the testing is the integration itself, they do not mock; otherwise, they mock. D2 said: “When using CDI [Java’s Contexts and Dependency Injection API], it is really hard to create a concrete [CDI] event: in this case we usually prefer to mock it”. Two interviewees (D1 and D2) affirmed that libraries commonly have extensive test suites, so developers do not need to “re-test” them. D3 had a different opinion: Developers should re-test the library, as libraries cannot always be trusted. In Figure 4, we observe that respondents always or almost always mock **Web services (~82%), External dependencies (~79%), and Databases (~71%).** This result confirms the previous finding that when developers do not want to test the integration itself, they prefer to mock these dependencies. **RQ2.** The architectural role of the class is not the only factor developers take into account when mocking. Respondents report mocking when using the concrete implementation would not be simple, e.g., when the class would be too slow or too complex to set up. **RQ3.** What are the main challenges experienced with testing using mocks? We summarize the main challenges that appeared in the interviews and in the answers to our survey question about challenges (for which we received 61 answers). The categories below represent the main themes that emerged during card sorting. **Dealing with coupling.** Mocking practices involve different coupling issues. On the one hand, the usage of mocks in tests increases the coupling between the test and the production code. On the other hand, the coupling among the production classes themselves can also be challenging for mocking.
According to a participant, “if code has not been written with proper decoupling and dependency isolation, then mocking is difficult (if not impossible).” This matches the opinion of another participant, who mentions no longer having challenges after having “learned how to separate concepts.” **Getting started with mocks.** Mocks can still be a new concept for many developers. Hence, their usage may require experienced developers to teach junior developers (who, according to another participant, usually tend to mock too much). In particular, one participant said that mock objects are currently a new concept for him/her, and thus s/he is having some trouble understanding them. **Mocking in legacy systems.** Legacy systems can pose challenges for users of mocks. According to a respondent, testing a single unit in such systems may require too much mocking (“to mock almost the entire system”). Another participant even mentions the need to use PowerMock [9] (a framework that enables Java developers to mock constructs that would otherwise not be mockable without bytecode manipulation, e.g., final classes and static methods) in cases where the class under test is not designed for testability. On the other hand, mocking may be the only way to perform unit testing in such systems. According to a participant: “in legacy systems, where the architecture is not well-decoupled, mocking is the only way to perform some testing.” **Non-testable/Hard-to-test classes.** Some technical details may impede the usage of mock objects. Besides the lack of design for testability, participants provided different examples of implementation details that can interfere with mocking. Respondents mentioned the use of static methods in Java (which are not mockable by default), file uploads in PHP, interfaces in dynamic languages, and the LINQ language feature in C#. **The relationship between mocks and good quality code.** Mocks may reduce test readability and be difficult to maintain.
Survey respondents state that the excessive use of mocks is indicative of poorly engineered code. Surprisingly, during the interviews D1, D2, and D3 mentioned the same example in which using mocks can hide a deeper problem in the system’s design: “when you have to test class A, and you notice that it has 10/15 dependencies, you can mock it. However, you are hiding a problem: a class with 15 dependencies is probably a smell in the code.” In this scenario they find it much easier to mock the dependency, as it is highly coupled and complex. However, they say this is a symptom of a badly designed class. D3 added: “good [production] code ease the process of testing. If the [production] code structure is well defined, we should use less mocks”. Interviewee D3 also said: “I always try to use as less mocks as possible, since in my opinion they hide the real problem. Furthermore, I do not remember a single case in which I found a bug using mocks”. A survey respondent also shares the point that the use of mocks does not guarantee that your code will behave as expected in production: “You are always guessing that what you mock will work (and keep working) that way when using the real objects.” **Unstable dependencies.** A problem when using mocks is keeping the behavior of the mock compatible with the behavior of the original class, especially when the class is poorly designed or highly coupled. As the production class tends to change often, the mock object becomes unstable and, as a consequence, more prone to change. **RQ3.** The use of mocks poses several challenges. Among them, a major problem is keeping the behavior of the mock compatible with the original class. Furthermore, mocks may hide important design problems. Finally, while mocking may be the only way to test legacy systems, using mocks in such systems is not a straightforward task. V. **Discussion** In this section we discuss the main findings and their implications for both practitioners and future research.
We also present the results of a debate about our findings with a main developer of Mockito. Finally, we provide an initial discussion on quantitatively mining mocking practices. **A. Empirical evidence on mocking practices** Mocking is a popular topic among software developers. Due to its importance, different authors have written technical literature on mock objects (e.g., [19], [31], [18], [34], [24]). ![Fig. 3: Reasons to use mock objects (N = 105)](image1) ![Fig. 4: Frequency of mocking objects per category (N = 105)](image2) First, we provide concrete evidence on which of the existing practices in the technical literature developers actually apply. For example, Meszaros [31] suggests that components that make testing difficult are candidates to be mocked. Our research confirms this by showing that developers also believe these dependencies should be mocked (RQ2) and that, in practice, developers do mock them (RQ1). Second, we provide a deeper investigation into how and why developers use mock objects. As a side effect, we also notice how the use of mock objects can drive the developer’s testing strategy. For instance, mocking an interface rather than using one concrete implementation makes the test “independent of a specific implementation”, as the test exercises the abstract behavior that is offered by the interface. Without the usage of a mock, developers would have to choose one out of the many possible implementations of the interface, making the test more coupled to that specific implementation. The use of mock objects can also drive developers towards a better design: Our findings show that a class that requires too much mocking could have been better designed to avoid that. Interestingly, the idea of using the feedback of the test code to improve the quality of production code is popular among TDD practitioners [14]. Third, we provide a list of challenges that can be tackled by researchers, practitioners, and tool makers.
Most challenges faced by developers are purely technical, such as applying mocks in legacy systems and in poorly-designed classes, or dealing with unstable production classes. Interestingly, none of the participants complained about the framework itself (e.g., missing features or bugs). B. Discussing with a developer from Mockito To get an even deeper understanding of our results and to challenge our conclusions, we interviewed a developer from Mockito, showing him the findings and discussing the challenges. We refer to him as D4. D4 agreed with the findings regarding what developers should mock: According to him, databases and external dependencies should be mocked when developers do not test the integration itself, while Java libraries and data holder classes should never be mocked. Furthermore, D4 also approved of what we discovered regarding mocking practices. He affirmed that a good practice is to mock interfaces instead of real classes and that developers should not mock the unit under test. When we asked whether Mockito could provide a feature to ease the mocking process for any of the analyzed categories (Figure 2), he stated: “If someone tells us that s/he is spending 100 boiler-plate lines of code to mock a dependency, we can provide a better way to do it. [...] But for now, I can not see how to provide specific features for databases and web services, as Mockito only sees the interface of the class, and not its internal behavior.” Afterwards, we focused on the challenges, as we conjecture that this is the most important and useful part for practitioners and future research, and that his experience can shed light on them. D4 agreed with all the challenges specified by our respondents.
When discussing how Mockito could help developers with the coupling challenges (unstable dependencies, highly coupled classes), he affirmed that the tool itself cannot help and that the issue should be fixed in the production class: “When a developer has to mock a lot of dependencies just to test a single unit, he can do it! However, it is a big red flag that the unit under test is not well designed.” This reinforces the relationship between the excessive use of mocks and code quality. When we discussed possible support for legacy systems in Mockito, D4 said that the Mockito developers have an internal philosophical debate: They want to keep a clear line on what the framework should and should not do. Unsupported features, such as the possibility of mocking a static method, would enable developers to test their legacy code more easily. However, he stated: “I think the problem is not adding this feature to Mockito, probably it will require just a week of work, the problem is: should we really do it? If we do it, we allow developers to write bad code.” He also said that final classes can be mocked in Mockito 2.0; interestingly, the feature was not motivated by a willingness to ease the testing of legacy systems, but by developers using the Kotlin language [4], in which every class is final by default. Regarding the challenge of getting started with mocks, D4 mentioned that the Mockito documentation is already extensive and provides several examples of how to best use the framework. However, according to him, knowing what should and should not be mocked comes with experience. C. Quantitatively mining mocking practices Our study sheds light on some of the most used practices of mocking objects for testing and their reasons. Work can be done to check and generalize some of the answers given by developers by means of software data mining. This would have the advantage of a more objective view and quicker generalizability to other systems.
We take a first step in this direction by conducting an initial analysis to test the waters and see whether some of our qualitative findings can be confirmed or denied by means of software data mining. In the following paragraphs, we discuss the results of our initial analysis and we provide possible alternatives for mining this information from code repositories. **The unit under test is never mocked.** To check this assertion, we automatically analyzed all test classes. For each test unit, we verified whether the unit under test (e.g., class A in the test unit ATest) has been mocked or not. Results show that, across the ~38,000 analyzed dependencies, the unit under test is never mocked in any of the projects. **Unless it is the unit under test, database dependencies are always mocked.** To check this assumption, for each database dependency (information retrieved from our previous manual analysis in RQ1) outside its own test, we counted the number of times the dependency was not mocked. In the case of Alura, we found that 90% of database dependencies are mocked when not in their specific test unit. When extending this analysis to all the projects, we obtain an average of 81%. **Complex and coupled classes should be mocked.** We take into account two metrics: CBO (Coupling Between Objects) and McCabe’s complexity [30]. We chose these metrics since they were widely discussed during the interviews. Furthermore, as pointed out in the surveys, developers mock when classes are very coupled or difficult to set up. With the metric values for each production class in the four systems, we compare the values for classes that are mocked with the values for classes that are not mocked.
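A comparison of this kind reduces to an effect-size computation over two samples of metric values. The sketch below is a pure-Python implementation of Cliff’s delta (one common non-parametric effect size); the CBO values are made up for illustration and are not the study’s data.

```python
def cliffs_delta(xs, ys):
    """Cliff's delta effect size: the fraction of (x, y) pairs with x > y
    minus the fraction with x < y; it ranges from -1 to 1, with values
    near 0 meaning the two samples largely overlap."""
    greater = sum(1 for x in xs for y in ys if x > y)
    less = sum(1 for x in xs for y in ys if x < y)
    return (greater - less) / (len(xs) * len(ys))

# Illustrative (made-up) CBO values for mocked vs. non-mocked classes.
mocked_cbo = [3, 5, 8, 12, 6]
not_mocked_cbo = [4, 7, 9, 15, 20]

d = cliffs_delta(mocked_cbo, not_mocked_cbo)
assert -1.0 <= d <= 1.0
```

By the commonly used thresholds, |d| below roughly 0.147 is read as “negligible”, which is how the study interprets its observed deltas of −0.121 and −0.166.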
In general, as a class can be mocked and not mocked multiple times, we apply a simple heuristic to decide to which category it should belong: If the class has been mocked more than 50% of the time, we put it in the ‘mocked’ category, and vice-versa (e.g., if a class has been mocked 5 times and not mocked 3 times, it is categorized as ‘mocked’). To compare the two sets, we use the Wilcoxon rank sum test [42] (with a confidence level of 95%) and Cliff’s delta [22] to measure the effect size. We choose Wilcoxon since it is a non-parametric test (it does not make any assumption about the underlying data distribution). As a result, we see that mocked and non-mocked classes are similar in terms of coupling: The mean coupling of mocked classes is 5.89 with a maximum of 51, while the mean coupling of non-mocked classes is even slightly higher (7.131) with a maximum of 187. From the Wilcoxon rank sum test and the effect size, we observe that the overall difference is negligible (Wilcoxon p-value<0.001, Cliff’s delta=−0.121). The same happens for the complexity metric: The mean complexity of mocked classes is 10.58 with a maximum of 89,000, while the mean complexity of non-mocked classes is 16.42 (max 420). The difference is also negligible (Wilcoxon p-value=5.945e−07, Cliff’s delta=−0.166). We conjecture that the chosen code metrics are not enough to predict whether a class should be mocked. Future research needs to be conducted to understand how code metrics are related to mocking decisions. There are many other findings that can be verified using quantitative studies (e.g., are slow tests mocked more often? How much faster are tests that use mocks?). Here we simply propose an initial analysis to show its feasibility. Further research can be designed and carried out to devise approaches to quantitatively evaluate mocking practices. VI. RELATED WORK Despite the widespread usage of mocks, very few studies have analyzed current mocking practices. Mostafa et al.
[32] conducted an empirical study on more than 5,000 open source software projects from GitHub, analyzing how many projects use a mocking framework and which Java APIs are the most mocked. The results of this study show that 23% of the projects use at least one mocking framework and that Mockito is the most widely used (70%). Marri et al. [29] investigated the benefits of using mock objects. The study identifies the following two benefits: 1) mock objects enable unit testing of code that interacts with external APIs related to the environment, such as a file system, and 2) they enable the generation of high-covering unit tests. Taneja et al. [40] stated that automatic techniques to generate tests face two significant challenges when applied to database applications: (1) they assume that the database the application under test interacts with is accessible, and (2) they usually cannot create the necessary database states as a part of the generated tests. For these reasons, they proposed an “Automated Test Generation” approach for database applications using mock objects, demonstrating that with this technique they could achieve better test coverage. Karlesky et al. [25] applied Test-Driven Development and Continuous Integration using mock objects to embedded software, obtaining an order of magnitude or more reduction in software flaws, predictable progress, and measurable velocity for data-driven project management. Kim et al. [26] stated that unit testing within the embedded systems industry poses several unique challenges: software is often developed on a different machine than the one it will run on, and it is tightly coupled with the target hardware. Their study shows how unit testing techniques and mocking frameworks can facilitate the design process, increase code coverage, and improve protection against regression defects. Mackinnon et al.
[28] stated that using Mock Objects is the only way to unit test domain code that depends on state that is difficult or impossible to reproduce. They show that the usage of mocks encourages better-structured tests and reduces the cost of writing stub code, with a common format for unit tests that is easy to learn and understand. VII. CONCLUSION Mocking is a common testing practice among software developers. However, there is little empirical evidence on how developers actually apply the technique in their software systems. We investigated how and why developers currently use mock objects. To that end, we studied three OSS projects and one industrial system, interviewed three of their developers, surveyed 105 professionals, and discussed the findings with a main developer of the leading Java mocking framework. Our results show that developers tend to mock dependencies that make testing difficult, i.e., classes that are hard to set up or that depend on external resources. In contrast, developers do not often mock classes that they can fully control. Interestingly, a class being slow is not an important factor for developers when mocking. As for challenges, developers affirm that the most important ones are mostly technical: dealing with unstable dependencies, the coupling between the mock and the production code, legacy systems, and hard-to-test classes. Our future agenda includes understanding the relationship between code quality metrics and the use of mocking, as well as the role of software evolution in developers’ mocking practices.
Chapter 4 Z and Set-based specification 4.1 Overview The specification language Z (pronounced “Zed”) has goals similar to those of Larch, but based on the model of set theory. In Z, everything is a set. Sets, with the usual operations of union, intersection, and membership, familiar from high-school math and introductory Discrete Math courses, are pushed to the limit and augmented by adding new notations, so that realistic systems can be specified. In Z very brief specifications convey large amounts of information. Because it is rich in notation and unfamiliar symbols, the language is difficult to present both briefly and understandably. To the uninitiated, a Z specification can appear daunting. Some of the design goals for Z resemble those of Larch, in particular, the possibility of building up a complex specification in stages. However, some of those stages differ. In Z it is natural to separate the declarations and creation of the state from any invariant properties of states, and separate those from any dynamic operations. In Larch, it will be recalled, it was natural to add functionality to operations in stages, and to define their interrelationships in different stages. Z has a highly developed calculus for combining objects of the language, and also for refining an abstract specification into one that is closer to an implementation. On the other hand, there is no concept of an interface language, and the connection between a Z specification and a code-level implementation is not treated within the Z methodology. Particular effort has been invested by the developers of Z, at the Oxford Programming Research Group, in applying the language to complex examples. Some of these have been specified in cooperation with industry, and demonstrate an impressive variety of application areas. Several books and tutorials on the language have been prepared, as part of the extensive presentation effort of the developers. 
Z also has its own Usenet group, with over 20,000 regular readers worldwide. Somewhat less emphasis has been put on tools that support Z. There are macros that support writing in Z style within LaTeX, and simple tools that enable type checking, syntax checks, and cross references. However, there is no serious theorem-proving effort to semiautomatically justify specifications or refinements. Thus the implications of a specification need to be analyzed by hand.

Below, major parts of the notation are presented briefly. The luxury of a gradual exposure to the notation in a book-length presentation cannot be employed here, so most of the motivation and use of the symbols will be postponed to several larger examples later in the chapter.

### 4.2 Schemas and some notation for sets

Z has one notation for declaring all objects: types, variables, functions, and operations are all instances of *schemas*. A schema syntactically consists of a variable declaration part and a predicate. A schema $S$ may be written

$$S = [declarations \mid predicate]$$

Alternatively, a two-dimensional version may be used, for clarity and emphasis. This version is written as

```
 S
┌─────────────────
│ declarations
├─────────────────
│ predicate
└─────────────────
```

Whenever a schema $S$ has been defined, the notation $w : S$ can be used to indicate an instance of $S$ to be called $w$; roughly, $w$ is an object of type $S$, and this is the form of the declarations in the first part of a schema. If $x$ appears in the declaration part of $S$, and $w$ is an instance of $S$, then the $x$ component of $w$ is denoted either as $w_x$ or $x(w)$.

For the predicate part, a restriction on the variables from the declaration is given in a variant of first-order logic. Standard logical connectives such as $\land$, $\lor$, or implication ($\Rightarrow$) can be used. Existential quantification is denoted by

$$\exists x : T \bullet P$$

This is the assertion "there exists an $x$ of type $T$ that satisfies $P$." Within a schema, a standard set notation is used.
Thus, $$\{ x : T \mid P \}$$ denotes "the set that includes all $x$ of type $T$ that satisfy $P$." One convenient extension of this notation allows defining sets of complex objects. When a $\bullet$ appears to the right within a set definition, it will be followed by an expression defining the elements composing the set. Thus, $$\{ x : T \mid P \bullet r \}$$ means "the set of possible values of the expression $r$ such that given the declaration $x : T$, $P$ holds." For example, $$\{ x : Integer \mid 1 \leq x \leq 4 \bullet x^3 \}$$ is the set $\{1, 8, 27, 64\}$.

A few other common set notations include $\mathbb{P}S$, the power set of $S$, namely, the set of all subsets of $S$. If $S = \{a, b, c\}$, then the power set of $S$ is $\{\{\}, \{a\}, \{b\}, \{c\}, \{a, b\}, \{a, c\}, \{b, c\}, \{a, b, c\}\}$. A separate notation, $\mathbb{F}S$, denotes the finite subsets of $S$, even if $S$ itself is infinite. $\#S$ denotes the number of elements in a finite set $S$, and is undefined for infinite sets.

### 4.3 Relations and functions

Before a specification can be made using even a fragment of Z, the notation for relations and functions must be introduced. As noted in every introductory course in set theory, these can be viewed as special cases of sets. A relation is nothing more than a set of ordered tuples. In all of the continuation, we shall consider only binary relations, so a relation $R$ can be described by a set of ordered pairs. If $R$ is a relation between elements of a set $X$ and elements of a set $Y$, we can use the Cartesian product notation $X \times Y$ to denote all possible pairs $\{(x, y)\}$, and thus $R : \mathbb{P}(X \times Y)$; the pair $(x, y)$ is in the set $R$ if $x$ and $y$ satisfy the relation. Special notation used for relations includes $xRy$ instead of $(x, y) \in R$, $R : X \leftrightarrow Y$ instead of $R : \mathbb{P}(X \times Y)$, and $x \mapsto y$ (read as "$x$ maps to $y$") instead of $(x, y)$.
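The comprehension-with-expression and power-set notations map directly onto Python set comprehensions. A minimal sketch (the `powerset` helper name is our own, not Z or Python vocabulary):

```python
from itertools import chain, combinations

# {x : Integer | 1 <= x <= 4 . x^3} as a Python set comprehension:
# the range plays the role of the declaration, the expression builds elements.
cubes = {x ** 3 for x in range(1, 5)}
# cubes is {1, 8, 27, 64}, as in the text's example

# P S, the power set of S, built from combinations of every size.
def powerset(s):
    elems = list(s)
    return {frozenset(c) for c in
            chain.from_iterable(combinations(elems, r)
                                for r in range(len(elems) + 1))}

subsets = powerset({'a', 'b', 'c'})
# 8 subsets, from the empty set up to {a, b, c} itself
```

`frozenset` is used because Python's mutable `set` is not hashable, so a "set of sets" needs immutable members.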
If we consider the $fs$ (father-son) relation for Biblical personalities, then $(\text{Abraham}, \text{Isaac})$, $\text{Isaac} \mapsto \text{Jacob}$, and $\text{Jacob} \ fs \ \text{Joseph}$ are all ways of describing part of the relation.

We can describe the composition of relations directly in terms of their set representation. If a relation $R : X \leftrightarrow Y$ and a relation $S : Y \leftrightarrow Z$ are composed, the result is a relation $T : X \leftrightarrow Z$ that is defined only when the second component of $R$ coincides with the first component of $S$. In notation already presented, it is the set $$\{x : X; y : Y; z : Z \mid x \mapsto y \in R \land y \mapsto z \in S \bullet x \mapsto z\}$$ This operation on relations is denoted by $\circ$. If a relation $brothers$ had been defined previously, then the relation uncle-nephew contains the composition of $brothers$ and $fs$, written $brothers \circ fs$. When a relation $R$ is composed with itself, this is denoted as $R^2$. The grandfather-grandson relation clearly contains $fs^2$, but in general could have additional entries (through a daughter who is also a mother).

If we consider a relation from a 'source' $X$ to a 'target' $Y$ (so $R : X \leftrightarrow Y$), then define $$\text{dom } R = \{x : X \mid \exists y : Y \bullet (x, y) \in R\}$$ That is, the domain of $R$ is the set of those elements of $X$ actually related by $R$ to at least one element of $Y$. Similarly, define $$\text{ran } R = \{ y : Y \mid \exists x : X \bullet (x, y) \in R \}$$ The range of $R$ is the set of elements of $Y$ related by $R$ to some element of $X$. Note that unless a relation is symmetric (as for $brothers$), the order of the elements in the pairs is important. For any relation $R$, the inverse relation $R^\sim$ is obtained by reversing the order of every pair in $R$.
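These operations on relations-as-sets-of-pairs are easy to mirror in Python. A sketch, with helper names (`compose`, `dom`, `ran`, `inverse`) of our own choosing:

```python
# A binary relation modeled as a set of ordered pairs.
fs = {('Abraham', 'Isaac'), ('Isaac', 'Jacob'), ('Jacob', 'Joseph')}

def compose(R, S):
    # R o S in the order used here: pairs (x, z) such that
    # (x, y) is in R and (y, z) is in S for some common y
    return {(x, z) for (x, y1) in R for (y2, z) in S if y1 == y2}

def dom(R):
    return {x for (x, _) in R}

def ran(R):
    return {y for (_, y) in R}

def inverse(R):   # R~ : every pair reversed
    return {(y, x) for (x, y) in R}

grandfather = compose(fs, fs)
# fs^2 = {('Abraham', 'Jacob'), ('Isaac', 'Joseph')}
```

As the text notes, $fs^2$ only contains the grandfather pairs reachable through a son; a real grandfather relation could be larger.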
A less common operator that restricts a relation $R$ to a set $S$ is defined by $$S \triangleleft R = \{ x : X; y : Y \mid x \in S \land x \mapsto y \in R \bullet x \mapsto y \}$$ This is the part of $R$ that 'starts' in $S$; more precisely, the pairs from $R$ with a first component in $S \cap \text{dom } R$. If $R = \{(0, 1), (1, 1), (1, 0), (0, 2)\}$, then $\{0\} \triangleleft R$ is $\{(0,1), (0,2)\}$. The related domain anti-restriction operator $⩤$ restricts $R$ to those pairs with domain elements *not* in the set $S$. Similarly, $R \triangleright S$ is the part of $R$ with range elements in $S$.

A partial function is simply a relation for which each domain element relates to exactly one range element. Z provides a rich variety of arrow symbols to express special types of functions, both in declarations and in the predicates of schemas. In Table 4.1, these arrows and their meanings are summarized. The child-mother relation would most naturally be a partial function $cm : A ⇸ B$ from a group of people $A$ to a group of people $B$, assuming we mean the biological mother (otherwise, because of step-mothers, it is a relation but not necessarily a function). It is partial because the mother of some member of $A$ might not be in $B$. If the ages of all members of a group $G$ are given, this is a total function from $G$ to the natural numbers (written $\text{age} : G \rightarrow \mathbb{N}$), since every member of the group is in the domain of the function. A function (partial or total) is one-to-one (written $⤔$ or $↣$, respectively) if each range element is related to at most one domain element. Only if a relation $R$ is a one-to-one function is the inverse relation $R^\sim$ also a function.
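The three restriction operators can be sketched on the same set-of-pairs representation (helper names `dres`, `ndres`, `rres` are our own):

```python
R = {(0, 1), (1, 1), (1, 0), (0, 2)}

def dres(S, R):   # S <| R : keep pairs whose first component is in S
    return {(x, y) for (x, y) in R if x in S}

def ndres(S, R):  # domain anti-restriction: first component NOT in S
    return {(x, y) for (x, y) in R if x not in S}

def rres(R, S):   # R |> S : keep pairs whose second component is in S
    return {(x, y) for (x, y) in R if y in S}

# The example from the text: {0} <| R keeps the pairs starting at 0.
assert dres({0}, R) == {(0, 1), (0, 2)}
```

Restriction and anti-restriction by the same set always partition the relation, which the test below also checks.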
Assuming the situation at a given time is considered, the wife-husband relation is a one-to-one (partial) function between groups of people. To complete the picture, a function is *onto* if it covers the entire target as its range. Thus, if $f : X ⇸ Y$ and $\text{ran } f = Y$, the function $f$ is onto $Y$, written $f : X ⤀ Y$, and analogously for total onto functions. For any finite group of people $G$, the child-parent function from $G$ to $G$ can *not* be onto $G$ (there must be a person in $G$ who is not the parent of anyone else in $G$, or else there would have to be a cycle with someone an ancestor of himself).

| Declaration | Meaning |
|---|---|
| $A ⇸ B$ | partial function |
| $A \rightarrow B$ | total function |
| $A ⤔ B$ | partial one-to-one function |
| $A ↣ B$ | total one-to-one function |
| $A ⤀ B$ | partial onto function |
| $A ↠ B$ | total onto function |

Table 4.1: Function symbols

Since a function is simply a special case of a relation, which in turn is a special kind of set, all set or relation operations can be applied to functions. In general, however, the result will not be a function. For example $$\{(0, 5), (2, 6), (4, 5)\} \cup \{(1, 5), (2, 4), (3, 4)\}$$ is not a function, even though both arguments are, because the domain element 2 is mapped to both 6 and 4.

Some special notation is introduced for functions, to express common operations more concisely. First, the common application of a function $f$ to an argument $a$, written $f\,a$, is a concise way to express the range element of $f$ that corresponds to the domain element $a$. The more common form that puts the argument in parentheses is only used in Z when necessary to avoid ambiguity. A common operation on functions that arises in many specifications is known as "functional override", written as $f \oplus g$.
Again viewing the functions as sets of pairs, this is formally defined as $$f \oplus g = ((\text{dom } g) ⩤ f) \cup g.$$ This is simply $g$ combined with the part of $f$ not in conflict with $g$. That is, the pairs of $f$ that would cause the union to not be a function are not included in the result. Note that this operation is not symmetric and that the second function argument 'overrides' the conflicting part of the first function.

Unsurprisingly, sequences of elements in Z are also ultimately viewed as particular kinds of sets. More specifically, a sequence is a partial function from the natural numbers to the elements of the sequence, where the domain is exactly $1 \ldots n$ if there are $n$ elements in the sequence (and the domain serves as the index of the sequence). Thus, a sequence of letters $AXBYAB$ would be represented by the set $$\{ 1 \mapsto A, 2 \mapsto X, 3 \mapsto B, 4 \mapsto Y, 5 \mapsto A, 6 \mapsto B \}$$ The set of all possible sequences with elements from the set $S$ is defined by $$\text{seq}(S) \doteq \{ f : \mathbb{N} ⇸ S \mid \text{dom } f = 1 \ldots \#f \}$$

Clearly, any operations on sets, relations, or functions can also be performed on sequences, but the result will not necessarily be a sequence. In addition, there are a few special operations that are usual for sequences, of which we shall need bracketing and concatenation. Brackets around an element or list of elements define a sequence with those elements in their order of appearance, as in $$\langle x, y \rangle \doteq \{ 1 \mapsto x, 2 \mapsto y \}$$ The usual concatenation operator between two sequences is denoted by $$s \frown t \doteq s \cup \{ i : \mathbb{N} \mid i \in 1 \ldots \#t \bullet (i + \#s) \mapsto (t\,i) \}$$ where the elements of the sequence $t$ come after the elements of the sequence $s$.
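Both functional override and sequences-as-functions have natural models with Python dicts. A sketch (the names `override` and `concat` are ours):

```python
# f (+) g on functions-as-dicts: g's pairs win on conflicting keys,
# exactly as in ((dom g) anti-restricted from f) union g.
def override(f, g):
    h = dict(f)
    h.update(g)
    return h

f = {0: 5, 2: 6, 4: 5}
g = {1: 5, 2: 4, 3: 4}
# override(f, g) keeps g's value 4 at the conflicting key 2

# A sequence as a function from 1..n; #s is just len(s) here.
s = {1: 'A', 2: 'X', 3: 'B', 4: 'Y', 5: 'A', 6: 'B'}  # AXBYAB

# s ^ t: union s with t's pairs, each index shifted up by #s.
def concat(s, t):
    out = dict(s)
    out.update({i + len(s): v for i, v in t.items()})
    return out
```

`dict.update` is itself an in-place override, which is why `override` copies `f` first: the Z operator builds a new function rather than mutating its argument.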
Note that the definition above is a union of the pairs that make up the sequence $s$, along with new pairs derived from those in the sequence $t$, but with each index value increased by the number of elements in $s$ ($\#s$).

Now enough notation has been introduced to allow a first reasonable specification. In Figure 4.1, a schema describing a Library is presented. It assumes the existence of three sets called Copy, Book, and Reader describing, respectively, a physical copy of a book (e.g., a running unique identifying number), abstract information about a book (title, author, ...), and information about a reader (name, address, library card number, ...). The predicate part of this schema can be viewed as an invariant of the library. The variables define a state, and the predicate determines what must be true of this state.

What is clearly missing is a description of the operations in a library. For this purpose, Z provides a special facility known as variable augmentation, to be used only in schemas and proofs relating to operations. In this context, a declaration with an 'unadorned' variable represents the state before the operation, while a version with a prime (') relates to the variable after the operation (just as in the Larch interface languages). In addition, any variable with a question mark (?) after it is intended to be an external input for the operation, and a variable with an exclamation point (!) is an output of the operation. Note the distinction between the (internal) primed state after the operation, and (external) output. This distinction is often useful, but occasionally burdensome, when it is not clear whether the operation will be embedded within another more complex one, so that the apparently external result becomes internal.

Now a schema describing an operation, such as borrowing a book, can be defined.
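The intended behavior of such an operation can first be sketched as an ordinary state-transforming function. This is only a sketch: the state variables `shelved`, `readers`, `issued`, and `maxloans` are assumed from the Library schema of Figure 4.1, and the preconditions mirror the conditions the Z schema will state declaratively:

```python
# Borrowing, sketched operationally. issued is a dict copy -> reader,
# i.e., the partial function Copy -/-> Reader from the Library state.
def borrow(shelved, readers, issued, maxloans, c, r):
    # Preconditions: the copy is on the shelves, the reader is registered,
    # and the reader holds fewer than maxloans copies.
    assert c in shelved and r in readers
    assert sum(1 for who in issued.values() if who == r) < maxloans
    new_issued = dict(issued)
    new_issued[c] = r        # issued' = issued (+) {c |-> r}
    return new_issued
```

Where Z only *states* `issued' = issued ⊕ {c? ↦ r?}`, the sketch must construct the new state; the declarative style is what makes the schema usable for reasoning rather than execution.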
All variables describing the library need to be defined both in the state before the operation, and in a primed version, for the state afterwards. Clearly, all invariants of the library are required to hold both among the regular variables, and among the primed versions (i.e., in the states before and after the operation, respectively). Input for such an operation would be a particular copy of a book, say $c?$, and a variable denoting a reader, say $r?$. Conditions that should hold in order for the operation to occur might include that the copy of the book is on the shelves ($c? \in \text{shelved}$), that the reader is registered at the library ($r? \in \text{readers}$), and that the reader has taken out fewer than the maximum allowed number of books ($\#(\text{issued} \triangleright \{r?\}) < \text{maxloans}$). The desired effect of the operation can be specified by the requirement $\text{issued}' = \text{issued} \oplus \{c? \mapsto r?\}$, so that the copy will be associated with the borrowing reader in the new version of the function $\text{issued}$.

Rather than write out all the declarations, and both the unprimed and the primed versions of the invariants, it is possible to apply a prime to a schema. This is a shorthand for a version with a prime added to every declared variable, both in the declarations and in the predicate part. Moreover, a schema may be included within another schema, which means to include all of its declarations and its predicate. The borrow operation then becomes

\[
\begin{array}{l}
\text{Borrow} \\
\hline
\text{Library} \\
\text{Library}' \\
c? : \text{Copy} \\
r? : \text{Reader} \\
\hline
c? \in \text{shelved} \\
r? \in \text{readers} \\
\#(\text{issued} \triangleright \{r?\}) < \text{maxloans} \\
\text{issued}' = \text{issued} \oplus \{c? \mapsto r?\}
\end{array}
\]

As additional notation intended to save effort, we have $\Delta A$ as a shorthand for $A$ and $A'$ (so that we could have written $\Delta \text{Library}$ instead of the first two lines of the declarations above). The notation $\Xi A$ is equivalent to $$[\Delta A \mid \text{var}A = \text{var}A'].$$ The symbol $\Xi$ represents the unprimed and primed versions of its argument, plus the assertion that the state is not changed by the operation containing such a notation. This is used for identifying the operations that examine the state, analogously to Larch. A schema that could be used in continuing the Library specification is

\[
\begin{array}{l}
\text{Booksout} \\
\hline
\Xi \text{Library} \\
r? : \text{Reader} \\
books! : \mathbb{F}\,\text{Copy} \\
\hline
r? \in \text{readers} \\
books! = \text{dom}(\text{issued} \triangleright \{r?\})
\end{array}
\]

### 4.4 The schema calculus

The combinations of schemas just seen in specifying the Library operations are simple examples of what is known as the schema calculus. These are rules for deriving one schema from another, and particularly for combining schemas. *Inclusion* is the most common method of combining schemas. When the name of one schema is written in the declaration part of another, the declarations are to be merged, and a conjunction is to be taken between the predicate of the included schema and that given explicitly in the including one. In the merge, variables that appear in both (and have the same type declaration) appear once, as do declarations that appear in either. If there is a conflict in the declarations of the same variable name, the inclusion is illegal.
Thus, given a schema

\[
\begin{array}{l}
S \\
\hline
x : \mathbb{N} \\
y : \mathbb{N} \\
\hline
x \leq y
\end{array}
\]

we may write

\[
\begin{array}{l}
T \\
\hline
S \\
z : \mathbb{N} \\
\hline
z \leq x
\end{array}
\]

This is entirely equivalent to the expanded version, namely:

\[
\begin{array}{l}
T \\
\hline
x : \mathbb{N} \\
y : \mathbb{N} \\
z : \mathbb{N} \\
\hline
x \leq y \land z \leq x
\end{array}
\]

Inclusion is used to build up schemas in stages, to encourage modularity, as well as to conveniently separate the static and the dynamic parts of a system, as seen in the static *Library* schema, and its use in describing the *Borrow* operation.

Logical operators among schemas are also possible:

- $S \land T$ is a symmetric version of inclusion, and has the same meaning.
- $S \lor T$ merges the declarations as above, but takes a disjunction between the predicates.
- $\neg S$ is like $S$, but with a predicate that is the negation of $S$'s.

A few notations to derive new schemas from old are a little less standard. Among these we have:

- $S[\textit{new}/\textit{old}]$ defines substitution. The result is a schema like $S$, but with the name *new* in place of free occurrences of the name *old*.
- $S \setminus \{v_1, ..., v_n\}$ is called hiding. A schema is defined that is like $S$ except that the variables $v_1$ through $v_n$ are removed from the declaration, and the phrase $\exists v_1 ... v_n$ precedes the predicate of $S$. After a hiding transformation, the hidden names no longer refer to variables from the state of the system being specified. The assertion is now merely that some values exist that will make the predicate true, with no connection to any role these names might have in defining the system.
Thus for the schema $T$ defined above, $T \setminus \{ x \}$ is the schema

\[
\begin{array}{l}
z : \mathbb{N} \\
y : \mathbb{N} \\
\hline
\exists x : \mathbb{N} \bullet x \leq y \land z \leq x
\end{array}
\]

The assertion now means that there is some natural number between $z$ and $y$, and is equivalent to $z \leq y$, since $y$ and $z$ are already natural numbers.

For schemas that describe operations, two more derived schemas can be defined, using the notation already introduced. For a schema $S$ describing an operation, $\text{pre } S$ is another schema, whose predicate is the precondition for the operation specified by $S$. It is formally defined as $S$ hiding all declared variables with a $!$ or a $'$. Recall that these are the variables intended to represent output, and the state after the operation, respectively. By removing them from the declaration, and asserting that there exist values with those names before the predicate of $S$, only the conditions that relate to the state before the operation are left in $\text{pre } S$. For the *Borrow* operation defined previously, $\text{pre } \textit{Borrow}$ is

\[
\begin{array}{l}
\text{Library} \\
c? : \text{Copy} \\
r? : \text{Reader} \\
\hline
\exists \text{issued}' \bullet c? \in \text{shelved} \land r? \in \text{readers} \land {} \\
\quad \#(\text{issued} \triangleright \{r?\}) < \text{maxloans} \land {} \\
\quad \text{issued}' = \text{issued} \oplus \{ c? \mapsto r? \}
\end{array}
\]

Note that the primed version of *Library* has been removed, and that again the name $\text{issued}'$ is quantified in the predicate, and has nothing to do with the part of the system state $\text{issued}$. In fact, the existential quantifier and the second part of the predicate merely assert that $\text{issued} \oplus \{ c? \mapsto r? \}$ is a function.
This is trivially true from the definition of $\oplus$ and the fact that the arguments are functions, and thus can be removed, leaving the much simpler

\[
\begin{array}{l}
\text{Library} \\
c? : \text{Copy} \\
r? : \text{Reader} \\
\hline
c? \in \text{shelved} \\
r? \in \text{readers} \\
\#(\text{issued} \triangleright \{r?\}) < \text{maxloans}
\end{array}
\]

This is a special case of applying the reduction $$\exists x : S \bullet (x = T \land P) \Leftrightarrow T \in S \land P[T/x]$$ That is, in the assertion $P$, we may replace the existentially quantified variable $x$ by the expression $T$ to which it is equal, along with the type information, if necessary, and eliminate the quantification.

The final combination of schemas we shall need is known as *schema composition*. Recall that for relations, their composition $R \circ S$ is simply the transitive pairs $(x, z)$ for which there is a $y$ satisfying $(x, y) \in R$ and $(y, z) \in S$. The same idea applies to composition of schemas that represent operations, except that the primed versions of variables from the first component will be associated with the unprimed versions in the second component. For example, given schemas $A$ and $B$, each with declarations for $x$ and $x'$, a new name can be used to identify the $x'$ of $A$ with the unprimed $x$ of $B$. The composition then is the conjunction of the schemas, hiding the variable just introduced. That is $$A \circ B \equiv (A[new/x'] \land B[new/x]) \setminus \{new\}$$ The composition then is left with the $x$ of $A$, and the $x'$ of $B$, where the former result of $A$ is connected to the former initial state of $B$ by the hidden variable $new$. As above, often the existential quantification can be simplified or eliminated. New names and hiding as above should be used for all primed variables declared in the first component of the composition which also are declared with unprimed versions in the second component. All other variables in both components are unchanged.
Consider a simple schema defined by

\[
\begin{array}{l}
F \\
\hline
s, s', i? : \mathbb{N} \\
\hline
s' = i? + s
\end{array}
\]

and another defined by

\[
\begin{array}{l}
T \\
\hline
s, s', o! : \mathbb{N} \\
\hline
s' = 2 \times s \\
o! = s'
\end{array}
\]

Then the composition $F \circ T$ is

\[
\begin{array}{l}
s, s', i?, o! : \mathbb{N} \\
\hline
\exists \textit{new} : \mathbb{N} \bullet \textit{new} = i? + s \land s' = 2 \times \textit{new} \land o! = s'
\end{array}
\]

Here the name *new* is substituted for the $s'$ of $F$, and the $s$ of $T$. As previously, the predicate can be simplified to

\[
s' = 2 \times (i? + s) \land o! = s'
\]

If we define a simple schema *Register* by

\[
\begin{array}{l}
\text{Register} \\
\hline
\text{Library} \\
\text{Library}' \\
r? : \text{Reader} \\
\hline
\text{readers}' = \text{readers} \cup \{r?\}
\end{array}
\]

then the composition $\text{Register} \circ \text{Borrow}$, after simplification, is

\[
\begin{array}{l}
\text{Library} \\
\text{Library}' \\
c? : \text{Copy} \\
r? : \text{Reader} \\
\hline
c? \in \text{shelved} \\
\#(\text{issued} \triangleright \{r?\}) < \text{maxloans} \\
\text{readers}' = \text{readers} \cup \{r?\} \\
\text{issued}' = \text{issued} \oplus \{c? \mapsto r?\}
\end{array}
\]

This schema has the effect of registering a reader, and then checking out a book by that reader, without having to check that the new reader is already in $\text{readers}$.

### 4.5 Examples

#### 4.5.1 A symbol table

A symbol table can be considered as a way of associating strings of letters (possibly representing variables or labels in a high-level programming language) with values (e.g., internal memory locations or labels). Thus we have $$ST = STR ⇸ VAL$$ The standard symbol table functions of adding an entry (called here *Enter*) and looking up a value for a given string (*Lookup*) can then be defined by

\[
\begin{array}{l}
\text{Enter} \\
\hline
st, st' : ST \\
s? : STR \\
v? : VAL \\
\hline
st' = st \oplus \{ s? \mapsto v? \}
\end{array}
\]

\[
\begin{array}{l}
\text{Lookup} \\
\hline
st : ST \\
s? : STR \\
v! : VAL \\
\hline
s? \in \text{dom } st \\
v! = st\,s?
\end{array}
\]

Note that the predicate of *Lookup* includes both conditions on the input and the relation between the input and the output. Recall that the application of the function $st$ to the argument $s?$ is indicated simply by juxtaposition, without using parentheses.
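The symbol table maps directly onto a Python dict, with *Enter* as functional override and *Lookup* as guarded application. A sketch with our own helper names:

```python
# The symbol table ST = STR -/-> VAL modeled as a dict.
def enter(st, s, v):
    st2 = dict(st)
    st2[s] = v            # st' = st (+) {s? |-> v?}
    return st2

def lookup(st, s):
    assert s in st        # precondition: s? in dom st
    return st[s]          # v! = st s?

st = enter(enter({}, 'x', 100), 'y', 104)
st = enter(st, 'x', 108)  # override: the earlier entry for 'x' is replaced
```

Using override rather than plain union means re-entering a string silently replaces its old value, which is exactly the usual symbol-table behavior.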
An operation to initialize a symbol table could be defined as

\[
\text{Init} \equiv [\,st' : ST \mid st' = \{\}\,]
\]

As given, the operation *Lookup* is not defined when the argument $s?$ is not in the domain of the symbol table. The schema calculus is intended to encourage a modular treatment of issues such as error checking and treatment of exceptional inputs. In this example it would be reasonable to have a schema *Badquest* of an operation that treats only input strings never entered in the symbol table:

\[
\begin{array}{l}
\text{Badquest} \\
\hline
st : ST \\
s? : STR \\
\hline
s? \notin \text{dom } st
\end{array}
\]

Then a more robust lookup operation could be

\[
\textit{Rlookup} \equiv \textit{Lookup} \lor \textit{Badquest}
\]

Similarly, the calculus can be used to add functionality, or simply for debugging purposes, by having a *Log* schema. Then we might define

\[
\text{Auglookup} \equiv (\text{Lookup} \land \text{Log}) \lor \text{Badquest}
\]

#### 4.5.2 A Stack

A collection of schemas for the stack operations could use a sequence to represent the stack and then define

\[
\begin{array}{l}
\text{push} \\
\hline
x? : E ;\; s, s' : \text{seq}(E) \\
\hline
s' = s \cup \{(\#s + 1) \mapsto x?\}
\end{array}
\]

That is, the *push* operation is represented by adding the new element to the end of the sequence representing the contents of the stack. Recall from the definitions of concatenation and bracketing that this is equivalent to $s \frown \langle x? \rangle$.

\[
\begin{array}{l}
\text{pop} \\
\hline
s, s' : \text{seq}(E) \\
\hline
s' = \{\#s\} ⩤ s
\end{array}
\]

The *pop* operation removes the last element from the sequence $s$ by using an operation on relations: the set $s$ without the pair whose index value is $\#s$.

\[
\begin{array}{l}
\text{top} \\
\hline
x! : E ;\; s : \text{seq}(E) \\
\hline
x! = s\,(\#s)
\end{array}
\]

Note that here we choose to add, remove, and examine the stack elements from the end of the sequence. An equivalent specification could be written adding and removing elements from the beginning of the sequence.
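With sequences modeled as dicts indexed from 1, the three stack schemas become one-liners. A sketch (function names are ours):

```python
# Stack operations on sequences-as-dicts with domain 1..n, so len(s) is #s.
def push(s, x):
    s2 = dict(s)
    s2[len(s) + 1] = x        # s' = s U {(#s + 1) |-> x?}
    return s2

def pop(s):
    # s' = s without the pair whose index is #s (domain anti-restriction)
    return {i: v for i, v in s.items() if i != len(s)}

def top(s):
    return s[len(s)]          # x! = s(#s)

s = push(push({}, 'a'), 'b')  # the stack <a, b>, top element 'b'
```

Note that `pop({})` and `top({})` fail on the empty dict, mirroring the fact that the schemas leave the empty-stack case unspecified.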
Even though the specification would be slightly more complicated to write, the 'efficiency' of a specification is irrelevant. Still, this need to commit to one end or the other seems less abstract than the Larch version: one or the other possibility has to be used, and this may 'prejudice' which kind of implementation is chosen. Changing this to a specification of a queue is a matter of adding to one end of the sequence and removing from the other. It is also simple to specify in this way more complex structures such as double-ended queues.

#### 4.5.3 Elements of an assembler specification

Another example of the view of sequences as special kinds of functions, which are also of course relations, can be seen in an assembly language program. We are given that the program is a sequence of commands, and that commands have a partial projection function *label* that returns the symbolic label of any command that has a label, and otherwise is undefined. If the input program is denoted by *inprog*, we can define a specialized symbol table of labels differently than the general table seen previously, where elements can be added one by one. Here we can define the contents of the table as a function of the entire program, viewed as a sequence: $$\textit{symtab} \triangleq (\textit{inprog} \circ \textit{label})^\sim$$ Since the program is a sequence of commands, it might have the form $$\{1 \mapsto (L_1 : \textit{load } x), 2 \mapsto (\textit{add } y), 3 \mapsto (L_2 : \textit{store } z), 4 \mapsto (\textit{goto } L_5)\ldots\}$$ The composition of *inprog* and *label* would give the function (no longer a sequence): $$\{1 \mapsto L_1, 3 \mapsto L_2, \ldots\}$$ and the inverse would give a relation from labels to sequence (line) numbers, as required for the labels in a symbol table. If the labels are unique, this also would be a function, so that $\textit{symtab}\ L_2$ returns 3.
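The whole-program construction of *symtab* can be replayed on a toy program. A sketch: the command encoding as strings and the helper names are illustrative, not from the text:

```python
def compose(R, S):
    # R o S on relations-as-sets-of-pairs
    return {(x, z) for (x, y1) in R for (y2, z) in S if y1 == y2}

def inverse(R):
    return {(y, x) for (x, y) in R}

# inprog: line numbers to commands (a sequence, viewed as a relation);
# label: the partial projection from commands to their labels.
inprog = {(1, 'L1: load x'), (2, 'add y'), (3, 'L2: store z'), (4, 'goto L5')}
label = {('L1: load x', 'L1'), ('L2: store z', 'L2')}

# symtab = (inprog o label)~ : labels mapped back to line numbers
symtab = inverse(compose(inprog, label))
```

Unlabeled lines simply drop out of the composition because `label` is undefined on them, which is exactly why no case analysis is needed in the Z definition.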
If there is also a projection function *refer* that returns the label that appears in the argument field of an assembly language command (e.g., the $L_5$ in the fourth command above), then one part of the predicate in the schema *ASSEMBLER* could be: \[ \text{ran}(\textit{inprog} \circ \textit{refer}) \subseteq \text{dom } \textit{symtab} \] This is part of the precondition of the assembler, and requires that all references to labels that appear in the input assembler program are actually labels of some statement.

Assuming that the output machine language program is produced in the variable $\textit{outmach}!$, that the translation is syntactically line for line (so that the same line numbers apply), and that there is an *operand* projection function for the operands of machine language instructions, another part of the schema could be \[ (\textit{inprog} \circ \textit{refer} \circ \textit{symtab}) \subseteq (\textit{outmach}! \circ \textit{operand}) \] The left side of the inclusion gives the pairs of the line number of an instruction and a translation of a reference to a line number using the symbol table, while the right side also gives pairs of line numbers, from the machine language lines, and the corresponding operand field. This expresses the requirement on the argument field for all instructions that have abstract references. Along with added predicates that cover the other kinds of instructions (e.g., those with constants in the argument field), this will be the specification of the assembler.

### 4.6 Refinement

Most of the specification efforts undertaken in Z have involved analyzing informal requirements and using them as a basis to write a collection of schemas. However, there is also an associated theory of refinement for the notation.
The idea is to develop a system by starting with high-level Z schemas describing abstract operations, and then use the theory of refinement to replace those by schemas closer to an implementation. The theory provides precise criteria for checking that the refinement satisfies the key properties of the more abstract version. The first question to be considered is what exactly is expected of a refinement. An interesting distinction is made in this theory between data refinement and operation refinement. The former corresponds to the replacement of data structures from an upper, abstract level by those from a lower, concrete one, and will be treated using mapping functions analogously to the approach seen in Larch.

To understand the idea of operation refinement, we begin by defining relation refinement. For two relations $R_1$ and $R_2$, defined over the same domain and range, $X \leftrightarrow Y$, $R_1$ is refined by $R_2$ (written $R_1 \sqsubseteq R_2$) if and only if:

**(Applicability)** $\text{dom } R_1 \subseteq \text{dom } R_2$ (whenever $R_1$ can be applied, so can $R_2$)

**(Correctness)** $((\text{dom } R_1) \triangleleft R_2) \subseteq R_1$ (when $R_1$ can be applied, but $R_2$ is instead, the result is in the relation $R_1$)

The definition is reasonable when the relations are to be used to connect domain and range elements. The definition above means that $R_2$ can always be substituted for $R_1$ (since its domain is defined whenever $R_1$'s is), and will produce a subset of the results possible under $R_1$. Note that $R_2$ can have fewer possible range elements that correspond to a given domain element also in the domain of $R_1$, but it must have at least one. This becomes clearer when the same ideas are applied to operations on the same state.
If $Op_1$ and $Op_2$ are operations defined by schemas, then $Op_1$ is refined by $Op_2$ (again, $Op_1 \sqsubseteq Op_2$) iff

**(Applicability)** $\text{pre } Op_1 \vdash \text{pre } Op_2$ (whenever $Op_1$ can be applied, so can $Op_2$)

**(Correctness)** $\text{pre } Op_1 \land Op_2 \vdash Op_1$ (when $Op_1$ can be applied, but $Op_2$ is instead, the result could have been obtained by applying $Op_1$)

The requirements are the same as for relations, but using the notation of logic and schemas instead of that for sets. There can be less nondeterminism (fewer possible results) in the refinement because it is more specific or concrete. However, a refinement cannot 'refuse' to implement a legal input of the more abstract level, and must produce some result.

Consider a schema *Takesome* defined by

\[
\begin{array}{c}
\text{Takesome} \\
\hline
x, x' : \mathbb{N} \\
\hline
0 < x' < x
\end{array}
\]

This schema is satisfied by any value of $x'$ greater than zero and strictly less than $x$. Its precondition would hide $x'$ but assert that there must be some integer value between zero and $x$, and thus requires that $x$ is at least two. A possible operation refinement could be the schema *Takeone* defined by

\[
\begin{array}{c}
\text{Takeone} \\
\hline
x, x' : \mathbb{N} \\
\hline
x' = 1
\end{array}
\]

Note that *Takeone* is defined for values of $x$ for which *Takesome* is not (namely zero and one). However, when both are defined, the value of $x'$ for *Takeone* (namely, one) is a possible value if *Takesome* had been applied instead.

This may seem quite theoretical, but consider specifying a scheduler for jobs in an operating system, where many orderings are possible, but some basic responsiveness and fairness properties are required. In practice, a specific round-robin scheduler may be applied, which satisfies the required properties, but has fewer possible orderings of the jobs than indicated by the abstract requirements.
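For finite relations, both refinement conditions can be checked mechanically. A sketch that treats *Takesome* and *Takeone* as relations over a small range of naturals (the range bound and helper names are our own):

```python
def dom(R):
    return {x for (x, _) in R}

def refines(R1, R2):
    # R1 [= R2: applicability plus correctness
    applicability = dom(R1) <= dom(R2)
    restricted = {(x, y) for (x, y) in R2 if x in dom(R1)}  # (dom R1) <| R2
    return applicability and restricted <= R1

N = range(0, 6)
takesome = {(x, xp) for x in N for xp in N if 0 < xp < x}  # 0 < x' < x
takeone = {(x, 1) for x in N}                              # x' = 1

# Takeone refines Takesome: it is defined everywhere Takesome is (x >= 2),
# and on that common domain always yields 1, a legal Takesome result.
```

Running `refines(takeone, takesome)` instead fails on applicability, since *Takesome* is undefined at zero and one, matching the asymmetry discussed above.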
It is not difficult to see that the notion of refinement seen here is appropriate for such multiprocess systems. Still, it is insufficient to consider only operation refinements. There are also data refinements, where an abstract state representation is replaced by a more concrete version. For example, on the abstract level we might consider a set of elements, and on a more concrete level implement the set using a sequence or array. An abstract state representation could have the schema: \[ \begin{array}{l} \text{Absgroup} \\ \text{members} : \mathbb{P}\,\text{NAMES} \\ \text{exec} : \mathbb{P}\,\text{NAMES} \\ \text{exec} \subseteq \text{members} \\ \#\text{exec} \leq \text{limit} \end{array} \] A more concrete state representation might have a schema that uses a domain \( \text{STATUS} \) defined by \[ \text{STATUS} ::= \text{reg} \mid \text{ex} \] The schema then might be: \[ \begin{array}{l} \text{Concgroup} \\ \text{memseq} : \text{seq}\,\text{NAMES} \\ \text{stat} : \text{NAMES} \rightarrow \text{STATUS} \\ \#\{ y : \text{NAMES} \mid y \in \text{ran}\,\text{memseq} \wedge \text{stat}\, y = \text{ex} \} \leq \text{limit} \end{array} \] To connect the two states, a mapping function, traditionally from the concrete representation to the abstract one as in Larch, is needed. This is given as yet another schema: \[ \begin{array}{l} \text{Connect} \\ \text{Absgroup} \\ \text{Concgroup} \\ \text{members} = \text{ran}\,\text{memseq} \\ \text{exec} = \{ y : \text{NAMES} \mid y \in \text{ran}\,\text{memseq} \wedge \text{stat}\, y = \text{ex} \} \end{array} \] At this point the consistency of \( \text{Connect} \) (that its predicates do not allow proving \( \text{false} \)) shows that this mapping is reasonable. When operations are added, we use a combination of data and operation refinement. 
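The Connect mapping is just a function from concrete states to abstract ones, which makes it easy to prototype. A minimal sketch, assuming illustrative member names and a made-up value for the constant `limit`; `'reg'`/`'ex'` stand for the STATUS values:

```python
# Deriving the abstract state (members, exec) from a concrete state
# (memseq, stat), following the Connect schema.

LIMIT = 2  # illustrative value for the global constant `limit`

def connect(memseq, stat):
    """Map a concrete Concgroup state to the abstract Absgroup state."""
    members = set(memseq)                                  # members = ran memseq
    execing = {y for y in members if stat.get(y) == 'ex'}  # executing members
    return members, execing

# Hypothetical concrete state.
memseq = ['ann', 'bob', 'cal']
stat = {'ann': 'ex', 'bob': 'reg', 'cal': 'ex'}

members, execing = connect(memseq, stat)
assert members == {'ann', 'bob', 'cal'}
assert execing == {'ann', 'cal'}
assert len(execing) <= LIMIT   # concrete invariant carries over to exec
```

Because `connect` is total on states satisfying the concrete invariant, its consistency check corresponds to the observation in the text that Connect's predicates cannot prove false.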
If we assume an abstract state \( \text{AS} \) and a concrete state \( \text{CS} \), each with an initialization operation \( \text{initAS} \) and \( \text{initCS} \), respectively, and other pairs of operations \( \text{Aop} \) and \( \text{Cop} \), plus a functional mapping between the states in a schema \( \text{Connect} \), the requirements for a correct refinement are to show: for the initial states \[ \text{initCS} \land \text{Connect} \vdash \text{initAS} \] for each pair \( Aop \) and \( Cop \) \[ \text{pre Aop} \land \text{Connect} \vdash \text{pre Cop} \] \[ \text{pre Aop} \land \text{Connect} \land \text{Cop} \land \text{Connect}' \vdash \text{Aop} \] Returning to the example, on the abstract level, a possible initialization would be \[ \begin{array}{l} \text{initabs} \\ \text{Absgroup'} \\ \text{members'} = \{\} \\ \text{exec'} = \{\} \end{array} \] and a typical operation is: \[ \begin{array}{l} \text{Abselect} \\ \Delta \text{Absgroup} \\ x? : \text{NAMES} \\ x? \in \text{members} \\ \#(\text{exec} \cup \{x?\}) \leq \text{limit} \\ \text{exec'} = \text{exec} \cup \{x?\} \\ \text{members'} = \text{members} \end{array} \] The corresponding schemas on the concrete level could be: \[ \begin{array}{l} \text{initcon} \\ \text{Concgroup'} \\ \text{memseq'} = \langle\rangle \\ \text{stat'} = \{\} \end{array} \] and an operation: \[ \begin{align*} \text{Concelect} \\ \Delta \text{Concgroup} \\ x? : \text{NAMES} \\ x? \in \text{ran memseq} \\ (\text{stat } x? = \text{ex}) \lor \#\{y : \text{NAMES} \mid y \in \text{ran memseq} \land \text{stat } y = \text{ex}\} < \text{limit} \\ \text{memseq}' = \text{memseq} \\ \text{stat}' = \text{stat} \oplus \{x? \mapsto \text{ex}\} \end{align*} \] It is left to the reader to prove that these constitute a correct refinement according to the criteria above. The refinement calculus stays within the Z formalism. Little attention has been paid to the question of whether a C implementation satisfies a Z specification. 
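While the proof is left to the reader, the two per-operation conditions can at least be spot-checked on a concrete state. The sketch below encodes Abselect and Concelect as Python functions over one hypothetical state (names and `LIMIT` are assumptions, not from the text) and checks applicability and correctness through the Connect mapping.

```python
# Spot-checking the refinement conditions for the Abselect/Concelect pair.

LIMIT = 2  # illustrative value for `limit`

def connect(memseq, stat):
    """Connect: concrete state -> abstract state (members, exec)."""
    members = set(memseq)
    execing = {y for y in members if stat.get(y) == 'ex'}
    return members, execing

def pre_abselect(members, execing, x):
    return x in members and len(execing | {x}) <= LIMIT

def abselect(members, execing, x):
    return members, execing | {x}

def pre_concelect(memseq, stat, x):
    n_ex = sum(1 for y in set(memseq) if stat.get(y) == 'ex')
    return x in memseq and (stat.get(x) == 'ex' or n_ex < LIMIT)

def concelect(memseq, stat, x):
    return memseq, {**stat, x: 'ex'}   # stat (+) {x? |-> ex}, memseq unchanged

memseq = ['ann', 'bob', 'cal']
stat = {'ann': 'ex', 'bob': 'reg', 'cal': 'reg'}
members, execing = connect(memseq, stat)

for x in ['ann', 'bob', 'cal', 'dot']:
    if pre_abselect(members, execing, x):
        # applicability: pre Abselect and Connect entail pre Concelect
        assert pre_concelect(memseq, stat, x)
        # correctness: the concrete result, mapped back through Connect,
        # is exactly the Abselect result
        ms2, st2 = concelect(memseq, stat, x)
        assert connect(ms2, st2) == abselect(members, execing, x)
```

A check on one state is of course no substitute for the general proof, but it catches a forgotten frame condition (such as leaving `memseq` unconstrained) immediately.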
However, it is possible to adapt the Larch division into a core specification notation along with interface languages that use the notation to specify, e.g., the input/output behaviour of C modules. This possibility will be examined more closely in a later chapter.
An Algorithm for Computing S-Invariants for High Level Petri Nets

Chuang Lin
Institute of Information Science and Its Applications, State Economic Information Center, 58 Sanlihe Road, Beijing, CHINA

Dan Cristian Marinescu
Computer Sciences Department, Purdue University, West Lafayette, IN 47907, USA

Purdue University Report CSD-TR-860, February 1989. Available through Purdue e-Pubs: https://docs.lib.purdue.edu/cstech/732

Abstract

Net invariants and reachability trees are used to investigate dynamic properties of Petri Nets. Both concepts have been generalized for different classes of High Level Petri Nets. In this paper we introduce the compound token and the token flow path concepts. Then we present an algorithm to compute the S-invariants of a High Level Petri Net using the compound token and the token flow path ideas, and show that all S-invariants of an HLPN can be generated by a system of integer linear equations without unfolding the net.

Keywords: High-Level Petri Nets, algorithm, compound tokens, token flow paths, S-invariants.

1. INTRODUCTION

Petri Nets (PNs) are one of the most interesting means for specification, modeling and analysis of concurrent systems. There are several classes of methods for analyzing the dynamic behavior of Petri Net models: methods based upon the investigation of the reachability set of the net, methods based upon homomorphic transformation of nets, and methods using linear net invariants. The first class of methods depends upon the ability to construct the reachability set of a PN. 
For complex systems the reachability set of the Petri Net model is too large to allow any significant analysis, because the computations involved become prohibitively expensive. Structural analysis based upon net invariants is an attractive alternative for complex Petri Net models. In this case the analysis is performed on local subnets while ignoring how the entire net behaves. The S-invariants are important for the validation of properties such as boundedness, mutual exclusion between place markings, liveness, etc. High Level Petri Nets (HLPNs) are a family of nets with individual tokens and token variables. Predicate/Transition Nets, Coloured Petri Nets and Relation Nets belong to this category of nets. The main advantage in using HLPNs is derived from their power of representation. Rather complex systems can be modeled as HLPNs in a succinct and readable way. While the descriptive power of High Level Petri Nets is unquestionable, the analysis of such nets raises difficult problems. General and efficient algorithms to construct the reachability trees and the invariants of High Level Petri Nets are unavailable. The subject of invariants for Petri Nets is fairly well understood, see for example [4]. Genrich and Lautenbach [1] have generalized the concept of place invariants, also called S-invariants, and transition invariants to different families of High Level nets. The problem of constructing S-invariants for High Level Petri Nets is more complex and no simple, general, and efficient algorithms are known. Some concepts used successfully to obtain invariants for High Level Petri Nets are reviewed in the following. The concepts of quasi-invariants and proper-invariants are introduced in [2] for Predicate Transition Nets. The quasi-invariants can be systematically computed by the Gaussian Elimination algorithm [3], and contain free variables which have to be projected to obtain proper-invariants, which give invariant assertions over markings. 
No general and simple method to find correct ways of projecting is known. The concept of a weight-function is used to compute S-invariants for Coloured Petri Nets [6] through a sequence of transformation rules. But, in general, it is not possible to find all invariants using a simple algorithm. The calculation of semi-flows for predicate transition systems [7], [8], [9] seems more practical, since it leads to S-invariants which can be easily interpreted and can be obtained from a finite number of integer vectors. But this method involves additional overhead, since all places in a net must have the same arity, and it is not easy to compute semi-flows of tokens with $n$ elements. In this paper we introduce the concept of a token flow path in order to (a) construct invariants with a straightforward interpretation, and (b) bind free variables in a simple manner. In the case of unary token flow paths, transitions connect only two places. The algorithm presented in this paper determines the $S$-invariants in a unary token flow path. The choice of the unary token flow path rather than the $n$-ary token flow path is well motivated. First of all, $n$-ary token flow paths with $n > 1$ seldom exist. Moreover, even when such paths exist, they can be easily constructed from the unary token flow paths. The remainder of the paper is organized as follows. Section 2 reviews the definition of High Level Petri Nets and introduces the compound token concept. In Section 3 the concept of token flow path is defined and an algorithm to find $S$-invariants in a unary token flow path is presented. An example is presented in Section 4. 2. DEFINITIONS AND NOTATIONS This section presents definitions, terminology and notations which will be needed throughout the paper. Denote by $N$ the set of all non-negative integers and by $Z$ the set of all integers. Definition 2.1: A multi-set $P'$ is a function defined on a non-empty set $P$, $P' \in [P \rightarrow N]$. 
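Definition 2.1 maps directly onto a standard library type. A minimal sketch: a multi-set over $P$ as a function $P \to N$ is what Python's `collections.Counter` models, with absent elements mapping to 0.

```python
# A multi-set as a function P -> N, modeled by collections.Counter.
from collections import Counter

m = Counter({'a': 2, 'b': 1})   # the multi-set 2a + b
assert m['a'] == 2
assert m['c'] == 0              # elements not present have multiplicity 0
assert m + Counter({'a': 1}) == Counter({'a': 3, 'b': 1})  # multi-set sum
```

This representation is reused below for markings and instantiated arc functions.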
Intuitively, a multi-set is a set which can contain multiple occurrences of the same element. It should be pointed out that there are differences among the definitions of High Level Petri Nets given by different authors. Here, we present a definition based upon the ones given in [6] and [2]. Definition 2.2: A High-level Petri Net is an 8-tuple $H = (S, T, A, V, D, X, W, M_0)$, where $S$: is a finite, non-empty set of places, $T$: is a finite, non-empty set of transitions, $A$: is a finite, non-empty set of atomic colours, $V$: is a finite set of variables, $D$: is a function defined on $V$ such that for each variable $v \in V$, $D(v)$ is a set of atomic colours called the domain of the variable $v$. $X$: is a colour function defined on $S \cup T$. $X(S)$ represents the set of place colours for a given maximal arity of places, $n$, $X(S) = \bigcup_{0 \leq k \leq n} A^k$ with $A^0 = \{\langle\rangle\}$. $X$ associates with each place a set of possible token colours, i.e., $\forall p \in S$, $X(p) \subseteq X(S)$. $X(T)$ represents the set of transition colours. $X$ attaches to each transition a set of possible occurrence colours, i.e., $\forall t \in T$, $X(t)$ is the set of substitutions of all variables appearing free in the immediate surrounding arc-expressions of $t$, and $X(T) = \bigcup_{t \in T} X(t)$. $W$: is an arc function defined on $(S \times T) \cup (T \times S)$. It indexes a family of multi-sets over $\bigcup_{0 \leq k \leq n} (A \cup V)^k$, i.e., $$\forall (x,y) \in (S \times T) \cup (T \times S), \quad W_{x,y}: \bigcup_{0 \leq k \leq n} (A \cup V)^k \to N.$$ $M_0$: is called the initial marking of $H$. It is known that each High Level Petri Net with a finite set of colours can be transformed into an equivalent Place/Transition Net obtained by unfolding each place $p$ into the set of places $\{(p,a) \mid a \in X(p)\}$, and unfolding each transition $t$ into the set of transitions $\{(t,\sigma) \mid \sigma \in X(t)\}$. 
Sometimes, instead of a transition $t$, reference to a step $(t,\sigma)$ is made. A step is regarded as a generic transition. **Definition 2.3** The incidence matrix of a High Level Petri Net $H$ is the matrix $C = (C_{p,t})$ for all $p \in S$ and $t \in T$, with $C_{p,t}$ defined as $C_{p,t} = W_{t,p} - W_{p,t}$. Thus $$C_{p,t}: \bigcup_{0 \leq k \leq n} (A \cup V)^k \to Z.$$ Let $W_{p,t}(\sigma)$ be the multi-set obtained from $W_{p,t}$ by substituting the free variables by atomic colours according to $\sigma$. For instance, if $\sigma = (x \leftarrow b, y \leftarrow a)$ and $W_{p,t} = <a,x> + <x,y>$ then $W_{p,t}(\sigma) = <a,b> + <b,a>$. **Definition 2.4** The transition $t$ is enabled at the marking $M$ iff $$\exists \sigma \in X(t) \text{ such that } W_{p,t}(\sigma) \leq M(p) \quad \forall p \in S .$$ We say that the step $(t,\sigma)$, rather than the transition $t$, is enabled when such a colour function $\sigma$ exists, i.e. $\sigma \in X(t)$; the step $(t,\sigma)$ is not enabled if $\sigma \notin X(t)$. $\\$ **Definition 2.5** When a step \((t, \sigma)\) is enabled at \(M_1\), it can fire and transform marking \(M_1\) into a directly reachable marking \(M_2\) defined as \[ M_2(p) = M_1(p) - W_{p,t}(\sigma) + W_{t,p}(\sigma) \quad \forall p \in S. \] Denote the \(t\)-th column of the incidence matrix \(C\) by \(C^t\) and note that \(C^t\) is associated with transition \(t\). To underline the fact that \(M_2\) is reached from \(M_1\) when transition \(t\) fires, the definition of the follower marking presented above can be rewritten as \[ M_2 = M_1 + C^t(\sigma). \] Let \(q\) be a step sequence, the equivalent of a firing sequence \[ q = \{(t_1, \sigma_1), (t_2, \sigma_2), \ldots , (t_k, \sigma_k)\}. \] **Definition 2.6** A marking \(M\) is reachable from the initial marking \(M_0\) iff a step sequence exists such that \[ M = M_0 + C \ast q. 
\] Here \(C \ast q\) is defined as \[ C \ast q = \sum_{(t, \sigma) \in q} C \ast (t, \sigma) = \sum_{(t, \sigma) \in q} C^t(\sigma). \] **Definition 2.7** The reachability set corresponding to the initial marking \(M_0\) is denoted by \([M_0]\) and is defined as the set of all markings which are reachable from the initial marking \(M_0\). **Definition 2.8:** \(\mathcal{L}\) is a linear function on the reachability set, \(\mathcal{L} : [S \rightarrow (X(S) \rightarrow N)] \rightarrow Z\); \(\mathcal{L}\) is an \(S\)-invariant if \(\mathcal{L}(M) = \mathcal{L}(M_0)\) for all \(M \in [M_0]\). In a High Level Petri Net the tokens flowing through the system are distinguishable from one another. Such a token has a list of attributes associated with it and will be called a compound token. A compound token can be regarded as a collection of unary tokens, tokens with one attribute only. In the arc labeling a compound token \(a\) is described by the \(n\)-tuple \(\langle a_1, \ldots , a_n \rangle\), with \(a_i\), \(1 \leq i \leq n\), an atomic colour (or variable). \(a_i\) is the projection of the compound token \(a\) along the \(i\)-th dimension of the colour space. To have a uniform representation we consider a unary token as an \(n\)-tuple with one element only, along one of the directions of the colour space. The * notation is used to indicate colour dimension(s) which are not relevant for the current flow of the compound token. For example, the \(n\)-tuples \(<a, b, c>\) and \(<b>\) are represented by the notation \((b,*)\) whenever only the colour \(b\) is of interest. The notation \( M(p)(a,k) \) indicates that place \(p\) contains tokens with the atomic colour of interest in the \(k\)-th position. 3. TOKEN FLOW PATHS AND S-INVARIANTS FOR HIGH LEVEL NETS To compute S-invariants for a High Level Petri Net we introduce the token flow path concept and describe an algorithm to construct the S-invariants. 
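Before turning to token flow paths, the enabling and firing rules of Definitions 2.4 and 2.5 can be sketched directly with multi-sets as `Counter`s. The places, token colours and arc functions below are illustrative, not taken from the paper's example.

```python
# Enabling (Definition 2.4) and firing (Definition 2.5) for one step (t, sigma),
# with markings and instantiated arc functions as Counters of coloured tokens.
from collections import Counter

# marking: place -> Counter of coloured tokens (tokens are tuples of colours)
M1 = {'p1': Counter({('a',): 2, ('b',): 1}), 'p2': Counter()}
# instantiated arc functions W_pt(sigma) and W_tp(sigma) for one step
W_pt = {'p1': Counter({('a',): 1})}   # consumes one <a> from p1
W_tp = {'p2': Counter({('a',): 1})}   # produces one <a> in p2

def enabled(M, w_pt):
    """Step enabled iff W_pt(sigma)(p) <= M(p) for every input place p."""
    return all(M[p][c] >= n for p, need in w_pt.items()
                             for c, n in need.items())

def fire(M, w_pt, w_tp):
    """M2(p) = M1(p) - W_pt(sigma)(p) + W_tp(sigma)(p)."""
    return {p: M[p] - w_pt.get(p, Counter()) + w_tp.get(p, Counter())
            for p in M}

assert enabled(M1, W_pt)
M2 = fire(M1, W_pt, W_tp)
assert M2['p1'] == Counter({('a',): 1, ('b',): 1})
assert M2['p2'] == Counter({('a',): 1})
```

Firing repeatedly and summing the per-step effects reproduces the marking equation $M = M_0 + C \ast q$ of Definition 2.6.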
The introduction of the token flow path concept simplifies the computation of S-invariants for High Level Petri Nets and gives them a clear interpretation. Finally we give an example to illustrate the algorithm. The Token Flow Path Let \( H = (S,T,A,V,D,X,W,M_0) \) be a High Level Petri Net. Then the associated Petri Net \( |H| \) is defined as: \[ |H| = (S,T,|W|,|M_0|) \] with: \[ |W|(x,y) = \begin{cases} 1 & |W(x,y)| \geq 1 \\ 0 & |W(x,y)| = 0 \end{cases} \quad (x,y) \in (S \times T) \cup (T \times S) \] \[ |M_0|(p) = |M_0(p)| \quad \forall p \in S \] Informally, \( |H| \) is obtained from \( H \) by omitting the colours and the numbers of the tokens. The incidence matrix of \( |H| \) is \[ |C| = (c_{p,t}) \] for all \( p \in S, t \in T \) with \[ c_{p,t} = |W|_{t,p} - |W|_{p,t}. \] \( |H| \) is an ordinary Petri Net and its S-invariants can be computed using, for example, the Martinez-Silva [3] algorithm. After computing the S-invariants of \( |H| \) we eliminate all non-minimal support S-invariants and focus our attention upon minimal support S-invariants. Let us call \( S_f \) the set of places in such an S-invariant. The places in \( S_f \) are connected through a set of transitions and arcs, and they form a subnet \( f \) of \( H \). Such a subnet \( f \) is called a token flow path, \( f = (S_f,T_f,A_f,D,X_f,W_f,M_0) \). An important property of the token flow path is its closure: every transition \( t \in T_f \) connects only places \( p_i, p_j \in S_f \). First we compute the \(S\)-invariants of \(|H|\) and note that the corresponding elements in the \(S\)-invariant of \(|H|\) are strictly positive. **Algorithm 1 (Martinez-Silva):** 1. \(A := |C|\); \(D := I_n\) (\(n\) is the number of places). 2. Repeat for \(i = 1\) until \(i = m\) (\(m\) is the number of transitions). 2.1 Append to the matrix \([D \mid A]\) all rows resulting from non-negative linear combinations of row pairs from \([D \mid A]\) that annul the \(i\)-th column of \(A\). 
2.2 Eliminate from $[D \mid A]$ the rows in which the $i$-th column of $A$ is non-null. It is guaranteed that this method produces all minimal support invariants of $|H|$, according to the following theorem. **Theorem 3.1 (Martinez-Silva)** Algorithm 1 generates all the minimal support invariants of $|H|$ and each invariant is obtained from a subnet $|f|$. The algorithm to compute the $S$-invariants of a High Level Petri Net $H$ is based upon the following theorem. **Theorem 3.2** Let $I$ be an $S$-invariant of $H$. Then $I$ can be expressed as a linear combination of $S$-invariants of some token flow paths of $H$. **Proof:** Unfolding $H$ and all token flow paths we obtain an equivalent net $H'$ and equivalent subnets. $I$ is equivalent to $I'$, an $S$-invariant of $H'$. $I'$ can be expressed as a linear combination of invariants from the equivalent subnets of some token flow paths, because all such subnets form a basis of the $S$-invariants of $H'$ according to Theorem 3.1. **Computation of S-Invariants** Call $E_f$ the subset of atomic colours present in the token flow path $f$, i.e., $a \in E_f \iff \exists p \in S_f \land c_{p,t} ((a, *)) \neq 0$. The complement of $E_f$ is denoted by $NE_f$. Call $NE_f^+$ the set containing all colours from $NE_f$ and the additional colours from $E_f$. **Theorem 3.3 (Genrich):** Let $C$ be a HLPN-matrix and let $p$ be a place whose arity is $m > 1$. For some $k$ with $1 \leq k \leq m$, let $C' = |C|_k^c$ designate the result of projecting all tokens of row $C_p$ in $C$ along the $k$-th position. Let $l': S \rightarrow [X(S) \rightarrow N]$ be a variable-free solution of $l' \ast C' = 0$. Then for every monomial $v : X(S) \rightarrow N$, the linear function $l'_v$ defined by $l'_v(M) = (l' \ast |M|_k^c) \cdot e^v$ is an $S$-invariant. **Lemma 3.4 (Genrich):** The total projection of $C$, $|C|$, is the incidence matrix of an ordinary Place/Transition Net that represents the mere quantitative aspect of the HLPN. 
The total projection \( |l| \) of every solution of \( l \ast C = 0 \) is an \(S\)-invariant of the P/T Net. The following propositions can be proved using Theorem 3.3 and Lemma 3.4. The \(S\)-invariants of the token flow path \(f\) will be computed by the following formulas. **Proposition 1:** Let \(u = ((X_p), (Y_{p,a,k}) \mid p \in S_f, a \in E_f, k \in K)\) be a solution of \[ \begin{align*} \forall a \in E_f\ \forall t \in T_f \quad \sum_{p \in S_f} (X_p |c_{p,t}| + \sum_{k \in K} Y_{p,a,k}\, c_{p,t}((a,*))) &= 0 \\ \forall x \in X\ \forall a \in D(x) \quad \sum_{p \in S_f} Y_{p,a,k}\, c_{p,t}((x,*)) &= 0 \end{align*} \] (P1) The corresponding invariant is \[ \forall M \in [M_0] \quad \sum_{p \in S_f} (X_p |M(p)| + \sum_{k \in K} Y_{p,a,k}\, M(p)(a,k)) = \text{const} \] There may be two kinds of solutions of the previous expression: the \(S\)-invariants of the High Level Petri Net and the \(S\)-invariants of the underlying P/T Net. **Proposition 2:** Let \(w = ((Z_{p,a,k}) \mid p \in S_f, a \in NE_f^+, k \in K)\) be a solution of \[ \forall a \in NE_f^+\ \forall t \in T_f\ \forall x \in X_f\ \forall a \in D(x) \quad \sum_{p \in S_f} Z_{p,a,k}\, c_{p,t}((x,*)) = 0 \] (P2) The corresponding invariant is \[ \forall M \in [M_0] \quad \sum_{p \in S_f} \sum_{k \in K} Z_{p,a,k}\, M(p)(a,k) = \text{const} \] All \(S\)-invariants of a HLPN can be constructed from the \(S\)-invariants of the token flow paths, which form a basis of the solution space. **An Example** Figure 1a presents a HLPN \(H\). We have \(A = \{a,b,c\}\), \(V = \{x,y,z\}\) and \(D(x) = \{a,b\}, D(y) = \{b,c\}, D(z) = \{a,c\}\). The incidence matrix of \(H\), denoted by \(C\), is presented in Figure 1b. We apply Algorithm 1 to \(|H|\) and obtain two token flow paths \(f\) and \(g\), shown in Figure 2a and Figure 2b. 
Applying the formula in Proposition 1 to the flow path \(f\) we obtain the following equation groups: \[ \begin{align*} X_{p_1} &+ Y_{p_1,c,1}^t + X_{p_2}^* + Y_{p_3,c,3^*} &= 0 \\ Y_{p_1,c,1}^t + X_{p_2}^* &= 0 \\ Y_{p_3,c,3}^t + Y_{p_4,c,3}^* &= 0 \end{align*} \] (1) \[ \begin{align*} X_{p_1}^{*(-4)} + X_{p_2}^{*4} + Y_{p_3,b,2^{*1}} &= 0 \\ Y_{p_3,b,2^{*1}} &= 0 \\ X_{p_5}^{*(-3)} + X_{p_4}^{*1} &= 0 \end{align*} \] (2) \[ \begin{align*} X_{p_1}^{*(-4)} + X_{p_2}^{*4} + Y_{p_3,a,1^{*1}} &= 0 \\ Y_{p_3,a,1^{*1}} &= 0 \\ X_{p_5}^{*(-3)} + X_{p_4}^{*1} &= 0 \end{align*} \] (3) Then, we obtain two S-invariants, including one special one: \[ |M(p_1)| + 3 |M(p_3)| + 9 |M(p_4)| + 3M(p_1)(c, 1) + M(p_3)(c, 3) + 3M(p_4)(c, 3) = \text{const} \] \[ |M(p_1)| + |M(p_3)| + 3 |M(p_4)| = \text{const} \] From Proposition 2, we deduce the following equation group for the flow path \(f\): \[ \begin{align*} Z_{p_1,a,1^{*(-1)}} + Z_{p_3,a,3^{*3}} &= 0 \\ Z_{p_3,a,3^{*(-3)}} + Z_{p_4,a,3^{*1}} &= 0 \end{align*} \] and the corresponding invariant: \[ 3M(p_1)(a, 1) + M(p_3)(a, 3) + 3M(p_4)(a, 3) = \text{const} \] In the same way, we can deduce the following equation groups and S-invariants for the flow path \(g\): \[ \begin{align*} X_{p_2}^{*(-3)} + X_{p_3}^{*4} + Y_{p_3,a,1^{*1}} &= 0 \\ Y_{p_3,a,1^{*3}} + Y_{p_2,a,1^{*(-2)}} &= 0 \\ X_{p_5}^{*(-3)} + X_{p_4}^{*1} &= 0 \\ Y_{p_3,a,1^{*(-3)}} + Y_{p_4,a,1^{*1}} &= 0 \end{align*} \] (1) and the corresponding invariant: \[ 2 |M(p_2)| + |M(p_3)| + 3 |M(p_4)| + 3M(p_2)(a, 1) + 2M(p_3)(a, 1) + 6M(p_4)(a, 1) = \text{const} \] \[ \begin{align*} X_{p_2}^{*(-3)} + X_{p_3}^{*4} + Y_{p_3,b,2^{*1}} &= 0 \\ Y_{p_3,b,2^{*3}} + Y_{p_2,b,1^{*(-1)}} &= 0 \\ Y_{p_2,b,1^{*(-2)}} + Y_{p_3,b,1^{*3}} &= 0 \\ X_{p_5}^{*(-3)} + X_{p_4}^{*1} &= 0 \\ Y_{p_3,b,1^{*(-3)}} + Y_{p_4,b,1^{*1}} &= 0 \\ Y_{p_5,b,2^{*(-3)}} + Y_{p_4,b,2^{*1}} &= 0 \end{align*} \] (2) and the corresponding invariant: \[ \begin{align*} 3 |M(p_2)| & + 2 |M(p_3)| + 6 |M(p_4)| + 3 |M(p_2)|_{(b, 1)} + \\ 2 
|M(p_3)|_{(b, 1)} + |M(p_3)|_{(b, 2)} + 6 |M(p_4)|_{(b, 1)} + 3 |M(p_4)|_{(b, 2)} = \text{const} \end{align*} \] \[ X_{p_2}^{*<3>} + X_{p_3}^{*<3>} + Y_{p_3,c,3^*1} = 0 \\ Y_{p_3,c,3^*1} = 0 \\ X_{p_3}^{*<3>} + X_{p_3}^{*<3>} = 0 \] (3) and the corresponding invariant: \[ 4 |M(p_2)| + 3 |M(p_3)| + 9 |M(p_4)| = \text{const} \] \[ Z_{p_2,c,1^{*<1>}} + Z_{p_3,c,2^{*3}} = 0 \\ Z_{p_3,c,2^{*<3>}} + Z_{p_4,c,2^{*1}} = 0 \] (4) and the corresponding invariant: \[ 3 |M(p_2)|_{(c, 1)} + |M(p_3)|_{(c, 2)} + 3 |M(p_4)|_{(c, 2)} = \text{const} \] 4. CALCULATION OF S-INVARIANTS FOR THE PHILOSOPHER SYSTEM To illustrate the simplicity and the power of the algorithm described in this paper we consider the philosopher system, consisting of five philosophers who alternately think and eat. There are only five chopsticks on a circular table, one between each pair of neighbouring philosophers. Each philosopher needs to use the two chopsticks adjacent to him when he eats. Obviously two neighbors cannot eat at the same time. The philosopher system can be described by the net shown in Figure 3. The model has fifteen places and ten transitions, all indexed on variable \( i \), \( i \in [1, 5] \), in the following description: - \( T_i \): the "thinking" place. If \( T_i \) holds a token, the \( i \)-th philosopher is thinking or waiting for chopsticks. - \( E_i \): the "eating" place. If \( E_i \) holds a token, the \( i \)-th philosopher is eating. - \( F_i \): the "free chopsticks" place. If \( F_i \) holds a token, the \( i \)-th chopstick is free. - \( G_i \): the "getting chopsticks" transition. - \( R_i \): the "releasing chopsticks" transition. For this Petri Net, we can use Algorithm 1 to get the following ten S-invariants, which are linearly independent and form a basis: 1. \( M(T_1) + M(E_1) = 1 \) 2. \( M(T_2) + M(E_2) = 1 \) 3. \( M(T_3) + M(E_3) = 1 \) 4. \( M(T_4) + M(E_4) = 1 \) 5. \( M(T_5) + M(E_5) = 1 \) 6. \( M(E_5) + M(E_1) + M(F_1) = 1 \) 7. 
\( M(E_1) + M(E_2) + M(F_2) = 1 \) 8. \( M(E_2) + M(E_3) + M(F_3) = 1 \) 9. \( M(E_3) + M(E_4) + M(F_4) = 1 \) 10. \( M(E_4) + M(E_5) + M(F_5) = 1 \) If we fold this PN, we get a model of the system described by the HLPN in Figure 4. In this model the place $T$ stands for the set $\{T_i\}$ and $F$ for $\{F_i\}$, the transition $G$ stands for the set $\{G_i\}$, and the transition $R$ represents the set $\{R_i\}$ with $i \in [1,5]$. From the incidence matrix $C$ of this HLPN, we obtain the token flow paths shown in Figure 5. From Proposition 1 and Proposition 2, we compute the following ten $S$-invariants for the HLPN system without unfolding the net. They are equivalent to those of the P/T system. (1) $M(T)(p_1,1) + M(E)(p_1,1) = 1$ (2) $M(T)(p_2,1) + M(E)(p_2,1) = 1$ (3) $M(T)(p_3,1) + M(E)(p_3,1) = 1$ (4) $M(T)(p_4,1) + M(E)(p_4,1) = 1$ (5) $M(T)(p_5,1) + M(E)(p_5,1) = 1$ (6) $M(E)(f_1,2) + M(E)(f_1,3) + M(F)(f_1,1) = 1$ (7) $M(E)(f_2,2) + M(E)(f_2,3) + M(F)(f_2,1) = 1$ (8) $M(E)(f_3,2) + M(E)(f_3,3) + M(F)(f_3,1) = 1$ (9) $M(E)(f_4,2) + M(E)(f_4,3) + M(F)(f_4,1) = 1$ (10) $M(E)(f_5,2) + M(E)(f_5,3) + M(F)(f_5,1) = 1$ The ten $S$-invariants presented above can be re-written in a compact form as $$M(T)(p_i,1) + M(E)(p_i,1) = 1 \quad (1)$$ $$M(E)(f_i,2) + M(E)(f_i,3) + M(F)(f_i,1) = 1 \quad (2)$$ for $i \in [1,5]$. 5. CONCLUSIONS In this paper we have introduced the concepts of a compound token and of a token flow path, and based upon these two concepts we have presented a simple and efficient algorithm for the computation of $S$-invariants of High-Level Petri Nets. Using our formalism the $S$-invariants have a simple interpretation. A software package based upon this algorithm has been implemented to prove the viability and simplicity of the algorithm. 
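As a runnable companion to Sections 3 and 4, here is a minimal sketch of the Farkas-style elimination behind Algorithm 1, applied to the philosopher P/T net. The place/transition ordering and the convention that $G_i$ takes chopsticks $F_i$ and $F_{i+1}$ (indices mod 5) are assumptions made for this illustration.

```python
# Minimal-support non-negative S-invariants via pairwise row combination
# (the scheme of Algorithm 1), then checked on the philosopher net.
from functools import reduce
from math import gcd

def farkas(C):
    """All minimal-support non-negative S-invariants of incidence matrix C
    (rows = places, columns = transitions), as integer tuples."""
    n, m = len(C), len(C[0])
    # start with [D | A] = [I_n | C], one row per place
    rows = {tuple(1 if j == i else 0 for j in range(n)) + tuple(C[i])
            for i in range(n)}
    for t in range(m):
        col = n + t
        keep = {r for r in rows if r[col] == 0}
        for rp in [r for r in rows if r[col] > 0]:
            for rn in [r for r in rows if r[col] < 0]:
                # non-negative combination annulling column t
                new = [(-rn[col]) * a + rp[col] * b for a, b in zip(rp, rn)]
                g = reduce(gcd, (abs(x) for x in new), 0) or 1
                keep.add(tuple(x // g for x in new))
        rows = keep
    invs = {r[:n] for r in rows if any(r[:n])}
    def supp(v):
        return frozenset(i for i, x in enumerate(v) if x)
    # keep only invariants whose support contains no smaller support
    return sorted(v for v in invs
                  if not any(supp(w) < supp(v) for w in invs))

# Philosopher net: places T1..T5, E1..E5, F1..F5; transitions G1..G5, R1..R5.
C = [[0] * 10 for _ in range(15)]
for i in range(5):
    T, E, F, Fnext = i, 5 + i, 10 + i, 10 + (i + 1) % 5
    G, R = i, 5 + i
    for p, sign in ((T, -1), (F, -1), (Fnext, -1), (E, +1)):
        C[p][G] = sign       # G_i consumes T_i, F_i, F_{i+1}; produces E_i
        C[p][R] = -sign      # R_i is the reverse

invariants = farkas(C)
assert len(invariants) == 10   # the ten basis invariants of Section 4
```

Under the stated convention, the result contains the supports $\{T_i, E_i\}$ and $\{E_{i-1}, E_i, F_i\}$, matching the ten invariants listed for the philosopher system.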
REFERENCES

Figure 1. An example of a High Level Petri Net (Figure 1a: the net; Figure 1b: its incidence matrix $C$)

Figure 2. The token flow paths for the HLPN in Figure 1 (Figure 2a: path $f$; Figure 2b: path $g$)

Figure 3. The Petri Net model of the philosopher system

Figure 4. The High Level Petri Net model of the philosopher system

Figure 5. Two token flow paths of the philosopher system (Figure 5a and Figure 5b), with the incidence matrix: <table> <thead> <tr> <th>C</th> <th>G</th> <th>R</th> </tr> </thead> <tbody> <tr> <td>T</td> <td>(-p_i)</td> <td>(p_i)</td> </tr> <tr> <td>E</td> <td>(&lt;p_i, f_i, f_{i\Theta 1}&gt;)</td> <td>(&lt;p_i, f_i, f_{i\Theta 1}&gt;)</td> </tr> <tr> <td>F</td> <td>(f_i + f_{i\Theta 1})</td> <td>(f_i + f_{i\Theta 1})</td> </tr> </tbody> </table>
Synchronizing Behavioural Mismatch in Software Composition Carlos Canal, Pascal Poizat, and Gwen Salaün 1 University of Málaga, Department of Computer Science Campus de Teatinos, 29071 Málaga, Spain canal@lcc.uma.es 2 IBISC FRE 2873 CNRS – University of Évry Val d'Essonne, Genopole Tour Évry 2, 523 place des terrasses de l'Agora, 91000 Évry, France Pascal.Poizat@ibisc.univ-evry.fr 3 VASY project, INRIA Rhône-Alpes, France 655 avenue de l'Europe, 38330 Montbonnot Saint-Martin, France Gwen.Salaun@inrialpes.fr Abstract. Software Adaptation is a crucial issue for the development of a real market of components promoting software reuse. Recent work in this field has addressed several problems related to interface and behavioural mismatch. In this paper, we present our proposal for software adaptation, which builds on previous work, overcoming some of its limitations, and makes a significant advance towards solving pending issues. Our approach is based on the use of synchronous vectors and regular expressions for governing adaptation rules, and is supported by dedicated algorithms and tools. 1 Introduction Component-Based Software Engineering (CBSE) focuses on composition and reuse, aiming to develop a market of software components in which customers select the most appropriate software piece depending on its technical specification [6]. The development of such a market has always been one of the major concerns of Software Engineering, but it has never become a reality. The reason is that we cannot expect any given software component to perfectly match the needs of the system into which it is being integrated. Software is never reused "as it is", especially in the case of legacy code, and a certain degree of adaptation is always required [16]. To deal with these problems, a new discipline, Software Adaptation, is emerging; it is concerned with providing techniques to arrange already developed pieces of software in order to reuse them in new systems [7].
Software Adaptation promotes the use of adaptors—specific computational entities guaranteeing that components will interact in the right way. * This work has been partly funded by the European Network of Excellence on AOSD, AOSD-Europe IST-2-004349-NOE. CBSE postulates that a component must be reusable from its interface [20], which in fact constitutes its full technical specification. Hence, we have to provide components with a specification that helps in the process of adapting and reusing them. The intended adaptation will then take the form of a mapping among the interface descriptions of the components involved. The characteristics and expressiveness of the language used for interface description determine the degree of interoperability we can achieve using it, and the kind of problems that can be solved. We can distinguish between several levels of interoperability, and accordingly of interface description [8]: the signature level (service names and types), the behavioural level (interaction protocols), the semantic level (functional specification of what the component actually does) and the service level (non-functional properties such as quality of service). At each of these levels, mismatch can occur [8] and would have to be corrected. Currently, industrial component models only tackle the signature level, with Interface Description Languages (IDLs). Although (automatic) adaptation at the semantic and service levels still remains uncertain, several approaches have been presented for extending component interfaces with behaviour, thus resulting in what we may call a Behavioural IDL (BIDL) (e.g., WS-BPEL [1] for web services). In this paper, we focus on mismatch appearing at the behavioural level. Intuitively, it means that two (or more) components cannot, as they are, interact until they reach correct termination states.
To compensate such behavioural incompatibilities, we propose first to use synchronous vectors as the mapping language to make explicit communications on different message names. Second, we extend our notation to enable writing regular expressions of vectors. Such a mapping notation is convenient to describe in an abstract way more advanced adaptation scenarios such as reordering of messages. Figure 1 gives a graphical overview of our method for adaptation. **Fig. 1.** Overview of our approach for adaptation of incompatible components The remainder of the paper is organized as follows. Section 2 formally introduces our component interface model, and defines interface mismatch by means of synchronous products. Section 3 presents our approach to component adaptation, which combines the points in favour of different adaptation approaches, while trying to overcome their limitations. Our proposals for behavioural adaptation with or without message reordering are supported by dedicated algorithms, and in both cases the adaptation mappings rely on synchronous vectors. Next, Section 4 extends our initial mapping notation with regular expressions, enabling complex policies for applying the adaptation vectors. In Section 5, we survey the more advanced proposals for software adaptation, and compare ours to them. Finally, Section 6 draws up the main conclusions of this work and sketches some future tasks that will be accomplished to extend its results. 2 Interfaces and Mismatch 2.1 Component Interfaces Component interfaces are given using a signature and a behavioural interface. **Definition 1 (Signature).** A signature Σ is a set of operation profiles. This set is a disjoint union of provided operations and required operations. An operation profile is simply the name of an operation, together with its argument types, its return type and the exceptions it raises. This definition naturally corresponds to the signature definitions in component-based models such as CCM or J2EE.
Such signatures are defined using an IDL. For the sake of simplicity in the presentation, in this paper we do not deal with operation arguments, return values or exceptions. We also take into account behavioural interfaces through the use of Labelled Transition Systems (LTSs). **Definition 2 (LTS).** A Labelled Transition System is a tuple \((A, S, I, F, T)\) where: \(A\) is an alphabet (set of events), \(S\) is a set of states, \(I \in S\) is the initial state, \(F \subseteq S\) are final states, and \(T \subseteq S \times A \times S\) is the transition function. The alphabet of the LTS is built on the signature. This means that for each provided operation \(p\) in the signature, there is an element \(p?\) in the alphabet, and for each required operation \(r\), an element \(r!\). As in CCS, \((a, \bar{a})\) denote complementary actions — i.e., if \(a\) is \(p?\) (respectively \(r!\)), then \(\bar{a}\) is \(p!\) (respectively \(r?\)). LTSs are adequate as far as user-friendliness and development of formal algorithms are concerned. However, higher-level behavioural languages such as process algebras can be used to define behavioural interfaces in a more concise way. In this paper, we use as a BIDL the part of the CCS notation restricted to sequential processes which can be translated into LTS models: \(P ::= 0 \mid a?.P \mid a!.P \mid P_1 + P_2 \mid \mathcal{A}\), where \(0\) denotes a do-nothing process, \(a?.P\) a process which receives \(a\) and then behaves as \(P\), \(a!.P\) a process which sends \(a\) and then behaves as \(P\), \(P_1 + P_2\) a process which may act either as \(P_1\) or \(P_2\), and \(\mathcal{A}\) denotes the call to a process defined by an agent definition equation \(A = P\). As process algebras do not provide a way to define initial and final states, we extend this CCS notation to tag processes with initial (i) and final (f) attributes. Finally, 0 is often omitted in processes (e.g., a!.b![f] is used for a!.b!.0[f]).
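Definition 2 admits a direct transcription into code. The following sketch is our illustration, not the authors' tooling; it represents an LTS as a named tuple, with events written `a!`/`a?`, and encodes the small process a!.b![f]:

```python
# Sketch (ours, not the paper's tooling): an LTS per Definition 2,
# as a tuple (A, S, I, F, T), with events written "a!" / "a?".
from typing import NamedTuple

class LTS(NamedTuple):
    A: frozenset   # alphabet (set of events)
    S: frozenset   # states
    I: object      # initial state
    F: frozenset   # final states
    T: frozenset   # transitions (s, label, s')

# a!.b![f] -- send a, send b, then terminate in a final state
example = LTS(
    A=frozenset({"a!", "b!"}),
    S=frozenset({0, 1, 2}),
    I=0,
    F=frozenset({2}),
    T=frozenset({(0, "a!", 1), (1, "b!", 2)}),
)
print(example.I in example.S)   # → True
```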
**Example 1.** Consider a client that repetitively sends a query and its argument, and then waits for an acknowledgement, quitting with an end!, and a server repetitively waiting for a query and a value, then returning a given service: \[ \begin{align*} \text{Client[i]} &= \text{query}!. \text{arg}! . \text{ack}? . \text{Client} + \text{end}![f] \\ \text{Server[i,f]} &= \text{query}? . \text{value}? . \text{service}! . \text{Server} \end{align*} \] The LTSs for these two components are given below, with initial and final states respectively marked by input arrows and black circles. Fig. 2. A simple client/server system ### 2.2 Behavioural Mismatch Various definitions of behavioural mismatch have been proposed in the field of software adaptation and software architecture analysis [8]. We build on the most commonly accepted one, namely deadlock-freedom. The first step is to define the semantics of a system made up of several identified components. This semantics can be given, following work by Arnold [2], using the synchronous product.
**Definition 3 (Synchronous Product).** The synchronous product of \( n \) LTSs \( L_i = (A_i, S_i, I_i, F_i, T_i) \), \( i \in 1..n \), is the LTS \( (A, S, I, F, T) \) such that: - \( A \subseteq \Pi_{i=1}^n A_i \), \( S \subseteq \Pi_{i=1}^n S_i \), \( I = (I_1, \ldots, I_n) \), - \( F \subseteq \{ (s_1, \ldots, s_n) \in S \mid \bigwedge_{i=1}^n s_i \in F_i \} \), - \( T \) is defined using the following rule: \[ \forall (s_1, \ldots, s_n) \in S, \forall i, j \in 1..n, i < j \text{ such that } \exists (s_i, a, s'_i) \in T_i \text{ and } \exists (s_j, \bar{a}, s'_j) \in T_j, \text{ then } (x_1, \ldots, x_n) \in S \text{ and } ((s_1, \ldots, s_n), (l_1, \ldots, l_n), (x_1, \ldots, x_n)) \in T, \text{ where } \forall k \in 1..n, \] \[ l_k = \begin{cases} a & \text{if } k = i \\ \bar{a} & \text{if } k = j \\ \varepsilon & \text{otherwise} \end{cases} \qquad x_k = \begin{cases} s'_i & \text{if } k = i \\ s'_j & \text{if } k = j \\ s_k & \text{otherwise} \end{cases} \] We are now able to characterize behavioural mismatch by means of deadlock. **Definition 4 (Deadlock State).** Let \( L = (A, S, I, F, T) \) be an LTS. A state \( s \) is a deadlock state for \( L \), noted \( \text{dead}(s) \), iff it is in \( S \), not in \( F \), and has no outgoing transitions: \( s \in S \land s \notin F \land \forall l \in A, \forall s' \in S,\ (s, l, s') \notin T \). **Definition 5 (Deadlock Mismatch).** An LTS \( L = (A, S, I, F, T) \) presents a deadlock mismatch if there is a state \( s \) in \( S \) such that \( \text{dead}(s) \). To check if a system made up of several components presents behavioural mismatch, its synchronous product is computed and then Definition 5 is used. **Example 2.** Taking Example 1, we obtain the following synchronous product: **Fig. 3.** Synchronous product for the client/server system in Figure 2 Note that the deadlock is caused by (i) the client required service `end!`, which has no counterpart in the server, and (ii) the name mismatch between the client required service `arg!` and the server provided service `value?`. We may now define what a correct adaptor for a system is. An adaptor is given by an LTS which, put into a non-deadlock-free system, yields a deadlock-free one. For this to work, the adaptor has to preempt all the component communications. Therefore, prior to the adaptation process, component service names may have to be renamed by prefixing them with the component name, e.g., `c:service!`. The product we have defined here is common in the community and hence is supported by tools such as the CADP toolbox [9]. Our deadlock definition, however, is slightly different from the one used in these tools, since it has to distinguish between success (deadlock in a final state) and failure (deadlock in a non-final state). Mismatch detection can be automatically checked by CADP once specific loop transitions labelled with `accept` have been added over final states within the component interfaces. Then the EXP.OPEN tool [13] of CADP is used to perform a full matching product between the component interfaces. ## 3 Adaptation based on Synchronous Vectors ### 3.1 Synchronizing with Vectors The first thing to solve in adaptation is impossible communication due to different event/message names. Our idea is to use synchronous vectors as a way to denote a morphism between event names in different components. Vectors generalize the synchronous product by expressing not only synchronization between processes on the same event names ($a$ and $\bar{a}$ in Definition 3), but more general correspondences between the events of the processes involved.
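Before moving to vectors, the mismatch check of Definitions 3–5 can be made concrete. The following sketch is our illustration (the paper itself relies on CADP): it computes a binary synchronous product and its deadlock states for the client/server system of Example 1.

```python
# Sketch (ours, not the paper's CADP-based tooling): binary synchronous
# product (Definition 3) and deadlock detection (Definitions 4-5) for
# the client/server system of Example 1.  An LTS is (A, S, I, F, T).
def comp(a):                       # complementary action: a! <-> a?
    return a[:-1] + ("?" if a.endswith("!") else "!")

def sync_product(l1, l2):
    A1, S1, I1, F1, T1 = l1
    A2, S2, I2, F2, T2 = l2
    states, trans = {(I1, I2)}, set()
    todo = [(I1, I2)]
    while todo:                    # explore reachable product states
        s1, s2 = todo.pop()
        for (p, a, q) in T1:
            if p != s1:
                continue
            for (r, b, t) in T2:
                if r == s2 and b == comp(a):
                    if (q, t) not in states:
                        states.add((q, t))
                        todo.append((q, t))
                    trans.add(((s1, s2), (a, b), (q, t)))
    finals = {(s1, s2) for (s1, s2) in states if s1 in F1 and s2 in F2}
    return states, (I1, I2), finals, trans

def deadlocks(states, finals, trans):
    # dead(s): reachable, not final, no outgoing transition
    has_out = {s for (s, _, _) in trans}
    return {s for s in states if s not in finals and s not in has_out}

# Client[i] = query!.arg!.ack?.Client + end![f]
client = (None, {0, 1, 2, 3}, 0, {3},
          {(0, "query!", 1), (1, "arg!", 2), (2, "ack?", 0), (0, "end!", 3)})
# Server[i,f] = query?.value?.service!.Server
server = (None, {0, 1, 2}, 0, {0},
          {(0, "query?", 1), (1, "value?", 2), (2, "service!", 0)})

S, I, F, T = sync_product(client, server)
print(deadlocks(S, F, T))   # → {(1, 1)}
```

The only deadlock found is the state reached after the query!/query? synchronization, where the client offers arg! but the server expects value?, matching Example 2.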
**Definition 6 (Vector).** A synchronous vector (or vector for short) for a set of $Id$-indexed components $L_i = (A_i, S_i, I_i, F_i, T_i)$, $i \in Id$, is a tuple $(e_i)$ with $e_i \in A_i \cup \{\varepsilon\}$, $\varepsilon$ meaning that a component does not participate in a synchronization. Note that vectors are simple correspondences between events. Extensions can easily be defined to consider relations between events with data. **Definition 7 (Synchronous Vector Product).** The synchronous vector product of \( n \) LTSs \( L_i = (A_i, S_i, I_i, F_i, T_i), i \in 1..n \), with a set of vectors \( V \), is the LTS \((A, S, I, F, T)\), denoted by \( \Pi(L_i, V) \), such that: - \( A \subseteq \Pi_{i\in1..n} A_i \), \( S \subseteq \Pi_{i\in1..n} S_i \), \( I = (I_1, \ldots, I_n) \), - \( F \subseteq \{(s_1, \ldots, s_n) \in S \mid \bigwedge_{i\in1..n} s_i \in F_i \} \), - \( T \) is defined using the following rule: \[ ((s_1, \ldots, s_n), v, (s'_1, \ldots, s'_n)) \in T \text{ if } (s_1, \ldots, s_n) \in S \text{ and } \exists v = (l_1, \ldots, l_n) \in V \text{ such that, } \forall l_i \in v,\ s'_i = s_i \text{ if } l_i = \varepsilon, \text{ and } (s_i, l_i, s'_i) \in T_i \text{ otherwise.} \] 3.2 Behavioural Adaptation without Reordering We first address adaptation where only event-name mismatch is taken into account, that is, impossible communications due to different message names. Our algorithm takes as input the Id-indexed set of component LTSs \( L_i \) of the system and a mapping which is a set of synchronous vectors \( V \). 1. compute the product \( P = (A_P, S_P, I_P, F_P, T_P) = \Pi(L_i, V) \) 2. obtain \( P_{\text{rest}} = (A_{P_{\text{rest}}}, S_{P_{\text{rest}}}, I_{P_{\text{rest}}}, F_{P_{\text{rest}}}, T_{P_{\text{rest}}}) \) from \( P \), recursively removing transitions and states yielding deadlocks: find a state \( s \) such that \( \text{dead}(s) \), remove \( s \) and any transition \( t \) with target \( s \), and repeat until there is no more such \( s \) in the LTS. 3.
from \( P_{\text{rest}} \), build the adaptor \( A = (A_{P_{\text{rest}}}, S_{P_{\text{rest}}} \cup S_{\text{add}}, I_{P_{\text{rest}}}, F_{P_{\text{rest}}}, T_{A}) \) where \( S_{\text{add}} \) and \( T_{A} \) are defined as follows. For each \( t = (s = (s_1, \ldots, s_n), (l_1, \ldots, l_n), s' = (s'_1, \ldots, s'_n)) \) in \( T_{P_{\text{rest}}} \), let \( L_{\text{rec}} = \{l! \mid l! \in (l_1, \ldots, l_n)\} \) and \( L_{\text{em}} = \{l? \mid l? \in (l_1, \ldots, l_n)\} \). Let then \( \text{Seq}_{\text{rec}} \) be the set of all permutations over \( L_{\text{rec}} \) and \( \text{Seq}_{\text{em}} \) the set of all permutations over \( L_{\text{em}} \). For each couple \((R, E)\) in \( \text{Seq}_{\text{rec}} \times \text{Seq}_{\text{em}} \), with \( R = (r_1, \ldots, r_{\text{rec}}) \), \( E = (e_1, \ldots, e_{\text{em}}) \) and \( \text{seq} = (r_1, \ldots, r_{\text{rec}}, e_1, \ldots, e_{\text{em}}) \), construct the transition sequence \[ s = q_0 \xrightarrow{\text{seq}[1]} q_1 \ldots q_k \xrightarrow{\text{seq}[k+1]} q_{k+1} \ldots q_{n-1} \xrightarrow{\text{seq}[n]} s' = q_n \] adding each \( q_{k+1} \) to \( S_{\text{add}} \) and each \( q_k \xrightarrow{\text{seq}[k+1]} q_{k+1} \) \((k \in 0..n-1)\) to \( T_{A} \). This algorithm builds the most general adaptor, in the sense that it simulates any other adaptor for the mismatching system. Its complexity lies mainly in the synchronous product construction, \( O(|S|^n) \), where \( S \) is the largest set of states. 3.3 Behavioural Adaptation with Reordering Let us now extend the domain of adaptation problems we deal with. The goal is to also address behavioural mismatch with reordering, that is, the incompatible ordering of the events exchanged. Indeed, our behavioural adaptation proposal above would yield an empty adaptor in presence of such behavioural mismatch, concluding that adaptation is not possible. In this case, the adaptation process may try to reorder protocol events in-between the components.
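The deadlock-pruning step (step 2) of the Section 3.2 algorithm can be sketched directly. The helper names below are ours, not the paper's:

```python
# Sketch of step 2 of the Section 3.2 algorithm (names are ours):
# recursively remove deadlock states and the transitions leading to them.
def prune_deadlocks(states, init, finals, trans):
    states, trans = set(states), set(trans)
    while True:
        has_out = {s for (s, _, _) in trans}
        dead = {s for s in states if s not in finals and s not in has_out}
        if not dead:
            return states, init, finals, trans
        states -= dead                                    # drop dead states
        trans = {(s, l, t) for (s, l, t) in trans if t not in dead}

# tiny usage example: state 1 is dead, so it and (0,'a',1) are removed
s, i, f, t = prune_deadlocks({0, 1, 2}, 0, {2}, {(0, "a", 1), (0, "b", 2)})
print(s, t)   # → {0, 2} {(0, 'b', 2)}
```

Note that removing a state may in turn render its predecessors dead, which is why the removal loops to a fixed point; if the initial state itself ends up dead, the pruned adaptor is empty, as the paper observes for non-adaptable systems.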
To this purpose, we present a second approach which complements the first one. However, it does not replace it, as the process may not agree on message reordering. This behavioural adaptation approach is based on previous works dedicated to the analysis of component queue boundedness [14]. In order to accommodate behavioural mismatch, the events received by the adaptor are de-synchronized from their emission. Our algorithm can be expressed as a translation of the problem into Petri nets [15]. The main advantage of such an approach is that it is equipped with efficient tools. We first proceed by constructing a Petri net representation of the assumptions the components make on their environment (by mirroring their behavioural interfaces), and then build causal dependences between the events received and sent by the adaptor according to the mapping, given under the form of synchronous vectors. This allows us to build an adaptor which accommodates behavioural mismatch both with and without reordering. 1. for each component $i$ with LTS $L_i$, for each state $s_j \in S_i$, add a place $\text{Control}-i-s_j$ 2. for each component $i$ with initial state $I_i$, put a token in $\text{Control}-i-I_i$ 3. for each $a! \in \bigcup_i A_i$, add a place $\text{Rec}-a$ 4. for each $a?$ in $\bigcup_i A_i$, add a place $\text{Em}-a$ 5. for each component $i$ with LTS $L_i$, for each $(s, l, s') \in T_i$: - add a transition with label $l$, one arc from place $\text{Control}-i-s$ to the transition and one arc from the transition to place $\text{Control}-i-s'$ - if $l$ has the form $a!$ then add one arc from the transition to place $\text{Rec}-a$ - if $l$ has the form $a?$ then add one arc from place $\text{Em}-a$ to the transition 6.
for each vector $v = (l_1, \ldots, l_n)$ in $V$: - add a transition with label $\tau$ - for each $l_i$ with form $a!$, add one arc from place $\text{Rec}-a$ to the transition - for each $l_i$ with form $a?$, add one arc from the transition to place $\text{Em}-a$ 7. for each tuple $(f_1, \ldots, f_n)$, $f_i \in F_i$, of final states, add a (loop) accept transition with arcs from and to each of the tuple $f_i$ Once this Petri net encoding has been performed, we compute its marking graph. If it is finite (e.g., for non-recursive adaptors) then it gives a behavioural description of the adaptor. If not (it cannot be computed in finite time), then we compute the coverability graph of the net. Note that due to the overapproximation of such a graph, we add a guard $[\#\text{Em}-a \geq 1]$ ($\#\text{Em}-a$ meaning the number of tokens in place $\text{Em}-a$) on any $a!$ transition in this graph leaving a state where $\#\text{Em}-a$ is $\omega$. In both cases (marking or coverability graph), step 2 of the algorithm in Section 3.2 has to be performed on the adaptor obtained. The complexity of this algorithm lies mainly in the marking or coverability graph construction, which is exponential [17]. This algorithm is supported by tools. We have made successful experiments with the TINA tool [3] to generate marking and coverability graphs. Our approach yields graphs which can be too large for a human reader. We simplify the adaptor LTS by passing the resulting output file to CADP and performing a $\tau^*.a$ reduction on it to remove the meaningless $\tau$ transitions it contains. ### 3.4 Application We here present an example following the behavioural adaptation technique above. **Example 3.** Suppose we have a client $\text{Client}[i]=\text{req}!.\text{arg}!.\text{ack}?[f]$ and a server $\text{Server}[i,f]=\text{value}?.\text{query}?.\text{service}!.\text{Server}$ with vectors $<\text{req}!,\text{query}?>$, $<\text{arg}!,\text{value}?>$ and $<\text{ack}?,\text{service}!>$.
Such an example is typical of clients and servers which follow different standards for the order of sending subservice elements. The Petri net encoding (see Section 3.3) of the system is given in the following figure. Fig. 4. Petri net encoding of a simple client/server system Computing the marking graph, we obtain an LTS with 13 states and 16 transitions (Fig. 5, left), which once reduced yields the correct adaptor (Fig. 5, right)\(^1\). We want to stress that our adaptation proposal is an automatic process. For the sake of the presentation, we have shown here a simple example for which the adaptor could be obtained manually. However, using slightly more complex component protocols, the adaptor becomes too large to be obtained by hand. Moreover, the use of regular expressions in the next section will increase the complexity of the adaptation process and the need for such automatic techniques. ### 4 Adaptation Patterns In this section, we tackle the problem of adaptation mappings which may change over time. In the following, we present a way to express such mappings using regular expressions (regex), and then update our algorithms to deal with them. \(^1\) Note the $i$ which stands in CADP for $\tau$ transitions, and the accept loop transitions which enable the detection of correct final states. 4.1 Regular Expressions (Regex) of Vectors First, we introduce the syntax for regex. These will be used in place of the basic vector mappings we presented in Section 3. Definition 8 (Vector Regex). Given n LTSs $L_i = (A_i, S_i, I_i, F_i, T_i)$, and a set of vectors $V = \{(e_{ij})\}_{j}$ for their adaptation, with $e_{ij} \in A_i \cup \{\varepsilon\}$, a (vector) regex for these LTSs can be generated by the following syntax: $R ::= v \text{ (VECTOR)} \mid R_1.R_2 \text{ (SEQUENCE)} \mid R_1 + R_2 \text{ (CHOICE)} \mid R^* \text{ (ITERATION)}$, where $R$, $R_1$, $R_2$ are regex, and $v$ is a vector in $V$.
A graphical description, such as an LTS labelled with vectors, might be used instead of regular expressions to favour readability and user-friendliness of the notation. Example 4 (Alternating use client). Suppose we have a system formed by one client $C$ and two servers, $S$ and $A$: - $C[i] = \text{end!}[f] + \text{req!}.\text{arg!}.\text{ack?}.C,$ - $S[i,f] = \text{value?}.\text{query?}.\text{service!}.S,$ and - $A[i,f] = \text{value?}.\text{query?}.\text{service!}.A.$ One may want to express in the adaptation mapping that the client accesses the two servers alternatively, and not always the same one. For this, we use the following regex: $(v_{s1} . v_{s2} . v_{s3} . v_{a1} . v_{a2} . v_{a3})^* . v_{\text{end}}$ with - $v_{s1} = <\text{req!}, \text{query?}, \varepsilon>, \quad v_{a1} = <\text{req!}, \varepsilon, \text{query?}>,$ - $v_{s2} = <\text{arg!}, \text{value?}, \varepsilon>, \quad v_{a2} = <\text{arg!}, \varepsilon, \text{value?}>,$ - $v_{s3} = <\text{ack?}, \text{service!}, \varepsilon>, \quad v_{a3} = <\text{ack?}, \varepsilon, \text{service!}>,$ - $v_{\text{end}} = <\text{end!}, \varepsilon, \varepsilon>.$ Example 5 (Connected vs non connected modes). Suppose a client/server system where the client $C$ sends its $id$ only once at login time, while the server $S$ requires an identification every time the client does a request. Here we have: - $C[i] = \text{log!}.\text{Logged}$, with (figure omitted: the Logged process and the server protocol) The regex describing the adaptation required is now $v_0 \cdot v_2 \cdot (v_1 \cdot v_2 \cdot v_3)^*$ with $v_0 = \langle \text{log!}, \text{log?} \rangle, v_1 = \langle \varepsilon, \text{log?} \rangle, v_2 = \langle \text{req!}, \text{req?} \rangle, v_3 = \langle \text{ack?}, \text{ack!} \rangle$.
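A vector regex can be turned into an automaton by a standard Thompson-style construction in which the atoms are whole vectors. The sketch below is our illustration (names and encoding are ours, not the paper's implementation): it builds an ε-NFA for each regex operator of Definition 8 and tests word membership.

```python
# Sketch (ours): Thompson-style construction for vector regex
# (Definition 8).  A vector is a plain tuple; an automaton fragment is
# (initial, final, labelled transitions, eps transitions).
import itertools
_c = itertools.count()   # fresh-state generator

def vec(v):              # VECTOR: single labelled transition
    i, f = next(_c), next(_c)
    return i, f, {(i, v, f)}, set()

def seq(r1, r2):         # SEQUENCE R1.R2
    i1, f1, t1, e1 = r1
    i2, f2, t2, e2 = r2
    return i1, f2, t1 | t2, e1 | e2 | {(f1, i2)}

def alt(r1, r2):         # CHOICE R1+R2
    i1, f1, t1, e1 = r1
    i2, f2, t2, e2 = r2
    i, f = next(_c), next(_c)
    return i, f, t1 | t2, e1 | e2 | {(i, i1), (i, i2), (f1, f), (f2, f)}

def star(r):             # ITERATION R*
    i1, f1, t1, e1 = r
    i, f = next(_c), next(_c)
    return i, f, t1, e1 | {(i, i1), (f1, f), (i, f), (f1, i1)}

def closure(s, eps):     # eps-closure of a single state
    seen, todo = {s}, [s]
    while todo:
        x = todo.pop()
        for (a, b) in eps:
            if a == x and b not in seen:
                seen.add(b)
                todo.append(b)
    return seen

def accepts(r, word):    # does the regex LTS accept this vector sequence?
    i, f, trans, eps = r
    cur = closure(i, eps)
    for v in word:
        nxt = set()
        for s in cur:
            for (a, lbl, b) in trans:
                if a == s and lbl == v:
                    nxt |= closure(b, eps)
        cur = nxt
    return f in cur

# usage: the Example 5 shape log.(req)*, with two-component vectors
r = seq(vec(("log!", "log?")), star(vec(("req!", "req?"))))
print(accepts(r, [("log!", "log?"), ("req!", "req?")]))   # → True
```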
The only difference is that the atoms of our regex are vectors and not elements of basic alphabets. Instead of using a regex, one may also use directly the LTS that derives from such a regex (i.e., an LTS whose alphabet corresponds to vectors). We then modify the synchronous vector product to take a regex LTS in place of the vector argument. Definition 9 (Synchronous Vector Product (with regex LTS)). The synchronous vector product (with regex LTS) of \(n\) LTSs \(L_i = (A_i, S_i, I_i, F_i, T_i), i \in 1..n\), with a regex LTS \(L_R = (A_R, S_R, I_R, F_R, T_R)\), is the LTS \((A, S, I, F, T)\) such that: - \(A \subseteq A_R \times \prod_{i \in 1..n} A_i\), \(S \subseteq S_R \times \prod_{i \in 1..n} S_i\), \(I = (I_R, I_1, \ldots, I_n)\), - \(F \subseteq \{(s_r, s_1, \ldots, s_n) \in S \mid s_r \in F_R \land \bigwedge_{i \in 1..n} s_i \in F_i\}\), - \(T\) is defined using the following rule: \[ ((s_r, s_1, \ldots, s_n), (v, l_1, \ldots, l_n), (s'_r, s'_1, \ldots, s'_n)) \in T \text{ if } (s_r, s_1, \ldots, s_n) \in S \text{ and } \exists (s_r, v, s'_r) \in T_R \text{ with } v = (l_1, \ldots, l_n) \text{ such that, } \forall i \in 1..n,\ s'_i = s_i \text{ if } l_i = \varepsilon, \text{ and } (s_i, l_i, s'_i) \in T_i \text{ otherwise.} \] To apply the Section 3.2 algorithm we just have now to discard the first element of the product components, that is, from the LTS \(L = (A, S, I, F, T)\) obtain the LTS \(L' = \text{proj}(L) = (A', S', I', F', T')\) such that \(\forall X \in \{A, S, I, F\},\ X' = \{\text{cdr}(x) \mid x \in X\}\) and \(T' = \{(\text{cdr}(s), \text{cdr}(l), \text{cdr}(s')) \mid (s, l, s') \in T\}\) with \(\text{cdr}((x_0, x_1, \ldots, x_n)) = (x_1, \ldots, x_n)\). We may now modify the algorithm for behavioural mismatch without reordering as presented in Section 3.2. The new algorithm takes as input the \(Id\)-indexed set of component LTSs \(L_i\) of the system and a mapping which is a regex \(R\) (for the set of LTSs).
We just have to replace step 1 in this algorithm by: 1. compute the LTS \(L_R\) for the regex \(R\) 2. compute the product \(P_R = (A_{P_R}, S_{P_R}, I_{P_R}, F_{P_R}, T_{P_R}) = \Pi(L_R, L_i)\) 3. compute \(P = \text{proj}(P_R)\) Its complexity is \(O(|S|^{n+1})\) where \(S\) is the largest set of states. \(^2\) Note that our new algorithms would apply to the vector mappings we have defined in the previous section, just taking the set \(V = \{v_i\}\) of vectors as the regex \(v_1 + v_2 + \ldots + v_n\). 4.3 Behavioural Adaptation with Reordering Our algorithm for behavioural adaptation with reordering can also be adapted to deal with regex. 1. compute the LTS \( L_R = (A_R, S_R, I_R, F_R, T_R) \) for the regex \( R \). 2. build the Petri net encoding for the problem as presented in Section 3.3, replacing part 6 with: - for each state \( s_R \) in \( S_R \), add a place \( \text{Control}-R-s_R \) - put a token in place \( \text{Control}-R-I_R \) - for each transition \( t_R = (s_R, (l_1, \ldots, l_n), s'_R) \) in \( T_R \): - add a transition with label \( \tau \), one arc from place \( \text{Control}-R-s_R \) to the transition and one arc from the transition to place \( \text{Control}-R-s'_R \) - for each \( l_i \) which has the form \( a! \), add one arc from place \( \text{Rec}-a \) to the transition - for each \( l_i \) which has the form \( a? \), add one arc from the transition to place \( \text{Em}-a \) 3. in the building of accept transitions, add \( F_R \) to the \( F_i \) taken into account (final states now correspond to acceptance states of the regex LTS). The rest of the algorithm (computing marking or coverability graphs, and reducing them) is the same. Similarly to Section 3.3, this algorithm is exponential. 4.4 Application We here develop Example 4 above, following our behavioural adaptation technique. Example 6 (Example 4 developed).
First note that, as explained before, we rename service names to avoid name clashes; we have: \[ \begin{align*} C[i] &= c{:}\text{end!}[f] + c{:}\text{req!}.c{:}\text{arg!}.c{:}\text{ack?}.C, \\ S[i,f] &= s{:}\text{value?}.s{:}\text{query?}.s{:}\text{service!}.S, \text{ and} \\ A[i,f] &= a{:}\text{value?}.a{:}\text{query?}.a{:}\text{service!}.A. \end{align*} \] To express that the client alternatively uses the two servers we may use the following regex: \( R_1 = (v_{s1} \cdot v_{s2} \cdot v_{s3} \cdot v_{a1} \cdot v_{a2} \cdot v_{a3})^* \cdot v_{\text{end}} \) with: \[ \begin{align*} v_{s1} &= <c{:}\text{req!}, s{:}\text{query?}, \varepsilon>, & v_{a1} &= <c{:}\text{req!}, \varepsilon, a{:}\text{query?}>, \\ v_{s2} &= <c{:}\text{arg!}, s{:}\text{value?}, \varepsilon>, & v_{a2} &= <c{:}\text{arg!}, \varepsilon, a{:}\text{value?}>, \\ v_{s3} &= <c{:}\text{ack?}, s{:}\text{service!}, \varepsilon>, & v_{a3} &= <c{:}\text{ack?}, \varepsilon, a{:}\text{service!}>, \\ v_{\text{end}} &= <c{:}\text{end!}, \varepsilon, \varepsilon> \end{align*} \] Note that this mapping is probably overspecified, since it imposes a strict alternation between servers. Instead, one may choose to authorize the client to access any server it wants. Then, the mapping becomes: \[ R_2 = (v_{s1} \cdot v_{s2} \cdot v_{s3} + v_{a1} \cdot v_{a2} \cdot v_{a3})^* \cdot v_{\text{end}} \] We have run both examples and obtained (after reduction) the adaptors in Fig. 6 (left for \( R_1 \), and right for \( R_2 \)). Note that applying step 2 of the algorithm presented in Section 3.2, the state 1 and the corresponding transition are removed for \( R_1 \). Both adaptors solve the existing mismatch, making the system deadlock-free. Fig. 6. Adaptors obtained for the alternating client/server system 5 Related Work For a thorough review of the state of the art in Software Adaptation, we refer to [8]. Here, we will mention only a few works, those more closely related to our proposal.
As said in the introduction, the need for adaptation may occur at any of the levels of interoperability described there, while currently available component platforms address software adaptation only at the signature level. Hence, most of the recent proposals for software adaptation have jumped from the signature level to the specification and analysis of behavioural interfaces, promoting the use of BIDLs for describing component protocols.

The foundation for behavioural adaptation was set by Yellin and Strom. In their seminal paper [21], they formally introduced the notion of adaptor as a software entity capable of enabling the interoperation of two components with mismatching behaviour. They used finite state machines to specify component interactive behaviour, to define a relation of compatibility, and to address the task of (semi-)automatic adaptor generation.

More recently, in [18], the authors present an adaptation approach as a solution to particular synchronization problems between concurrent components, for instance one component using or being accessed by two other components. This approach is based on algorithms close to the synchronous products we use in this paper. Moreover, it can solve protocol incompatibilities by enabling one of the involved components to perform several actions before or after several synchronizations with its partners. In comparison, our proposal is more general and based on a rich notation to deal with possibly complex adaptation scenarios, whereas their approach handles only specific situations in which mismatch may happen, without using any mapping language for adaptor specification.

Taking Yellin and Strom's proposal [21] as a starting point, the work of Brogi and collaborators (BBCP) [4,5] presents a methodology for behavioural adaptation.
In their proposal, component behaviour is specified using a process algebra (a subset of the \( \pi \)-calculus), where service offering and invocation are represented by input and output actions in the calculus, respectively. The starting point of the adaptation process is a mapping that states correspondences between services of the components being adapted; this mapping can be considered as an abstract specification of the required adaptor. Then, an adaptor generation algorithm refines the specification given by the mapping into a concrete adaptor implementation, taking also into account the behavioural interfaces of the components, which ensures correct interaction between them according to the mapping. The adaptor is able to accommodate not only syntactical mismatch between service names, but also mismatch in the interaction protocols that the components follow (i.e., the partial ordering in which services are offered/invoked).

Another interesting proposal in this field is that of Inverardi and Tivoli (IT) [11]. Starting from the specification with MSCs of the components to be assembled and of the properties that the resulting system should verify (liveness and safety properties expressed as specific processes), they automatically derive the adaptor glue code for the set of components in order to obtain a property-satisfying system. The IT proposal has been extended in [12] with the use of temporal logic; coordination policies are expressed as LTL properties, and then translated into Büchi automata.

Our approach addresses system-wide adaptation (i.e., differently from BBCP, it may involve more than two components). It is based on LTS descriptions of component behaviour, instead of process algebra as in BBCP. However, we may also describe behaviours by means of a simple process algebra, and use its operational semantics to derive LTSs from it. Differently from IT, we use synchronous vectors for adaptor specification, playing a role similar to that of the mapping rules in BBCP.
With that, we are able to perform adaptation of incompatible events. With respect to behavioural adaptation, our approach can be considered as both generative and restrictive [8], since we address behavioural adaptation by enabling message reordering (as in BBCP), while we also remove incorrect behaviour (as in IT). Similarly to both approaches, our main goal is to ensure deadlock freedom. However, more complex adaptation policies and properties can be specified by means of regular expressions. Indeed, the most relevant achievement of our proposal is this use of regular expressions for imposing additional properties over mappings. In fact, the semantics of BBCP mappings can be expressed by combining their different rules (in our case, vectors) in a regular expression by means of the choice (+) operator. In contrast, our regex are much more expressive, solving the problem of BBCP underspecified mappings [4], and allowing us to take into account a new class of adaptation problems. In Table 1 we give a synthesis of the features of our approach compared to IT and BBCP.

6 Conclusion

Software Adaptation has become a crucial issue for the development of a real market of components enhancing software reuse, especially when dealing with legacy systems. Recent research work in this field — in particular that of BBCP and IT [4,5,11,12] — has addressed several problems related to signature and behavioural mismatch. In this paper, we have presented our proposal for software adaptation, based on a notation, namely regular expressions of synchronous vectors, and equipped with algorithms and tools. It builds on the previous works of BBCP and IT, overcoming some of their limitations, and making a significant advance towards solving some of the pending issues. There are still some open issues in our proposal, deserving future work. First, and differently from BBCP, we do not deal with data types, nor with one-to-many correspondences between services.
Taking data into account would require more expressive models than LTSs, such as Symbolic Transition Systems (STSs) [14]. This is a perspective for our work, since STSs allow the description of the data involved in the operations of the protocol without suffering from the state explosion problem that usually occurs in process algebraic approaches. With respect to one-to-many correspondences between services (one of the strong points in favour of the BBCP proposal), we intend to explore how regular expressions can be used for that purpose. More expressive models for mappings, such as non-regular protocols [19], could also be extended to vectors in order to obtain a larger class of properties expressible at the adaptor level (e.g., load-balancing adaptation of the access of clients to servers). Finally, we intend to implement our adaptation algorithms in ETS, an Eclipse plug-in that we have developed for experimentation with LTSs and STSs.

Acknowledgements. The authors thank Bernard Berthomieu, Frédéric Lang, and Massimo Tivoli for their interesting comments and fruitful discussions.

References
Solaris Jumpstart Basics Hal Pomeranz Deer Run Associates All material Copyright © Hal Pomeranz and Deer Run Associates, 2000-2001. All rights reserved. Hal Pomeranz * Founder/CEO * hal@deer-run.com Deer Run Associates * PO Box 20370 * Oakland, CA 94620-0370 +1 510-339-7740 * http://www.deer-run.com/ Wouldn't It Be Great If..? - Adding a new machine were as simple as setting up the hardware? - Machines automatically customized themselves to their environment? - Broken systems could be swapped out quickly and with low admin overhead? - You could upgrade your network (or do patch installs) simply by rebooting? If you run a network of more than a dozen or so Sun workstations, you're probably spending an inordinate amount of time installing systems, upgrading systems, applying patches, etc. It may seem like you don't even get done with one round of upgrades before you need to start thinking about the next one. You might be in a situation where all of your systems have slightly different configurations depending on when they were installed and who set them up. None of these situations is desirable. Wouldn't it be great if you could create a single system image for all of your machines and upgrade that image across your entire network just by rebooting? Well, you can… What is Jumpstart? - Mechanism for "one-button installs" from central server - Simultaneously supports multiple system configurations and OS versions - Extensible to allow automatic local customizations The Jumpstart mechanism was developed by Sun in order to simplify installations on large networks of mostly similar hardware. The basic idea is that system configuration information is stored on a central server. 
When new clients are added to the network, the client boots from the central configuration server and then runs an automated install program which partitions the client's disk(s), installs the operating system, and makes appropriate local configuration changes (setting network parameters, hostname, etc.). Creating the configuration server requires a fair amount of System Administration expertise, but adding clients can then be accomplished by completely untrained technicians—this is called "leveraging key employees". The Jumpstart process also allows the local administrator to create custom scripts that are run either before ("pre-install") or after ("post-install") the Jumpstart process (or both). This allows sites to create even more finely customized install routines for their particular site. More on pre- and post-install scripts in the last section of this talk. Note that Jumpstart works for both Sparc and Intel-based systems. However, Intel-based machines don't have the appropriate boot ROM code to do a network boot, so the administrator must create a "boot floppy" to be used during the initial install. A single Jumpstart server may support booting clients from multiple different hardware platforms and/or operating system revisions. For example, the author has a single Jumpstart server at home which is capable of booting machines on any OS release from 2.5.1 through Solaris 8. References for Further Reading Solaris *Advanced Installation Guide* (Chapters 6 through 11) Hal's jumpstart info page: www.deer-run.com/~hal/jumpstart/ Sun's primary reference for Jumpstart configuration is the *Solaris Advanced Installation Guide* which may be found on the Web at http://docs.sun.com/ab2/coll.214.7/ SPARCINSTALL/@Ab2PageView/6302 Note that the above URL has been broken in the middle for readability—feed it to your Web browser as a single long line. 
The *Advanced Installation Guide* spends about 125 pages talking about Jumpstart, while this presentation covers much of the same material in about 30 slides. Needless to say, some detail is lost. It's a good idea to print out the relevant chapters from the *Advanced Installation Guide* and keep them around as a reference. Electronic versions of this presentation, plus helpful scripts and other Jumpstart-related tools and information can be found at http://www.deer-run.com/~hal/jumpstart/ Hopefully this information will be updated on a regular basis, so you may want to check back periodically. Setting up a Jumpstart Server This section covers the "quick and dirty" procedure for getting your first Jumpstart server on the network. Preparing for individual client installs will be covered in the next section. When we talk about a "Jumpstart Server", we are really talking about three different processes. When a client is booting under Jumpstart, it first needs to contact a Boot Server so that the client can set its basic network parameters (via RARP and bootp) and download the network boot code (via TFTP). The client actually boots off a Solaris image stored on the Install Server— the boot image is mounted on the client via NFS. Once the client is booted, it has to look up configuration information from the Configuration Server. The client then proceeds to run the Jumpstart install program and load the OS from another directory on the Install Server. For the rest of this talk, we'll be assuming that we have a single machine which is to be the Boot Server, the Install Server, and the Configuration Server. However, on large networks it may be advisable to split these functions across multiple machines. For example, the bootp protocol is LAN-based and you generally need a Boot Server on each of your networks (though you can finesse this by using proper router configuration). 
If you are planning on Jumpstarting a large number of machines simultaneously, you may want to deploy multiple install servers, because they can quickly become saturated if multiple clients are being built in parallel.

Overview of Steps

1. Create install and configuration dirs
2. Copy OS media to install directory
3. Copy scripts to configuration directory
4. Create sysidcfg file in install dir
5. Create /tftpboot
6. Start system daemons

In order to build your Jumpstart server, you will need a copy of the OS media for each of the OS platforms you wish to support. The install server will require about 500MB of space (750MB for Solaris 8) per OS image. Note that the Boot Server must run in.rarpd, rpc.bootparamd, and in.tftpd (via inetd). The Install and Configuration Servers will need to share file systems via NFS with the client machines.

Step 1: Create Install/Config Dirs

```
# mkdir -m 755 /export/jumpstart /export/jump_5.8
# chown root:root /export/jumpstart /export/jump_5.8
# cat >>/etc/dfs/dfstab
share -F nfs -o ro,anon=0 /export/jumpstart
share -F nfs -o ro,anon=0 /export/jump_5.8
^D
# shareall
```

First the administrator must create the install directories (one per supported OS version) and the configuration directory (only one of these no matter how many OS versions you plan to support). If you're planning to split the Configuration Server and the Install Server onto separate machines, then the install directories belong on the Install Server and the configuration directory lives on the Configuration Server. The directories may be located anywhere in the file system and may be given any name. However, the author recommends that install directories be named something like `jump_<osvers>` in order to make automatic scripting easier. This will save you lots of time when creating customized pre- and post-install scripts. For the rest of this talk, we will assume the naming conventions used above.
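The `jump_<osvers>` convention pays off as soon as you script anything. The following sketch (not from the original slides; the version list and export root are illustrative) creates one install directory per OS release:

```shell
#!/bin/sh
# Sketch: create one install directory per supported OS release, following
# the jump_<osvers> naming convention. EXPORT_ROOT defaults to a scratch
# location here so the sketch can run unprivileged; a real server would
# use /export and also append the share lines to /etc/dfs/dfstab.
EXPORT_ROOT="${EXPORT_ROOT:-/tmp/export}"
for vers in 5.7 5.8; do                     # illustrative version list
    dir="$EXPORT_ROOT/jump_$vers"
    mkdir -p "$dir"
    chmod 755 "$dir"
    echo "share -F nfs -o ro,anon=0 $dir"   # line to add to dfstab
done
```

On the real server you would redirect those share lines into /etc/dfs/dfstab and run shareall, as in Step 1 above.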
Once the directories are created, they must be shared via NFS to the rest of the network. Note that the file systems are being shared read-only (that's the "ro" option above) but that all machines on the network are being given anonymous root access to the file systems (any machine which can access the Jumpstart server can mount these file systems and remotely read any file with root privilege). The administrator may choose to restrict root privilege to only the clients being installed (see the `root=` option on the `share_nfs` manual page), but managing access on a per-machine basis can be difficult across a large network. Step 2: Copy OS Media Install OS: ``` # mount -r -F hsfs /dev/dsk/c0t2d0s0 /mnt # cd /mnt/Solaris_8/Tools # ./setup_install_server /export/jump_5.8 ``` Solaris 8 has a second OS disk: ``` # cd / # eject cdrom # mount -r -F hsfs /dev/dsk/c0t2d0s0 /mnt # cd /mnt/Solaris_8/Tools # ./add_to_install_server /export/jump_5.8 ``` Once the install directory has been created, the OS must be read off CD-ROM and placed in the directory. If the volume manager is running on your Jumpstart server, then the OS media will be mounted automatically. Otherwise, the OS CD-ROM must be mounted manually using commands similar to those shown above (although the actual disk device corresponding to your CD-ROM drive may vary from system to system). Once the CD-ROM is mounted, navigate over to the Tools directory and run the setup_install_server script, specifying the install directory name you created and shared in the previous step. Note that this script takes an inordinate amount of time to run, so go get coffee while the install directory is being created. This process must be repeated for every OS version you want to support on your Jumpstart server. Solaris 8 (and probably future versions of Solaris) now comes on two CD-ROMs. 
After the first CD-ROM has been installed, mount the second CD-ROM and run the add_to_install_server script to complete the Solaris 8 install directory. Digression: Know Your Install Dir - Interesting stuff is in /export/jump_5.8/Solaris_8 - Misc/jumpstart_sample contains sample configs and scripts - Tools contains scripts for adding clients - Tools/Boot is boot image for clients - Product directory contains Solaris packages which will be installed Before we continue setting up our Jumpstart server, it's worthwhile to review the contents of the install directory that was created in the last step. Underneath your install directory will be a directory named Solaris_<vers> (e.g., Solaris_2.6 or Solaris_8). This directory is where all of the interesting components live. The Misc/jumpstart_sample directory contains a sample configuration directory which you can use to base your own client configurations on. The Sun documentation recommends just copying the entire contents of the jumpstart_sample directory to your configuration directory, but we are going to be more selective. The Tools subdirectory contains the add_install_client script which is necessary when adding client configuration information to a Boot Server (more on this later). Tools/Boot is a complete copy of the Solaris OS, and is the directory which the clients use for NFS booting when they are first being booted off the network. Note that this directory is the unpatched Solaris image off the CD-ROM, so you may want to patch this version of the OS for security (use patchadd -C <dir> to patch the install directory image). The Product directory contains all of the Solaris OS packages from the CD-ROM (thus, the install directory really contains two distinct copies of the Solaris OS)— these are the packages which will be installed onto the client machine by the Jumpstart process. 
You could add your own local packages to the Product directory if desired (more on how to specify installed packages in the next section).

Digression (cont.)

- **Patches** subdirectory contains patches to install during jumpstart
- An **MU** directory may also exist which contains maintenance update patches
- Patches are installed in order based on when they were added to the directory
- This is almost never what you want...

The install directory also contains a **Patches** subdirectory—patches stored here will be automatically installed by the Jumpstart process. However, the Jumpstart process doesn't use the same `patch_order` file functionality that the Sun Recommended Patch Clusters use to ensure patches get installed in the proper dependency order. Instead, the Jumpstart process just installs patches based on the timestamp on the patch subdirectory in the **Patches** directory (i.e., when the patch was added to the **Patches** directory). This is just terrible behavior because it means that patches are often installed in the incorrect order, which can actually cause the patching process to abort.

Note that the install directory may (or may not) contain an **MU** directory. **MU** stands for "Maintenance Update", and various releases of an operating system may include Maintenance Updates which either add support for new hardware, add functionality, and/or fix bugs in the original release (generally called the "First Customer Ship" or "FCS" release). The **MU** directory contains lots of files and some READMEs about the contents of the update, but ultimately the update is really just another collection of Sun patches which will be installed by the Jumpstart process out of one of the subdirectories of the **MU** directory.

Patches vs. Jumpstart

- Lack of `patch_order` file really hurts Jumpstart's patch install functionality
- Installing lots of patches slows down jumpstart significantly

**Recommendation:**

- Remove contents of `Patches` directory (and totally remove MU directory, if any)
- Install Sun recommended patch cluster as part of local post-install process

If you've ever installed the Sun Recommended Patch Cluster on a machine, you know that it can take longer to install patches than to install the basic OS. It's also probably the case that you will be installing the Sun Recommended Patch Cluster as part of your local custom post-install process, because the Recommended Patch Cluster is a superset of the patches that come off the OS CD-ROM. Aside from performance issues, the fact that the Jumpstart process doesn't obey any sort of `patch_order` file makes using Jumpstart to install patches not the way to go.

So, once the install directory has been created, simply go ahead and remove all patches from the `Patches` directory in the install area. It's your call whether or not to keep the Maintenance Update directory. On the one hand, the update may add useful functionality or fix critical bugs for your platform (on the other hand, the update may have no impact at all on your platform). However, installing the update will make the Jumpstart take longer on each client, even if the update doesn't apply to that client platform. Look at the documentation which comes with the update and decide for yourself whether or not you want to install it.

Exception to the Recommendation

- There's a bug in the Solaris 7 autoconf routines from CD-ROM
- As a fix, patch 106978 must be installed by the Jumpstart
- Make sure a recent version of this patch exists in your Patches directory

If you are creating a Solaris 7 install directory, however, it is critical that the Patches directory contains a recent copy of Sun Patch ID 106978.
This patch fixes bugs in the auto-configuration routines which are required for the client machines to boot fully unattended.

Step 3: Copy Scripts to Config Dir

🔹 We definitely need the `check` script:

```bash
# cd /export/jump_5.8/Solaris_8
# cd Misc/jumpstart_sample
# mkdir -p -m 755 /export/jumpstart/bin
# cp check /export/jumpstart/bin
# chmod 755 /export/jumpstart/bin/check
# chown -R root:root /export/jumpstart/bin
```

🔹 You may want to look at sample configuration files in this directory...

With the install directory created and properly configured, we now want to set up our configuration directory. We need a copy of the `check` script from the `Misc/jumpstart_sample` directory. Note that if your Jumpstart server supports multiple OS revisions, make sure to use the `check` script from the latest supported OS release. Thus, if your system boots both Solaris 7 and Solaris 8 clients, grab the `Solaris_8/Misc/jumpstart_sample/check` script.

The `jumpstart_sample` directory also contains some sample client configuration files. It may be useful to review these sample files after hearing the information in the next section of this talk.

Step 4: The sysidcfg File

```
system_locale=en_US
timezone=US/Pacific
timeserver=localhost
terminal=xterms
network_interface=PRIMARY \
    {netmask=255.255.255.0 protocol_ipv6=no}
name_service=DNS \
    {domain_name=deer-run.com name_server=192.168.1.2}
security_policy=NONE
root_password=papAq5PwY/QQM
```

The information in the sysidcfg file is used by clients to set various system parameters during the Jumpstart process and when the client reboots for the first time. The format of this file is (slightly) OS version-dependent but shouldn't vary from client to client, so it's probably easiest to locate the file at the top of the install directory (/export/jump_5.8/sysidcfg in our example).
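Since the same file serves every client, installing it is a one-time task; a sketch (the `JUMPDIR` path is illustrative, defaulting to a scratch location so it can run unprivileged) that writes the sysidcfg shown above and locks down its permissions:

```shell
#!/bin/sh
# Sketch: drop a sysidcfg file at the top of the install directory and
# restrict its permissions. JUMPDIR is illustrative; a real server would
# use something like /export/jump_5.8 and chown the file to root.
JUMPDIR="${JUMPDIR:-/tmp/jump_5.8}"
mkdir -p "$JUMPDIR"
cat > "$JUMPDIR/sysidcfg" <<'EOF'
system_locale=en_US
timezone=US/Pacific
timeserver=localhost
terminal=xterms
network_interface=PRIMARY {netmask=255.255.255.0 protocol_ipv6=no}
name_service=DNS {domain_name=deer-run.com name_server=192.168.1.2}
security_policy=NONE
root_password=papAq5PwY/QQM
EOF
chmod 400 "$JUMPDIR/sysidcfg"
```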
It is important to note that the sysidcfg file contains a copy of the client's encrypted root password entry for /etc/shadow, so the file should certainly be mode 400 and owned by root. Recall, however, that the install directory was exported with anonymous root access, so anybody on another machine could mount the install directory and read the file. Make sure that the client systems have a different root password from all of your other machines! Note also that a copy of the sysidcfg file is retained in the client's /etc directory— you probably want to delete this file after the first reboot.

Other parameters in the file include the system's default locale and time zone (consult the Advanced Installation Guide for more info). The netmask of the primary network interface can be specified, as well as name service parameters (NIS is supported as well— see the Advanced Installation Guide). Note that the protocol_ipv6 and security_policy options are supported only under Solaris 8— delete these options and the above file is appropriate for Solaris 7 (see next slide for a note on Solaris 2.6). The security_policy option is not documented in the manual pages or the Advanced Installation Guide— see instead...

Solaris 2.6 `sysidcfg` File

```
timezone=US/Pacific
timeserver=localhost
terminal=xterms
network_interface=hme0 {netmask=255.255.255.0}
name_service=NONE
root_password=papAq5PwY/QQM
```

The Solaris 2.6 `sysidcfg` file is considerably more primitive than the Solaris 7 and 8 versions (`sysidcfg` is not supported prior to Solaris 2.6, meaning Jumpstarts for Solaris 2.5.1 and earlier require some administrator intervention during the boot process). In particular, note that DNS is not supported for the `name_service` parameter—you will have to manually configure DNS after the system boots or during the post-install phase. Also note that the primary interface for the system must be explicitly specified.
This means you can't use the same `sysidcfg` file for systems that have le0 interfaces (older microSparc-based machines).

Step 5: Create /tftpboot

◆ Create the directory

```
# mkdir -m 711 /tftpboot
# chown root:root /tftpboot
```

◆ You'll also need to uncomment the right line in /etc/inet/inetd.conf

The Boot Server must have a /tftpboot directory. Aside from being the location for the network boot code for the client machines, the presence of the /tftpboot directory is what triggers the Boot Server to start the in.rarpd and rpc.bootparamd processes at boot time. However, in order for the system to service TFTP requests, the administrator must also uncomment the appropriate line in /etc/inet/inetd.conf and send a HUP signal to the running inetd process (or reboot the system). The line you're looking for in inetd.conf is

```
#tftp ... /usr/sbin/in.tftpd in.tftpd -s /tftpboot
```

Step 6: Start System Daemons

◆ A bunch of daemons to (re)start:
  - NFS daemons
  - `in.rarpd`
  - `rpc.bootparamd`
  - `inetd`

◆ Rebooting the jumpstart server is probably easiest...

Once all directories are created and configured, the administrator needs to make sure that all of the appropriate daemons are running. The Installation and Configuration Servers require that the NFS server processes (`mountd`, `nfsd`, `statd`, and `lockd`) all be running. The Boot Server must be running `in.rarpd` and `rpc.bootparamd` and have TFTP properly configured in `inetd.conf` and have the `inetd` process running. Frankly, assuming that our Jumpstart server has no other particular duties, the easiest thing is to just reboot the system. Assuming all of the directories and configuration files are in good order, all of the necessary daemons should be started automatically by the boot process.

Adding Clients

With the server properly configured, it's time to turn our attention to creating individual client configurations and updating our Jumpstart server to allow particular clients to boot.

Steps for Adding a Client

1. Create client profile
2. Create pre- and post-install scripts
3. Update /export/jumpstart/rules
4. Run check script
5. Add ethers and hosts information
6. Run add_install_client script
7. Reboot client machine

The administrator must complete the seven steps listed above before a client can be successfully Jumpstarted. However, the first four steps generally do not need to be performed for every client— the administrator can usually create a small number of client profiles and pre- and post-install scripts which will suffice for a large number of machines. More on this in the next few slides.

### Step 1: The Client Profile

```
install_type    initial_install
system_type     standalone
cluster         SUNWCprog
package         SUNWaccr
package         SUNWaccu
partitioning    explicit
filesystem      c0t3d0s0    512     /
filesystem      c0t3d0s1    2048    /var
filesystem      c0t3d0s2    all     overlap
filesystem      c0t3d0s3    2048    swap
filesystem      c0t3d0s4    1024    /usr
filesystem      c0t3d0s5    free    /local
```

The client profile file is used to describe how individual machines should be configured.
Generally speaking, the client profile describes how the system's disk(s) should be partitioned and which OS software packages should be loaded on the machine; machines with similar disk partitioning and OS configurations can use the same profile file (even if those machines are running different OS revisions).

Each profile file must begin with the `install_type` directive: `initial_install` means blow away everything on the disks and start from scratch, but `upgrade` is another possibility (see the Advanced Installation Guide for more info). Various `system_type` choices exist; `standalone` means a machine with a full OS install on the system's local disks (probably the most common configuration in these days of large disk drives).

Next the administrator specifies which OS cluster should be installed. Cluster choices are SUNWCreq (aka the Core System Support cluster), SUNWCuser (End-User cluster), SUNWCprog (Developer cluster), and SUNWCall (every OS package). Packages may then be added or deleted from the cluster by using `package` directives.

Administrators may specify the exact disk partitioning using `partitioning explicit` (as opposed to having Jumpstart do an automatic partitioning, which is usually sub-optimal). Partition sizes are in megabytes. Note that the size of the last partition is listed as `free`, which means that this partition consumes any remaining unallocated space. When configured carefully, the same partition table can work even on disks of unequal sizes!

### Step 2: Pre- and Post-Install Scripts

- Always strictly optional
- Careful! New system's disks are mounted on /a during jumpstart
- Script output automatically saved to /a/var/sadm/system/logs
- More on all of this in a later section...

The administrator may optionally create pre- and post-install scripts. The pre-install script runs before the system profile file is read and executed (i.e., before the system's disk drives are repartitioned and the new OS image loaded).
This means that pre-install scripts are an excellent place to back up various files from the original system (e.g., configuration files under /etc, log files, SSH host keys, etc.). Note that the pre-install script will have to explicitly mount (and unmount) the file systems from the system's local drives, because the local file systems won't be mounted at the time the pre-install script runs.

By the time the post-install script runs, the new local file systems will have been created and the OS will have been loaded. Note, however, that the new file systems on the system's local drives will be mounted with the local root file system at /a, so make sure the post-install script follows the proper indirection. Post-install scripts are a good place to do local system customization and restore files that were backed up by the pre-install script.

The output of the pre- and post-install scripts can be found in the /var/sadm/system/logs/{begin,finish}.log files on the new system once the Jumpstart is completed and the new system has rebooted.

### Step 3: The rules File

Format of entries:

```
<match rule>  <pre-inst>  <profile>  <post-inst>
```

Sample file:

```
hostname srvr1.deer-run.com \
    - srvr1.prof bin/make-serv.sh
network 192.168.10.0 - eng.prof bin/enghost.sh
network 192.168.128.0 && karch sun4m \
    - old-sup.prof bin/sup-tools.sh
network 192.168.128.0 \
    - sup.prof bin/sup-tools.sh
any - generic.prof bin/do-patch.sh
```

The purpose of the rules file is to associate a profile file and pre- and post-install scripts with a particular machine or group of machines. Entries are searched in order until the match criteria in the first column fit the client being booted; that rule is then executed ("first match and exit" behavior). Each rule is a single line, but lines may be continued using "\" as shown above.
Pre- and post-install scripts may be omitted by putting a "-" in the appropriate column (actually, as we'll see in the next section, even the profile can be omitted in some cases). Comments are allowed if prefixed with "#". Script names and profile file names are relative to the top of the Jumpstart configuration directory.

Match criteria cover a wide variety of different system parameters, not just the simple criteria shown above. For a complete list, see the Advanced Installation Guide. Note that logical operations (and, or, not) are supported.

The first line above shows an example of a rule for a particular machine. Generally, however, rules apply to a group of machines (a network or particular hardware type which should all be configured identically), as we see in the later rules. The third and fourth lines above take advantage of the "first match and exit" behavior to configure older microSparc machines using one profile and newer (probably UltraSparc) machines using another. However, both classes of machines use the same post-install script.

The last line is a catch-all or default entry for machines which don't match any of the previous rules. It may be dangerous to allow any random machine which connects to your network to Jumpstart from your server, so you may not wish to include a default rule in your file.

### Step 4: Run check Script

- Script should be run each time the rules file is updated
- Script checks the syntax of profiles and verifies that scripts exist
- As a side-effect, creates the rules.ok file for jumpstart process

The check script that we copied from the jumpstart_sample directory is used to validate and pre-process the rules file. The check script verifies the syntax of the profile files that are listed in all rules, and checks that the listed pre- and post-install scripts exist (but doesn't check script syntax).
More importantly perhaps, the check script creates the rules.ok file, which is the file that is actually consulted during the Jumpstart (the rules file itself is only used by the administrator and the check script). Again, if your Jumpstart server is providing configurations for several different OS revisions, make sure to use the check script from the most recent Solaris version (some profile entries in newer versions of Solaris are not backwards compatible with older check scripts).

### Step 5: Update Host Info

- `/etc/ethers` should contain the MAC address and FQDN of the host
- Also make sure the jumpstart server can resolve the name of the host
- If you're using hosts files or NIS/NIS+, list the FQDN first

Each time a new host is added to the Jumpstart network, the Boot Server needs to be updated. The ethernet (MAC) address and hostname of the machine need to be added to the Boot Server's `/etc/ethers` file. The machine's ethernet address is displayed in the Sun banner when the system boots, and is also available on running systems by running `ifconfig` (as root) and/or from the packing slip which comes with each new machine.

The Boot Server also needs to be able to resolve the machine's IP address, either from its own `hosts` file or from NIS/NIS+ or DNS, depending on how the Boot Server is configured. This will mean updates on either the Boot Server machine itself or on your name server.

Generally, it's good policy to use the fully qualified domain name (FQDN) form for all entries in the `/etc/ethers` and `/etc/inet/hosts` files, and even in the `rules` file in the Jumpstart configuration directory (for `/etc/inet/hosts`, list the FQDN first followed by the unqualified form). Being consistent throughout will save a lot of headaches down the road.
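As a concrete illustration, here's roughly what those updates look like. Everything below is hypothetical (made-up MAC address, IP address, and hostname), and the demo writes to scratch files under /tmp rather than the real /etc/ethers and /etc/inet/hosts:

```shell
# Hypothetical client values; on a real Boot Server you would append
# to /etc/ethers and /etc/inet/hosts instead of these demo files.
MAC="8:0:20:aa:bb:cc"
FQDN="sun01.deer-run.com"
IP="192.168.1.10"

ETHERS=/tmp/ethers.demo
HOSTS=/tmp/hosts.demo
rm -f $ETHERS $HOSTS

echo "$MAC $FQDN" >> $ETHERS
echo "$IP $FQDN sun01" >> $HOSTS   # FQDN first, unqualified name second

# quick sanity check that both entries landed
grep "$FQDN" $ETHERS
grep "$FQDN" $HOSTS
```

Note the hosts entry lists the FQDN first and the short name second, per the consistency advice above.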
### Step 6: add_install_client

```
# cd /export/jump_5.8/Solaris_8/Tools
# ./add_install_client \
    -c jumpsrvr:/export/jumpstart \
    -p jumpsrvr:/export/jump_5.8 \
    -s jumpsrvr:/export/jump_5.8 \
    sun01.deer-run.com sun4u
```

Once all of the client information has been updated on the Boot Server, the administrator needs to run the add_install_client script. This script is found in the Tools directory in the appropriate install directory for the version of Solaris that you want the client to run. Make sure you use the correct add_install_client script! The add_install_client script is responsible for placing the appropriate boot code and symbolic links in /tftpboot to allow the client machine to boot. add_install_client also updates the /etc/bootparams file used by rpc.bootparamd.

The `-c` flag specifies the server name and path for the Jumpstart configuration directory (the server name here would be your Configuration Server). The `-s` flag specifies the Install Server and pathname to the install directory. `-p` is the location of the sysidcfg file (don't use this option for Solaris releases prior to 2.6), which we've stored at the top of our install directory. You also need to specify the name of the machine (again, use the FQDN here) and the kernel architecture of the client (this is the output of `uname -m` on the client host).

### A Better Way

- Too much typing!
- Lots of redundant information
- OS version dependent
- How about this instead:

```
# cd /export/jumpstart/bin
# ./add_client sun01.deer-run.com sun4u 5.8
```

Frankly, add_install_client requires way too much (redundant) typing, and forces the administrator to get to the correct install directory to run the right version of the script. You'll find a simpler add_client script at the http://www.deer-run.com/~hal/jumpstart/ site. Once you've downloaded the script, edit the file and make sure the CONF_SERVER, CONF_DIR, INST_SERVER, and INST_ROOT variables are set appropriately for your server.
Note that the add_client script assumes that the install directories are $INST_ROOT/jump_<osvers> (i.e., the conventions we've been using in this talk).

### Step 7: Reboot Client

```
ok boot net - install
```

Once the Boot Server setup is completed, boot the client system as shown above. Note that the boot line is "boot <space> net <space> - <space> install". The most common error is to type "-install" as a single final argument, but then the Jumpstart won't proceed.

With the basic Jumpstart configuration procedure out of the way, it's time to look at some tips and tricks for writing pre- and post-install scripts. We'll also discuss why and how to bypass the "normal" Jumpstart installation procedure in order to make system installs more efficient.

### Testing Pre-/Post-Install Scripts

- Things don't always work as expected in the Jumpstart environment
- `boot net` (with no additional args) brings up interactive install
- Exit the `suninstall` program and you can do your script testing

One of the problems with writing pre- and post-install scripts is that it can be difficult to simulate the Jumpstart environment for testing purposes. The good news is that you don't have to. The trick is to configure the Jumpstart server as if you were preparing to boot a new Jumpstart client (set up `/etc/ethers` and `/etc/inet/hosts`, run the `add_client` script, etc.). However, when you boot the client, just use "`boot net`" without the "`- install`" flag. This will cause the client to boot over the network and start the interactive `suninstall` program. You may, however, quit out of this program on the first screen (hit `<F5>` and then `<F2>`) and end up at a shell prompt in the Jumpstart environment. You can then mount your pre- and post-install scripts via NFS from the Configuration Server and test to your heart's contentment.
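If you'd rather not tie up a client machine at all, a surprising amount of pre-/post-install logic can also be rehearsed on any Unix box by pointing the scripts at scratch directories instead of the real root and /a. Here's a minimal sketch of the backup-and-restore pairing described earlier; all paths and filenames are hypothetical stand-ins:

```shell
# Scratch directories standing in for the old system root, the new
# root (really /a during a Jumpstart), and the backup area on the
# Configuration Server. All hypothetical paths.
OLD=/tmp/js.old; NEW=/tmp/js.new; SAVE=/tmp/js.save
rm -rf $OLD $NEW $SAVE
mkdir -p $OLD/etc $NEW $SAVE
echo "nameserver 10.0.0.1" > $OLD/etc/resolv.conf

# "pre-install" phase: back up config files from the old root
(cd $OLD && tar cf $SAVE/etc.tar etc/resolv.conf)

# "post-install" phase: restore them under the new root
(cd $NEW && tar xf $SAVE/etc.tar)

cmp -s $OLD/etc/resolv.conf $NEW/etc/resolv.conf && echo "restore OK"
```

The indirection here ($NEW standing in for /a) is exactly the detail a real post-install script has to get right.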
Sometimes the most appropriate time to run a post-install script is not during the Jumpstart process at all, but rather immediately after the client system boots for the first time. This slide shows a post-install script whose only job is to create a boot script on the client system which will be triggered when the client boots for the first time. This generated script actually installs the Sun Recommended Patch Cluster, which it obtains over the network via NFS from a central server. Installing the patch cluster in the Jumpstart environment is more difficult because all of the client's local file systems are mounted on `/a`.

It may be difficult to read the code above, but the generated script ends up in `/etc/rc2.d/S74Patch` and reads as follows:

```bash
#!/sbin/sh
mount 192.168.1.1:/export/patches /mnt
cd /mnt/`uname -r`
./install_cluster -q -nosave
rm -f /etc/rc2.d/S74Patch
reboot -- -r
```

Note that we're using the IP address of the patch server since we can't be guaranteed that name service is working properly at this point. Also note that the script removes itself before it calls reboot. Nothing wrong with this behavior: the script won't actually be removed completely until all processes which have the file open are terminated.

### The Bad News

- Jumpstart installs software in a very inefficient manner
- Patches can also take a long time to install, depending on time since FCS
- Full install + recommended patch cluster can take 1.5 hours

The truly unfortunate aspect of the Jumpstart install process is that each OS package is added onto the system one at a time via the `pkgadd` process. There's a lot of overhead to `pkgadd`, not to mention the fact that the file system containing the packages has to be read via NFS from the central Install Server host. Then you're probably going to want to install at least the Recommended Patch Cluster.
Total install time can be as much as 90 minutes, which may not seem like a long time unless (a) you've got 500 machines to build in one evening, or (b) you're trying to replace a user's desktop so that they can get back to work. However, Virginia, there is a Santa Claus…

### The Good News

- You don't have to use the standard jumpstart install process
- Custom pre- and post-install scripts can be used instead

Sample rules file entry:

```
network 192.168.128.0 \
    bin/clone = bin/clone-postinst
```

The good news is that if you're clever at writing your pre- and post-install scripts, then you can completely bypass the normal Jumpstart install process. If the administrator does not specify a profile file in the rules entry for a given machine, then Jumpstart expects that the pre- and post-install scripts are completely responsible for installing the OS on the local client machine. Frankly, the really difficult part about installing systems using custom pre- and post-install scripts is partitioning the local disk properly, though there are some other cute hacks that need to be reviewed as well. The rest of this section will focus on a specific example of a custom pre-install script which rapidly copies a default OS image onto a new client.

### The clone Script

- Build one machine (by hand or with standard jumpstart)
- Make a level 0 backup of all file systems (and copy of partition table)
- clone script operates on "blank" host:
  - Copies partition table and builds file systems
  - Restores dumps from "standard" host
  - Tweaks hosts files to change identity

The clone script operates by restoring another system's image onto a new machine's local disk drives via ufsrestore. The administrator needs to somehow create a "gold standard" version of a particular platform (either via a standard Jumpstart or manually).
Once satisfied with the system configuration, the admin makes level 0 backups of that machine's file systems and copies the dump files (compressing or gzipping the files is a good idea) to a central server. The clone script will simply mount the dump files from the central server and ufsrestore them onto the client's disks. Of course, the clone script has to first partition the client's local disk appropriately and create file systems in the new partitions (more on this coming up), so part of the prep work before running the clone script is copying the partition table from the "gold" machine to the central server where the dump files reside. Once the clone script restores the dump images onto the new machine, it needs to tweak half a dozen files so that the new system comes up with a different hostname and IP address from the "gold" machine.

The clone script runs in less than half the time of the standard "package-by-package" Jumpstart install process (install speed is essentially limited only by your network bandwidth and disk speed on the target host). You can find copies of the clone script and related files at the usual http://www.deer-run.com/~hal/jumpstart/ site.

### Some Defaults to Get Started

- **What's my host name?**

```
HOSTNAME=`uname -n`
```

- **What OS version is this?**

```
OSVERS=`uname -r`
```

- **What kind of machine am I?**

```
PLATFORM=`prtconf | awk '/^SUNW,/ { print }'`
```

The clone script needs to set a bunch of defaults before getting underway. Much information about the client machine can be derived from the `uname` command once the client has booted up in the Jumpstart environment. In particular, `uname -i` usually returns the system's hardware type; this is a string like `SUNW,Ultra-5_10`. However, on some non-Sun hardware `uname -i` is not always completely reliable. A more cumbersome (but also more portable) method is to pull this information out of `prtconf` as shown above.
To get an idea of what's going on, it's helpful to look at the output of `prtconf`:

```
% prtconf
System Configuration:  Sun Microsystems  sun4u
Memory size: 384 Megabytes
System Peripherals (Software Nodes):

SUNW,Ultra-5_10
[... additional lines deleted ...]
```

The `awk` line simply matches the line which starts with "SUNW," and prints it.

### More Defaults

What's my disk device?

```
PRIM_DISK=`ls /dev/rdsk | \
    head -1 | sed 's/..$//'`
```

What kind of disk is it?

```
DISK_NAME=`format -d $PRIM_DISK \
    -f $CROOT/lib/format.cmd | \
    awk '/^</ { print $1 }' | sed 's/<//'`
```

On single-disk systems, determining the system's primary disk device is straightforward. Our script gets a listing of `/dev/rdsk` and simply snatches off the first entry, usually something like `c0t3d0s0`. If you want the disk device and not a disk slice, then you need to drop the last two characters (`c0t3d0`). On multi-disk systems, the disk which is sorted first by `ls` (usually the disk with the lowest SCSI target ID) is not guaranteed to be the system's boot disk, so proceed with caution!

Finding out the manufacturer's name for this disk turns out to be tricky. The only place this information is available is from the `format` command:

```
# format -d c0t0d0
[... lines deleted ...]
format> current
Current Disk = c0t0d0
<ST39120A cyl 17660 alt 2 hd 16 sec 63>
/pci@1f,0/pci@1,1/ide@3/dad@0,0
format> quit
#
```

The string we're trying to get at is "ST39120A", but `format` likes to be run interactively rather than in a script. The work-around is to create a "command file" and feed it to `format` with the `-f` option. The command file contains the `current` and `quit` commands we would normally enter in an interactive session. We feed the output to `awk` to pull out the string we need.

### What They Didn't Teach You...
```bash
echo "Writing partition table (VTOC) to disk:"
if [ -f $CROOT/disks/$PART_FILE ]; then
    fmthard -s $CROOT/disks/$PART_FILE \
        /dev/rdsk/${PRIM_DISK}s2
else
    echo "No $CROOT/disks/$PART_FILE"
    exit 255
fi
```

If you've been administering Solaris machines for a long time, you may think that the way you write partition tables to drives is with the `format` command. However, as we discussed on the last slide, `format` is a pain to run from inside a non-interactive script. It turns out that Solaris also supplies the `fmthard` command for non-interactively writing partition tables (formally speaking, that's the disk's VTOC or *volume table of contents*) based on a data file.

The data file format used by `fmthard` is tricky, but Solaris also supplies the `prtvtoc` command which can dump out the VTOC from an existing disk drive in the format used by `fmthard`. So, as far as the `clone` script goes, the administrator needs to partition the "gold" system or some other machine with the same type of disk drive as the target platform and then run `prtvtoc` to dump that partition table into a file. The partition file should then be stored on the same server that the dump images are kept on. The name for the partition file for the `clone` script is (by default) the manufacturer's disk name which we extracted via `format` on the previous slide.

Note that disk geometry (cylinders, tracks, heads, etc.) varies widely from manufacturer to manufacturer and from disk to disk. You almost certainly can't use the same VTOC on a 9GB disk from two different manufacturers, so make sure you run the `prtvtoc` command on a system which has a matching disk as compared to your target machine. Note that Sun regularly changes disk drive vendors, so three Ultra5s bought at three different times may have three completely different disks.
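Since `format` only talks to real disks, the parsing half of the DISK_NAME pipeline can be exercised against captured output. The sample below is just the format transcript from the previous slide saved to a scratch file:

```shell
# Captured sample of "format" output (from the slide above), so the
# awk/sed pipeline can be tested without a real disk attached.
cat > /tmp/format.out <<'EOF'
Current Disk = c0t0d0
<ST39120A cyl 17660 alt 2 hd 16 sec 63>
/pci@1f,0/pci@1,1/ide@3/dad@0,0
EOF

# same pipeline as the clone script, minus the format invocation
DISK_NAME=`awk '/^</ { print $1 }' /tmp/format.out | sed 's/<//'`
echo "$DISK_NAME"   # prints ST39120A
```

This kind of "replay a captured transcript" testing is handy for any of the parsing snippets in this section.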
### Building the Root File System

```bash
echo "Building root file system:"
newfs /dev/dsk/${PRIM_DISK}s0 </dev/null
mount /dev/dsk/${PRIM_DISK}s0 /a
cd /a
zcat $CROOT/images/$SYS_IMAGE/root.dump.Z | \
    ufsrestore -rf -
rm -f restoresymtable
echo ""
```

Once the VTOC has been written, the clone script needs to start building file systems with newfs and then doing the restores (you can't do a restore into a raw disk partition, so you need to newfs first). newfs will prompt for confirmation when its input is a tty, so we redirect its input to come from /dev/null. Technically we should run fsck on the file system between the newfs and the mount, but frankly it's never been a problem for your author.

The clone script restores the root file system first so that it can get at the /etc/vfstab file and find out about the other file systems that need to be configured for the "standard" system image. Note that we are maintaining the Jumpstart convention of building the file systems on the local disk by rooting them on /a in the Jumpstart environment. The restoresymtable file is an artifact of the ufsrestore process and can be safely deleted.

### Reading the vfstab File

```bash
set -- `awk '!/^#/ && $4 == "ufs" && $3 != "/" { printf("%s %s\n", $1, $3) }' \
    /a/etc/vfstab`

while [ $# -ge 2 ]; do
    DEV=$1
    FS=$2
    # ... build and restore $FS here (see the next slide) ...
    shift 2
done
```

Next we need to find all of the other UFS file systems which need to be created on the local client. The `awk` script looks at the `/a/etc/vfstab` file we just restored and pulls out all non-comment lines (`!/^#/`) which refer to UFS file systems (the fourth column of the `vfstab` file is equal to "ufs") and which are not the root file system (the third column not equal to "/") that was already restored. The `awk` script prints out the first (disk device) and third (mount point) columns of any matching lines, and "set --" makes the output of the `awk` script the current argument list for the script (which means we can manipulate the output fields as $1, $2, etc.
and with the `shift` operator). We then fire off a `while` loop which will pull off pairs of arguments from our new argument list and operate on them. The `while` loop continues until all arguments are exhausted. Isn't shell scripting fun?

### Building Each File System

```bash
echo "Creating $FS on device $DEV"
newfs $DEV </dev/null
mount $DEV /a$FS
FROOT=`echo $FS | sed 's/^\///' | sed 's/\//-/g'`
if [ -f $CROOT/images/$SYS_IMAGE/$FROOT.dump.Z ]
then
    cd /a$FS
    zcat $CROOT/images/$SYS_IMAGE/$FROOT.dump.Z | ufsrestore -rf -
    rm -f restoresymtable
fi
echo ""
```

What we do inside of the while loop is essentially the same steps we used to restore the root file system earlier: run `newfs`, mount the file system under `/a`, and then use `ufsrestore` to pull back the "gold" image of the file system. Note that not all file systems will have dump files associated with them (you might choose to populate non-system directories like `/usr/local` or `/home` through some other mechanism).

### Clean Up Hosts Files

Files where the old host name appears:

- /etc/nodename
- /etc/hostname.*
- /etc/inet/hosts
- /etc/net/*/hosts

See notes for other potentially "interesting" files to change...

A system's hostname appears in all of the files listed above (that's six total files, because there are three hosts files in directories under /etc/net). The system's IP address appears in /etc/inet/hosts. The clone script needs to tweak all of these files so that the new machine boots up with a different identity from the "gold" image. As part of the post-install process, you may also want to think about modifying the /etc/defaultrouter, /etc/resolv.conf, /etc/inet/ntp.conf, /etc/ssh_host*key, and other similar files.
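Before moving on, the vfstab-driven loop from the last two slides can be dry-run against a sample file. Everything here is hypothetical: a fake vfstab under /tmp, with an echo standing in for the real newfs/mount/ufsrestore work. The third column of the output shows the dump file basename ($FROOT) each file system maps to:

```shell
# A sample vfstab (made-up device names) to dry-run the restore loop
# without touching any real disks.
cat > /tmp/vfstab.demo <<'EOF'
#device            device             mount  FS   fsck mount   mount
#to mount          to fsck            point  type pass at-boot options
/dev/dsk/c0t3d0s3  -                  -      swap -    no      -
/dev/dsk/c0t3d0s0  /dev/rdsk/c0t3d0s0 /      ufs  1    no      -
/dev/dsk/c0t3d0s4  /dev/rdsk/c0t3d0s4 /usr   ufs  1    no      -
/dev/dsk/c0t3d0s5  /dev/rdsk/c0t3d0s5 /local ufs  2    yes     -
EOF

set -- `awk '!/^#/ && $4 == "ufs" && $3 != "/" { printf("%s %s\n", $1, $3) }' \
    /tmp/vfstab.demo`

> /tmp/vfstab.result
while [ $# -ge 2 ]; do
    DEV=$1; FS=$2; shift 2
    FROOT=`echo $FS | sed 's/^\///' | sed 's/\//-/g'`
    # in the real script this is where newfs/mount/ufsrestore happen
    echo "$DEV $FS $FROOT" >> /tmp/vfstab.result
done
cat /tmp/vfstab.result
```

The swap slice and the root file system drop out, leaving /usr and /local, which would map to the usr.dump.Z and local.dump.Z images respectively.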
### Partial Code Listing

```bash
set -- `netstat -in | awk \
    '/^[a-z]*e0/ { printf("%s %s\n", $1, $4) }'`
PRIM_INT=$1
IPADDR=$2

echo "127.0.0.1 localhost" > /a/etc/inet/hosts
echo "$IPADDR $HOSTNAME loghost" >> /a/etc/inet/hosts

rm -f /a/etc/hostname.*
echo $HOSTNAME > /a/etc/hostname.$PRIM_INT
```

It's worthwhile to look at how the `clone` script goes about deducing and setting the new system's network parameters. The output of `netstat -in` looks like this:

```
# netstat -in
Name  Mtu   Net/Dest    Address    ...
lo0   8232  127.0.0.0   127.0.0.1  ...
hme0  1500  10.66.0.0   10.66.2.6  ...
```

The `awk` script matches any lines where the interface name ends in "e0" and extracts the interface name (column 1) and the IP address (column 4). If there were more than one ethernet interface on the system, we might have a problem, but the Jumpstart generally only activates the system's primary network interface (the interface on the primary CPU board on the system). If the system *does* have multiple interfaces, you'll have to configure the extra devices as part of the post-install process (or manually when the system reboots).

### Wrap Up

Time to ask any final questions and review the URLs where you can find additional information.

### Those URLs Again...

- **Solaris Advanced Installation Guide**
- **Hal's jumpstart info page:** [www.deer-run.com/~hal/jumpstart/](http://www.deer-run.com/~hal/jumpstart/)

That URL for the *Advanced Installation Guide* again is:

```
http://docs.sun.com:80/ab2/coll.214.7/
SPARCINSTALL/@Ab2PageView/6302
```

(that's a single long line as far as your browser is concerned).
Virtual-Memory Assisted Buffer Management

Preprint accepted for publication at SIGMOD 2023

Viktor Leis, Technische Universität München, leis@in.tum.de
Adnan Alhomssi, Friedrich-Alexander-Universität Erlangen-Nürnberg, adnan.alhomssi@fau.de
Tobias Ziegler, Technische Universität Darmstadt, tobias.ziegler@cs.tu-darmstadt.de
Yannick Loeck, Technische Universität Hamburg, yannick.loeck@tuhh.de
Christian Dietrich, Technische Universität Hamburg, christian.dietrich@tuhh.de

ABSTRACT

Most database management systems cache pages from storage in a main memory buffer pool. To do this, they either rely on a hash table that translates page identifiers into pointers, or on pointer swizzling which avoids this translation. In this work, we propose vmcache, a buffer manager design that instead uses hardware-supported virtual memory to translate page identifiers to virtual memory addresses. In contrast to existing mmap-based approaches, the DBMS retains control over page faulting and eviction. Our design is portable across modern operating systems, supports arbitrary graph data, enables variable-sized pages, and is easy to implement. One downside of relying on virtual memory is that with fast storage devices the existing operating system primitives for manipulating the page table can become a performance bottleneck. As a second contribution, we therefore propose exmap, which implements scalable page table manipulation on Linux. Together, vmcache and exmap provide flexible, efficient, and scalable buffer management on multi-core CPUs and fast storage devices.

CCS CONCEPTS
• Information systems → Data management systems; Record and buffer management.
KEYWORDS
Database Management Systems; Operating Systems; Caching; Buffer Management

© 2023 Association for Computing Machinery.

1 INTRODUCTION

DBMS vs. OS. Database management systems (DBMS) and operating systems (OS) have always had an uneasy relationship. OSs provide process isolation by virtualizing hardware access, whereas DBMSs want full control over hardware for optimal efficiency. At the same time, OSs offer services (e.g., caching pages from storage) that are almost exactly what database systems require – but for performance and semantic reasons, DBMSs often re-implement this functionality. The mismatch between the services offered by operating systems and the requirements of database systems was raised four decades ago [40], and the situation has not improved much since then.

OS-controlled caching. The big advantage the OS has over a DBMS is that it runs in kernel mode and therefore has access to privileged instructions. In particular, the OS has direct control over the virtual memory page table, and can therefore do things user space processes cannot.
For example, using virtual memory and the memory management unit (MMU) of the processor, the OS implements transparent page caching and exposes this by mapping storage into virtual memory through the mmap system call. With mmap, in-memory operations (cache hits) are fast, thanks to the Translation Lookaside Buffer (TLB). Nevertheless, as Crotty et al. [13] recently discussed, mmap is generally not a good fit for database systems. Two major problems of mmap are that (1) the DBMS loses control over page faulting and eviction, and that (2) the virtual memory implementation in Linux is too slow for modern NVMe SSDs [13]. The properties of mmap and alternative buffer manager designs are summarized in Table 1. DBMS-controlled caching. In order to have full control, most DBMSs therefore avoid file-backed mmap, and implement explicit buffer management in user space. Traditionally, this has been done using a hash table that contains all pages that are currently in cache [15]. Recent, more efficient buffer manager designs rely on pointer swizzling [16, 23, 33]. Both approaches have downsides: the former has non-trivial hash table translation overhead; and the latter is more difficult to implement and does not support cyclical page references (e.g., graph data). Rather than compromising on either the performance or the functionality benefits of translation, this work proposes hardware-supported virtual memory as a fundamental building block of buffer management. Contribution 1: vmcache. 
The first contribution of this paper is vmcache, a novel buffer pool design that relies on virtual memory, but retains control over faulting and eviction within the DBMS, unlike solutions based on file-backed mmap.

Table 1: Conceptual comparison of buffer manager designs

<table>
<thead>
<tr> <th></th> <th>mmap</th> <th>hash table</th> <th>swizzling</th> <th>Umbra</th> <th>vmcache</th> <th>vmcache + exmap</th> </tr>
</thead>
<tbody>
<tr> <td>transl.</td> <td>page tbl.</td> <td>hash tbl.</td> <td>invasive</td> <td>invasive</td> <td>page tbl.</td> <td>page tbl.</td> </tr>
<tr> <td>control</td> <td>OS</td> <td>DBMS</td> <td>DBMS</td> <td>DBMS</td> <td>DBMS</td> <td>DBMS</td> </tr>
<tr> <td>var. size</td> <td>easy</td> <td>hard</td> <td>hard</td> <td>med. (*)</td> <td>easy</td> <td>easy</td> </tr>
<tr> <td>graphs</td> <td>yes</td> <td>yes</td> <td>no</td> <td>no</td> <td>yes</td> <td>yes</td> </tr>
<tr> <td>impl.</td> <td>easy (**)</td> <td>easy</td> <td>hard</td> <td>hard</td> <td>easy</td> <td>easy</td> </tr>
<tr> <td>in-mem.</td> <td>fast</td> <td>slow</td> <td>fast</td> <td>fast</td> <td>fast</td> <td>fast</td> </tr>
<tr> <td>out-mem.</td> <td>slow</td> <td>fast</td> <td>fast</td> <td>fast</td> <td>med.</td> <td>fast</td> </tr>
</tbody>
</table>

(*) only powers of 2 [33] (**) read-only easy, transactions hard [13]

The key idea is to map the storage device into anonymous (rather than file-backed) virtual memory and use the MADV_DONTNEED hint to explicitly control eviction. This enables fast in-memory page accesses through TLB-supported translations without handing control to the OS. Page-table-based translation also allows vmcache to support arbitrary graph data and variable-sized pages.

Contribution 2: exmap. While vmcache has excellent in-memory performance, every page fault and eviction involves manipulating the page table. Unfortunately, existing OS page table manipulation primitives have scalability problems that become visible with high-performance NVMe SSDs [13].
Therefore, as a second contribution, we propose exmap, an OS extension for efficiently manipulating virtual memory mappings. exmap is implemented as a Linux kernel module and is an example of DBMS/OS co-design. By providing new OS-level abstractions, we simplify and accelerate data-processing systems. Overall, as Table 1 shows, combining exmap with vmcache results in a design that is not only fast (in-memory and out-of-memory) but also offers important functionality.

2 BACKGROUND: DATABASE PAGE CACHING

Buffer management. Most DBMSs cache fixed-size pages (usually 4-64 KB) from secondary storage in a main memory pool. The basic problem of such a cache is to efficiently translate a page identifier (PID), which uniquely determines the physical location of each page on secondary storage, into a pointer to the cached data content. In the following, we describe known ways of doing that, including the six designs shown in Table 1.

Hash table-based translation. Figure 1a illustrates the traditional way [15] of implementing a buffer pool: a hash table indexes all cached pages by their PID. A page is addressed using its PID, which always involves a hash-table lookup. On a miss, the page is read from secondary storage and added to the hash table. This approach is simple and flexible. The hash table is the single source of truth of the caching state, and pages can reference each other arbitrarily through PIDs. The downside is suboptimal in-memory performance, as even cache hits have to pay the hash table lookup cost. Also note that there are two levels of translation: from PID to virtual memory pointer (at the DBMS level), and from virtual memory pointer to physical memory pointer (at the OS/MMU level).

Main-memory DBMS. One way to avoid the overhead of traditional buffer managers is to forego caching altogether and keep all data in main memory. While pure in-memory database systems can be very fast, in the past decade DRAM prices have almost stopped decreasing [18].
Storage in the form of NVMe flash SSDs, on the other hand, has become cheap (20 – 50× cheaper per byte than DRAM [18]) and fast (>1 million random 4 KB reads per second per SSD [4]). This makes pure in-memory systems economically unattractive [29], and implies that modern storage engines should combine DRAM and SSD. The challenge is supporting very large data sets on NVMe SSDs with their high I/O throughput and making cache hits almost as fast as in main-memory systems.

Pointer swizzling (invasive translation). An efficient technique for implementing buffer managers is pointer swizzling. The technique was originally proposed for object-oriented DBMSs [20], but has recently been applied to several high-performance storage engines [16, 23, 33]. As Figure 1b illustrates, the idea is to replace the PID of a cached page with its virtual memory pointer within the data structure. Page hits can therefore directly dereference a pointer instead of having to translate it through a hash table first. One way to think about this is that pointer swizzling gets rid of explicit hash table-based translation by inversely modifying the data structure itself. Pointer swizzling offers very good in-memory performance. However, it requires adaptations for every buffer-managed data structure, and its internal synchronization is quite intricate. E.g., to unswizzle a page, one needs to find and lock its parent, and storing a parent pointer on each node presents synchronization challenges during node splits. Another downside is that pointer swizzling-based systems generally do not support having more than one incoming reference to any particular page. In other words, only tree data structures are directly supported. Graph data, next pointers in B-tree leaf pages, and multiple incoming tuple references (e.g., from secondary indexes) require inelegant and sometimes inefficient workarounds.

Hardware-supported page translation.
Traditional buffer managers and pointer swizzling present an unsatisfactory and seemingly inescapable tradeoff: either one pays the performance cost of the hash table indirection, or one loses the ability to support graph-like data. Instead of getting rid of the translation (as pointer swizzling does), another way of achieving efficiency is to make PID-to-pointer translation efficient through hardware support. All modern operating systems use virtual memory and, together with hardware support from the CPU, transparently translate virtual to physical addresses. Page table entries are cached within the CPU, in particular the TLB, which makes virtual memory translation fast. Figure 1c shows how hardware-supported page translation can be used for caching pages from secondary storage.

OS-driven caching with file-backed mmap. Unix offers the mmap system call to access storage via virtual memory. After mapping a file or device into virtual memory, a memory access will trigger a page fault. The OS will then install that page in the page table, making succeeding page accesses as fast as ordinary memory accesses. Some systems therefore eschew implementing a buffer pool and instead rely on the OS page cache by mapping the database file/device. While this approach makes cache hits very fast, it has major problems that were recently analyzed by Crotty et al. [13]: (1) Ensuring transactional safety is difficult and potentially inefficient because the DBMS loses control over eviction. (2) There is no interface for asynchronous I/O, and I/O stalls are unpredictable. (3) I/O error handling is cumbersome. (4) OS-implemented page faulting and eviction is too slow to fully exploit modern NVMe storage devices.

The lack of control over eviction for file-backed mmap approaches is a fundamental problem. Notably, it prevents the implementation of ARIES-style transactions.
ARIES uses in-place writes and prevents the eviction of a dirty page before its corresponding log entry is flushed – impossible with existing OS interfaces [13]. Without explicit control over eviction, it is also impossible to implement DBMS-optimized page replacement algorithms. Thus, one is at the whim of whatever algorithm the OS currently in use implements, which is unlikely to be optimized for DBMS workloads. **DBMS-driven, virtual-memory assisted caching.** While OS-managed caching using mmap may not be a good solution for most DBMSs, the OS has one big advantage: instead of having to use an explicit hash table for page translation, it can rely on hardware support (the TLB) for page translation. This raises the following question: Is it possible to exploit the virtual memory subsystem without losing control over eviction and page fault handling? One contribution of this paper is to answer this question affirmatively. In Section 3, we describe how widely-supported OS features (anonymous memory and the MADV_DONTNEED hint) can be exploited to implement hardware-supported page translation while retaining full control over faulting and eviction within the DBMS. **Variable-sized pages.** Besides making page translation fast, using a page table also makes implementing multiple page sizes much easier. Having dynamic page sizes is obviously very useful, e.g., for storing objects that are larger than one page [33]. Nevertheless, many buffer managers only support one particular page size (e.g., 4 KB) because multiple sizes lead to complex allocation and fragmentation issues. In these systems, larger objects need to be implemented by splitting them across pages, which complicates and slows down the code accessing such objects. With control over the page table, on the other hand, a larger (e.g., 12 KB) page can be created by mapping multiple (e.g., 3) non-contiguous physical pages to a contiguous virtual memory range. 
This is easy to implement within the OS and no fragmentation occurs in main memory. One system that allows multiple (albeit only power-of-two) page sizes is Umbra [33]. It implements this by allocating multiple buffer pool-sized virtual memory areas – one for each page size. To allocate a page of a particular size, one can simply fault the memory from that class. To free a page, the buffer manager uses the MADV_DONTNEED OS hint. This approach gets rid of fragmentation from different page sizes, but Umbra’s page translation is still based on pointer swizzling rather than the page table. Umbra therefore inherits the disadvantages of pointer swizzling (difficult implementation, no graph data), while potentially encountering OS scalability issues. **Fast virtual memory manipulation.** While OS-supported approaches offer very fast access to cached pages and enable variable-sized pages, they unfortunately may suffer from performance problems. One problem is that each CPU core has its own TLB, which can get out of sync with the page table\(^1\). When the page table changes, the OS therefore generally has to interrupt all CPU cores and force them to invalidate their TLB (“TLB shootdown”). Another issue is that intra-kernel data structures can become the scalability bottleneck on systems with many cores. Crotty et al. [13] observed that because of these issues mmap can be slow in out-of-memory workloads. For random reads from one SSD, they measured that it achieves less than half the achievable I/O throughput. With sequential scans from ten SSDs, the gap between mmap and explicit asynchronous I/O is roughly 20x. Any virtual memory-based approach (including our basic vmcache design) will run into these kernel issues. Section 4 therefore describes a novel, specialized virtual memory subsystem for Linux called exmap, which solves these performance problems. 
**Persistent memory.** In this work, we focus on block storage rather than byte-addressable persistent memory, for which multiple specialized caching designs have been proposed [8, 21, 28, 41, 43].

## 3 VMCache: Virtual-Memory Assisted Buffer Management

The POSIX system call mmap usually maps a file or storage device into virtual memory, as is illustrated in Figure 1c. The advantage of file-backed mmap is that, due to hardware support for page translation, accessing cached pages becomes as fast as ordinary memory accesses. If the page translation is cached in the TLB and the data happens to be in the L1 cache, an access can take as little as 1 ns. The big downside is that the DBMS loses control over page faulting and eviction. If the page is not cached but resides on storage, dereferencing a pointer may suddenly take 10 ms because the OS will cause a page fault that is transparent to the DBMS.\(^1\) Thus, from the point of view of the DBMS, eviction and page faulting are totally unpredictable and can happen at any point in time.

\(^1\)The page table, which is an in-memory data structure, is itself coherent across CPU cores. However, a CPU core accessing memory caches virtual-to-physical translations in a per-core hardware cache called the TLB. If the page table is changed, the hardware does not automatically update or invalidate existing TLB entries.

In this section, we describe vmcache, a buffer manager design that – like file-backed mmap – uses virtual memory to translate page identifiers into pointers (see Figure 1c). However, unlike mmap, in vmcache the DBMS retains control over page faults and eviction.

3.1 Page Table Manipulation

Setting up virtual memory. Like the file-backed mmap approach, vmcache allocates a virtual memory area with (at least) the same size as the backing storage. However, unlike with file-backed mmap, this allocation is not directly backed by storage.
Such an “unbacked” allocation is called anonymous and, confusingly, is done through mmap as well, but using the MAP_ANONYMOUS flag:

```
int flags = MAP_ANONYMOUS|MAP_PRIVATE|MAP_NORESERVE;
int prot = PROT_READ | PROT_WRITE;
char* virtMem = mmap(0, vmSize, prot, flags, -1, 0);
```

Note that no file descriptor has been specified here (the fifth argument is -1). Storage is handled explicitly and could be a file (e.g., if multiple applications share one file system) or multiple block devices (in a RAID setup). Moreover, the allocation will initially not be backed by physical memory, which is important because storage capacity is usually much larger than main memory.

Adding pages to the cache. To add a page to the cache, the buffer manager explicitly reads it from storage to the corresponding position in virtual memory. For example, we can use the pread system call to explicitly read P3 as follows:

```
uint64_t offset = 3 * pageSize;
pread(fd, virtMem + offset, pageSize, offset);
```

Once pread completes, a physical memory page will be installed in the page table and the data becomes visible to the DBMS process. In contrast to mmap, which handles page misses transparently without involving the DBMS, with the vmcache approach the buffer manager controls I/O. For example, we can use either the synchronous pread system call or asynchronous I/O interfaces such as libaio or io_uring.

Removing pages from the cache. After mapping more and more pages, the buffer pool will eventually run out of physical memory, causing failing allocations or swapping.
Before that happens, the DBMS needs to start evicting pages, which on Linux can be done as follows:

```
madvise(virtMem + pid*pageSize, pageSize, MADV_DONTNEED);
```

```
fix(uint64_t pid):
  while (true)                       // retry until success
    PageState s = state[pid]
    // page miss:
    if (s.isEvicted())
      if (state[pid].CAS(s, Locked))
        pread(fd, virtMem+(pid*pageSize), pageSize, pid*pageSize)
        return virtMem + (pid*pageSize)
    // page hit:
    else if (s.isMarked() || s.isUnlocked())
      if (state[pid].CAS(s, Locked))
        return virtMem + (pid*pageSize)
unfix(uint64_t pid):
  state[pid] = Unlocked              // also increments version
```

Listing 1: Pseudo code for exclusive page access

3.2 Page States and Synchronization Basics

In terms of the buffer manager implementation, the most difficult aspect is synchronization, e.g., managing races to the same page. Buffer managers must not only use scalable synchronization internally, they should also provide efficient and scalable synchronization primitives to the upper DBMS layers. After all, most database data structures (e.g., relations, indexes) are stored on top of pageable pages.

Buffer pool state. In a traditional buffer manager (see Figure 1a), the translation hash table is used as a single source of truth for the caching state. Because all accesses go through the hash table, synchronization is fairly straightforward (but usually not efficient). Our approach, in contrast, needs an additional data structure for synchronization because not all page accesses traverse the page table\(^4\) and because the page table cannot be directly manipulated from user space. Therefore, we allocate a contiguous array with one page-state entry for each page on storage, at corresponding positions, as the following figure illustrates:

```
state:     Evicted | Locked | Evicted | Unlocked | Evicted
page:        P0    |   P1   |   P2    |    P3    |   P4
content:      -    |  foo   |    -    |   bar    |    -
```

Page states. After startup, all pages are in the Evicted state.
Page access operations first check their state entry and proceed according to the following state diagram:

```
            fix             unfix
  Evicted -------> Locked <-------> Unlocked
     ^               ^      fix         |
     |   evict    fix|                  | mark
     +------------ Marked <------------+
```

\(^{2}\)On Windows these primitives are available as VirtualAlloc(..., MEM_RESERVE, ...) and VirtualFree(..., MEM_RELEASE).
\(^{3}\)Strictly speaking, the OS could decide to evict vmcache pages – but this does not affect the correctness of our design. OS-triggered eviction can be prevented by disabling swapping or by mlocking the virtual memory range.
\(^{4}\)If a page translation is cached in the TLB of a particular thread, the thread does not have to consult the page table.

```
optimisticRead(uint64_t pid, Function fn):
  while (true)                   // retry until success
    PageState s = state[pid]     // incl. version
    if (s.isUnlocked())          // optimistic read:
      fn(virtMem + (pid*pageSize))
      if (state[pid] == s)       // validate version
        return                   // success
    else if (s.isMarked())       // clear mark:
      state[pid].CAS(s, Unlocked)
    else if (s.isEvicted())
      fix(pid); unfix(pid)       // handle page miss
```

Listing 2: Pseudo code for optimistic read

Listing 1 shows pseudo code for the fix and unfix operations, which provide exclusive page access. Suppose we have a page that is currently in Evicted state (line 5 in the code). If a thread wants to access that page, it calls fix, which will transition it to the Locked state using a compare-and-swap operation (line 6). The thread is then responsible for reading the page from storage and implicitly (via pread) installing it in the page table (line 7). After that, it can access the page itself and finally unfix it, which causes a transition to the Unlocked state (line 13). If another thread concurrently wants to fix the same page, it waits until it is unlocked. This serializes page misses and prevents the same page from being read multiple times. The fourth state, Marked, helps to implement a clock replacement strategy – though arbitrary other algorithms could be implemented as well.
Cached pages are selected for eviction by setting their state to Marked. If the page is accessed, it transitions back to the Locked state, which clears the mark (line 10). Otherwise, the page can be evicted and eventually transitions to the Evicted state. 3.3 Advanced Synchronization So far, we discussed how to lock pages exclusively. To enable scalable and efficient read operations, vmcache also provides shared locks (multiple concurrent readers on the same page) and optimistic (lock-free) reads. Shared locks. To implement shared locks for read-only operations, we count the number of concurrent readers within the page state. If the page is not locked exclusively, read-only operations atomically increment/decrement that counter [9] when fixing/unfixing the page. Exclusive accesses have to wait until the counter is 0 before acquiring the lock. Optimistic reads. Both exclusive and shared locks write to shared memory when acquiring or releasing the lock, which invalidates cache entries in other CPU cores. For tree data structures such as B-trees this results in suboptimal scalability, because the page states of inner nodes are constantly invalidated. An elegant alternative to locks are optimistic, lock-free page reads that validate whether the read was correct. To do that, locks contain an update version that is incremented whenever an exclusively locked page is unlocked [9, 25, 30]. We store this version counter together with the page state within the same 64-bit value, ensuring that both are always changed atomically. As the pseudo code in Listing 2 shows, an optimistic reader retrieves the state and if it equals Unlocked (line 4 in the code), it reads from the page (line 5). After that we retrieve the page state again and make sure that the page is still not locked and that the version has not changed (line 6). If this check fails, the operation is restarted. Note that the version counter is incremented not just when a page changes but also when it is evicted. 
This is crucial for correctness and, for example, ensures that an optimistic read of a marked page that is evicted before validation will fail. To prevent starvation due to repeated restarts, it is also possible to fall back to pessimistic lock-based operations (not shown in the code). Finally, let us note that optimistic reads can be interleaved across multiple pages, enabling lock coupling-like synchronization of complex data structures like B-trees [24]. This approach has been shown to be highly scalable and outperform lock-free data structures [42].

64-bit state entry. Overall, we use 64 bits for the page state, of which 8 bits encode the Unlocked (0), LockedShared (1-252), Locked (253), Marked (254), and Evicted (255) states. This leaves us with 56 bits for the version counter – which are enough to never overflow in practice. 64 bits are also a convenient size that allows atomic operations such as compare-and-swap (CAS).

Memory reclamation and optimistic reads. In general, lock-free data structures require special care when freeing memory [25, 27, 30]. Techniques such as epoch-based memory reclamation [30] or hazard pointers [31] have been proposed to address this problem. All these techniques incur overhead and may cause additional memory consumption due to unnecessarily long reclamation delays. Interestingly, vmcache – despite supporting optimistic reads – can sidestep these problems completely. Indeed, vmcache does not prevent the eviction/reclamation of a page that is currently read optimistically. However, this is not a problem because after the page is removed from the page table using the MADV_DONTNEED hint, it is replaced by the zero page. In that situation the optimistic read will proceed reading from the zero page without crashing, and will detect that eviction occurred during the version check. (The check fails because eviction first locks and then unlocks the page, which increments the version.)
Therefore, vmcache does not need any additional memory reclamation scheme.

Parking lot. To prevent exclusive and shared locks from wasting CPU cycles and to ensure fairness under lock contention, one can use the Parking Lot [9, 36] technique. The key idea is that if a thread fails to acquire the lock (potentially after trying several times), it can “park” itself, which will block the thread until it is woken up by the thread holding the lock. Parking itself is implemented using a fixed-size hash table storing standard OS-supported condition variables [9]. Within the page state, we only need one additional bit that indicates whether there are threads that are currently waiting for that page lock to be released. The big advantage of parking lots is very low space overhead per page, which is only 1 bit instead of 64 bytes for pthread (rw)locks [9].

3.4 Replacement Strategy

Clock implementation. In principle, arbitrary replacement strategies can be implemented on top of vmcache. As mentioned earlier, our current implementation uses the clock algorithm. Before the buffer pool runs out of memory, we change the state of Unlocked pages to Marked. All page accesses, including optimistic reads, clear the Marked state, ensuring that hot pages will not be evicted. To implement clock, one needs to be able to iterate over all pages in the buffer pool. One approach to do that would be to iterate over the state array while ignoring evicted pages. However, this would be quite expensive if the state array is very sparse (i.e., storage is much larger than main memory). We implement a more robust approach that stores all page identifiers that are currently cached in a hash table. The size of the hash table is equal to the number of pages in DRAM (rather than storage) and our page replacement algorithm iterates over this much smaller data structure. We use a fixed-size open addressing hash table, which makes iteration cache efficient.
Note that, in contrast to traditional buffer managers, this hash table is not accessed during cache hits, but only during page faults and eviction.

Batch eviction. For efficiency reasons, our implementation evicts batches of 64 pages. To minimize exclusive locking and exploit efficient bulk I/O, eviction is done in five steps:

1. Get a batch of marked candidates from the hash table; lock dirty pages in shared mode
2. Write dirty pages (using libaio)
3. Try to lock (upgrade) clean (dirty) page candidates
4. Remove locked pages from the page table using madvise
5. Remove locked pages from the eviction hash table, unlock them

After step 3, all pages must be locked exclusively to avoid race conditions during eviction. For dirty pages, we already obtained shared locks in step 1, which is why step 3 performs a lock upgrade. Clean pages have not been locked, so step 3 tries to acquire the exclusive lock directly. Both operations can fail because another thread accessed the page, in which case eviction skips it (i.e., the page stays in the pool). With the basic vmcache design, step 4 is simply calling madvise once for every page. With exmap, we will be able to exploit bulk removal of pages from the page table.

### 3.5 Page Sizes

**Default page size.** Most processors use 4 KB virtual memory pages by default, and conveniently this granularity also works well with flash SSDs. It therefore makes sense to set the default buffer pool page size to 4 KB as well. x86 (ARM) also supports 2 MB (1 MB) pages, which might be a viable alternative in systems that primarily read larger blocks. With vmcache, OLTP systems should generally use 4 KB pages and for OLAP systems both 4 KB and 2 MB pages are suitable.

**Supporting larger pages.** vmcache also makes it easy to support any buffer pool page size that is a multiple of 4 KB. Figure 2 shows an example where page P3 spans two physical pages.
For data structures implemented on top of the buffer manager this fact is completely transparent, i.e., the memory appears to be contiguous. Accesses to large pages only use the page state of the head page (P3, not P4, in the figure). The advantage of relying on virtual memory to implement multiple page sizes is that it avoids main memory fragmentation. Note that fragmentation is not simply moved from user to kernel space: the page table indirection allows the OS to always deal with 4 KB pages rather than having to maintain different allocation classes. As a consequence, as Figure 2 illustrates, a contiguous virtual memory range will in general not be physically contiguous.

**Advantages of large pages.** Although most DBMSs rely on fixed-size pages, supporting different page sizes has many advantages. One case where variable-size pages simplify and accelerate the DBMS is string processing. With variable-size pages one can, for example, simply call external string processing libraries with a pointer into the buffer pool. Without this feature, any string operation (comparison, LIKE, regexp search, etc.) needs to explicitly deal with strings chunked across several pages. Because few existing libraries support chunking, one would have to copy larger strings into a contiguous memory buffer before being able to use them. Another case is compressed columnar storage, where each column chunk has the same number of tuples but a different size. In both cases it is indeed possible to split the data across multiple fixed-size pages (and many systems have to do it due to a lack of variable-size support), but it leads to complex code and/or slower performance. Finally, let us mention that, in contrast to systems like Umbra [33], vmcache supports arbitrary page sizes as long as they are a multiple of 4 KB, which reduces memory waste for larger objects. Overall, we argue that this feature can substantially simplify the implementation of the DBMS and lead to better performance.
### 3.6 Discussion

**State access.** As mentioned earlier, every page access must retrieve the page state — often causing a cache miss — before it can read the page data itself. One may therefore wonder whether this is just as inefficient as traditional hash table-based buffer managers. However, the two approaches differ substantially in their memory access patterns. In the hash table approach, the page data pointer is retrieved from the hash table itself, i.e., there is a data dependency between the two pointers, and one usually pays the price of two cache miss latencies. In our approach, in contrast, both the page state pointer and the data content pointer are known upfront. As a consequence, the out-of-order execution of modern CPUs will perform both accesses in parallel, hiding the additional overhead of the state retrieval.

**Memory consumption.** vmcache comes with some DRAM overhead in the form of page tables and the page state array: for configuring the virtual-memory mapping, vmcache requires 8.016 bytes for each 4 KB of storage to set up a 5-level page table. Besides this cost, which is inherent to any mmap-like buffer manager, vmcache requires an additional 8 bytes for the page state: 8 bits for the exclusive/shared lock and 56 bits for the optimistic-read version counter. So in total, vmcache requires around 16 bytes of DRAM per 4 KB on storage. Thus, for example, for a 1 TB flash SSD, one needs 4 GB of DRAM for the internal buffer manager state, which is a reasonable $\frac{1}{256}$th of SSD capacity. Economically speaking, as flash is approximately 50 times cheaper per byte than DRAM, the additional memory costs $\frac{50}{256} \approx 20\%$ of the flash price.
While this is low enough in most use cases, there are ways to reduce this cost: (1) compress the 64-bit page state, at the expense of optimistic reads (-56 bits) and shared locking (-6 bits), down to two bits per storage page (evicted, exclusively locked), leaving us with a total of 2.07 GB for a 1 TB flash SSD (+10.11% cost); or (2) place the page state within the buffered page and keep the corresponding 8 bytes on the storage page unused, leaving us with the unavoidable 2 GB of DRAM overhead. Thus, the memory overhead is reasonable in terms of overall system cost and could be reduced even further.

Address space. Existing 64-bit CPUs generally support at least 48-bit virtual memory addresses. On Linux, half of that is reserved for the kernel, and user-space virtual memory allocations are therefore limited to $2^{47} = 128$ TB. Starting with Ice Lake, Intel processors support 57-bit virtual memory addresses, enabling a user-space address space of $2^{56} = 64$ PB. Thus, the address space is large enough for our approach, and will be so for the foreseeable future.

4 EXMAP: SCALABLE AND EFFICIENT VIRTUAL MEMORY MANIPULATION

vmcache exploits hardware-supported virtual memory with explicit control over eviction while supporting flexible locking modes, variable-sized pages, and arbitrary reference patterns (i.e., graphs). This is achieved by relying on two widely-available OS primitives: anonymous memory mappings and an explicit memory-release system call. Although vmcache is a practical and useful design, with some workloads it can run into OS kernel performance problems. In this section, we describe a Linux kernel extension called exmap that solves this weakness. We first motivate why the existing OS implementation is not always sufficient, then provide a high-level overview of the design, and finally describe implementation details.

4.1 Motivation

Why Change the OS?
With vmcache, (de)allocating 4 KB pages is as frequent as page misses and evict operations, i.e., the OS's memory subsystem becomes part of the hot path in out-of-memory workloads. Unfortunately, Linux's implementation of page allocation and deallocation does not scale. As a consequence, workloads that have a high page turn-over rate can become bottlenecked by the OS's virtual memory subsystem rather than the storage device. To quantify the situation on Linux, we allocate pages on a single anonymous mapping by triggering a page fault and evict them again with MADV_DONTNEED. As Figure 3 shows, vanilla Linux only achieves 1.51M OP/s with 128 threads. Incidentally, a single modern PCIe 4.0 SSD can achieve 1.5M random 4 KB reads per second [4]. In other words, a 128-thread CPU would be completely busy manipulating virtual memory for one SSD, not leaving any CPU cycles for actual work.

Problem 1: TLB shootdowns. To investigate this poor scalability, we used the perf profiling tool and show a flame graph [17] in Figure 4. Linux spends 79% of all CPU time in the flush_tlb_mm_range function. It implements TLB shootdowns, an explicit coherency measure that prevents outdated TLB entries, which otherwise could lead to data inconsistencies or security problems. On changing the page table, the OS sends an inter-processor interrupt (IPI) to all other (N-1) cores running application threads, which then clear their TLBs. This is fundamentally unscalable, as it requires N-1 IPIs for every evicted page.

Problem 2: Page allocation. After shootdowns, the next major performance problem in Linux is the intra-kernel page allocator (free_pages and alloc_page in the flame graph). The Linux page allocator relies on a centralized, unscalable data structure and, for security reasons, has to zero out each page after eviction. Therefore, once the larger TLB shootdown bottleneck is solved, workloads with high page turn-over rates will be bound by the page allocator.
Why a New Page Table Manipulation API? The two performance problems described above cannot be solved by low-level changes within Linux; they are fundamentally caused by the existing decades-old virtual memory API and its semantics: the TLB shootdowns are unavoidable with a synchronous page-at-a-time API, and page allocation is slowed down by the fact that physical memory pages can be shared between different user processes. Achieving efficient and scalable page table manipulation therefore requires a different virtual memory API with modified semantics.

4.2 Design Principles

exmap. exmap is a specialized Linux kernel extension that enables fast and scalable page table manipulation through a new API and an efficient kernel-level implementation. We co-designed exmap for use with vmcache, but as we discuss in Section 4.5, it could also be used to accelerate other applications. exmap comes as a Linux kernel module that the user can load into any recent Linux kernel without rebooting. Like the POSIX interface, exmap provides primitives for setting up virtual memory and for allocating and freeing pages. However, as outlined below, exmap has new semantics that eliminate the bottlenecks provoked by the POSIX interface.

Figure 5: exmap implementation overview: The VM Surface (A) is manipulated with explicit free, alloc, read, or write system calls.
Each per-thread control interface (B) owns part of the exmap-local memory pool, which exists as interface-local free lists of physical pages (C). If an interface runs out of pages (1), it steals pages from another interface (2). Pages only circulate (X) between the surface and the interfaces.

Solving the TLB shootdown problem. An effective way of reducing the cost of TLB shootdowns is to batch page evictions, thereby reducing the number of shootdowns by the batch size. To achieve this, exmap provides a batching interface that frees multiple pages with a single system call. While batching is easy to exploit for a buffer manager when evicting pages, it can be problematic to batch page allocations because these are often latency critical. To avoid TLB shootdowns on allocation, exmap therefore ensures that allocation does not require shootdowns at all. To do this, exmap always read-protects the page table entry of a free page (by setting a specific bit in the page table entry). Linux, in contrast, maps such an entry to a zero page that is write-protected but not read-protected, potentially causing invalid TLB entries that have to be explicitly invalidated on allocation. This subtle change eliminates the need for shootdowns on allocation completely.

Solving the page allocation problem. Another important difference between Linux and exmap is the page allocation mechanism. In Linux, when a page is freed, it is returned to a system-wide pool (and thereby potentially to other processes). This has two drawbacks: (1) page allocation does not scale well, and (2) pages are repeatedly zeroed out for security reasons. exmap, in contrast, pre-allocates physical memory at creation time and keeps it in scalable thread-local memory pools, thereby avoiding both bottlenecks.

4.3 Overview and Usage

Implementation overview.
Figure 5 illustrates the three major components of an exmap object: (A) its surface within the virtual memory (VM); (B) a number of control interfaces to interact with the object; and (C) a private memory pool of physical DRAM pages, which exists as interface-local free lists spread over all interfaces.

Creation. On creation (lines 4-8 in Listing 3), the user configures these components: She specifies the number of interfaces that the kernel should allocate (line 5). Usually, each thread should use its own interface (e.g., thread id = interface id) to maximize scalability. The user also specifies the number of memory pool pages (line 6), which exmap will drain from Linux's page allocator for the lifetime of the exmap object. As the third parameter, the user can specify a file descriptor as backing storage for read operations (line 7).

Operations. After creation, the process makes the exmap surface visible within its VM via mmap (line 10). While an exmap can have an arbitrary VM extent, it can be mapped exactly once in the whole system. On the mapped surface, we allow the vectorized and scattered allocation of pages (line 11 and Figure 5(X)). For this, one specifies a vector of page ranges within the mapped surface and issues an EXMAP_ALLOC command at an explicitly-addressed interface. The required physical pages are first drawn from the specified interface (Figure 5(1)), before we steal memory from other interfaces (Figure 5(2)). Once allocated, pages are never swapped out, and accesses therefore never lead to a page fault, providing deterministic access times. With the free operation (line 18), we free the page ranges and release the removed physical pages to the specified interface.

Read I/O. In contrast to file-backed mmap, we do not page in or write back data transparently; instead, the user (e.g., vmcache) explicitly invokes read and write operations on the surface.
To speed up these operations, we integrated exmap with the regular Linux I/O subsystem, whereby an exmap file descriptor becomes a proxy for the specified backing device (lines 19-22). This allows combining page allocation and read operations in a single system call: on read, exmap first populates the specified page range with memory before it uses the regular Linux VFS interface to perform the actual read. Since we derive the disk offset from the on-surface offset, we can use the offset parameter to specify the allocation interface. With this integration, exmap supports synchronous (pread) and asynchronous (libaio and io_uring) reads. Furthermore, as the on-surface offset determines the disk offset, vectorized reads (preadv, IORING_OP_READV) implicitly become scattered operations (line 22), which no other Linux system call currently allows.

Write I/O. On the write side, we deliberately decided against a write-proxy interface, which would, for example, bundle write back and page eviction. Such bundling is not necessary, as the user can already write surface pages to disk (line 24), and freeing pages with each individual write could, if not used correctly, lead to unnecessary overheads. Therefore, we decoupled write back and (batched) freeing of pages.

4.4 Implementation Details

Scalable page allocator. Usually, when the kernel unmaps a page, it returns the page to the system-wide buddy allocator, which possibly merges it into larger chunks of physical memory. On allocation, these chunks are broken down again into pages, which have to be zeroed before mapping them into user space. Therefore, with a high VM turn-over rate, memory is constantly zeroed and circles between the VM subsystem and the buddy allocator. To optimize VM operations for vmcache, we decided to use per-exmap memory pools to bypass the system allocator.
This also allows us to avoid proactive page zeroing, since pages only circulate between the surface and the memory pool within the same process, making information leakage to other processes impossible. Only during the initial exmap creation do we zero the pages in our memory pool.

Thread-local control interfaces and page stealing. Furthermore, exmap's control interfaces not only allow the application to express allocation/eviction locality, but they also reduce the contention and false sharing that come with a centralized allocator. For this, we distribute the memory pool as local lists of free 4 KB pages over the interfaces, which gives rise to the need for page stealing. After the interface-local free list is drained, we use a three-tiered page-stealing strategy: (1) steal from the interface from which we have successfully stolen the last time, (2) randomly select two interfaces and steal from the one with more free pages, and (3) iterate over all interfaces until we have gathered enough pages. To minimize the number of steal operations, we steal more pages than required for the current operation. If we remove pages from the surface, we always push them to the specified interface. Thereby, for workloads in which per-interface allocation and eviction are in balance, steal operations are rarely necessary.

Lock-free page-table manipulation. For page-table manipulations, Linux uses a fine-grained locking scheme that locks the last level of the page table tree to update the page-table entries therein. However, such entries have machine-word size on most architectures, and we can update them directly with atomic instructions. While Linux does not exploit this opportunity for portability reasons, we integrated an atomic-exchange-based hot path: if an operation manipulates only an individual page-table entry on a last-level page table, we install (or remove) the VM mapping with a single compare-and-exchange.

I/O subsystem integration.
For read operations, the Linux I/O subsystem is optimized for sequential reads into destination buffers that are already populated with physical memory. For example, without exmap, Linux does not provide a scattered read operation that takes multiple offsets; such a read request has to be split into multiple (unrelated) reads. On a lower level, Linux expects the VM to be populated and calls the page-fault handler for each missing page before issuing the actual device operation. Hence, Linux cannot fully exploit scattered request patterns but handles them as individual requests, which provokes unnecessary overheads (i.e., repeated page-table locking and allocator invocations). To avoid this, exmap provides vectorized and scattered reads through the proxy file descriptor. This allows us to (1) pre-populate the VM with memory, which avoids the page-fault handler path, and (2) cut down the system-call overhead, as we issue only a single system call per request batch.

Multiple exmaps. A process can create multiple exmap objects, which are mapped as separate non-overlapping virtual-memory areas (VMAs) into the process address space. These VMAs come with their own VM subsystem and are largely isolated from each other and from the rest of the kernel, while still ensuring consistency and privilege isolation. As already noted, each exmap can be mapped exactly once, whereby we avoid the bookkeeping overhead of general-purpose solutions.

4.5 Discussion

OS customization. exmap is a new low-level OS interface for manipulating virtual memory efficiently. Seemingly minor semantic changes such as batching and avoiding zero pages result in very high performance without sacrificing security. One analogy is that exmap is for VM what O_DIRECT is for I/O: a specialized tool for systems that want to manage and control hardware resources themselves as efficiently as possible. Two design decisions of exmap require further discussion.

Functionality.
We largely decoupled the exmap surface and its memory pool from the rest of Linux. As a consequence of this lean design, exmap is efficient but does not support copy-on-write forking or swapping. Few buffer pool implementations rely on such functionality. Indeed, it is actually a benefit that exmap's behavior is simple and predictable, as it allows buffer managers to precisely track memory consumption and ensure robust performance.

Portability. Another important aspect is generalizability to other operating systems and architectures. Since our kernel module comes with its own specialized VM subsystem, it has only a few dependencies on the rest of the Linux kernel. This makes exmap easily portable between Linux versions and suggests that the concept can be implemented for other operating systems such as Windows and FreeBSD. Except for our architecture-dependent lock-free shortcut for small page table modifications, the exmap implementation is also independent of the underlying ISA and MMU, as it reuses Linux's MMU abstractions. In other words, our Linux implementation is easily portable across CPU architectures that support Linux.

Other Applications of exmap. Although we explicitly designed exmap for caching, it has other use cases as well: (1) Due to its high VM-modification performance (see Figure 8), a heap manager could use a large exmap surface to coalesce free pages into large contiguous buffers, which is useful for DBMS query processing [14]. (2) With a page-move extension, a language run-time system could use exmap as the basis for a copying garbage collector for pools of page-aligned objects. (3) For large-scale graph processing, workers request randomly-placed data from the backing store, often with high fan-out (e.g., for breadth-first search) and high parallelism; such requests can easily be serviced by exmap.
(4) For user-space file systems, a device-backed exmap allows for a user-space-controlled buffer cache strategy.

---
⁵ For example, Linux usually maintains a reverse mapping from physical to virtual addresses that is necessary to implement features such as copy-on-write fork.

5 EVALUATION

The goal of this section is to show experimentally that vmcache is competitive with state-of-the-art swizzling-based buffer managers for in-memory workloads and that exmap enables the vmcache design to exploit modern storage devices. However, let us emphasize that we see the main benefits of vmcache as qualitative rather than quantitative, as summarized in Table 1. Specifically, despite being easy to implement, vmcache supports arbitrary (graph) data and variable-size pages.

5.1 Experimental Setup

Implementation. Our buffer manager is implemented in C++ and uses a B+tree with variable-size keys/payloads and optimistic lock coupling. We compare two variants: (1) vmcache uses the regular and unmodified OS primitives described in Section 3. (2) vmcache+exmap is based on the vmcache code, except that it uses the exmap kernel module and the interface proposed in Section 4. Both variants use 4 KB pages and perform reads through the blocking pread system call. Therefore, there is at most one outstanding read I/O operation per thread. Dirty pages are written in batches of up to 64 pages using libaio. vmcache frees those pages individually with madvise, while vmcache+exmap batches them into a single EXMAP_FREE call. Page allocations are not batched to avoid increasing latencies (we use one EXMAP_ALLOC call per allocation).

Competitors. We use three state-of-the-art open-source storage engines based on B+trees as competitors: (1) LeanStore [1], (2) WiredTiger 3.2.1 [5], and (3) LMDB 0.9.24 [2]. For caching, LeanStore and WiredTiger rely on pointer swizzling, whereas LMDB [2] uses mmap with out-of-place writes.
Since the focus of this work is buffer management, in all systems we disable write ahead logging and run in the lowest transactional isolation level offered. LMDB and LeanStore use 4 KB pages, whereas WiredTiger uses 32 KB pages for leaf nodes on storage. We configured LeanStore to use 8 page provider threads that handle page replacement [19], which resulted in the best performance. Hardware, OS. We ran all experiments on a single-socket server with an AMD EPYC 7713 processor (64 cores, 128 hardware threads) and 512 GB main memory, of which we use 128 GB for caching. For storage, we use a 3.8 TB Samsung PM1733 SSD. The system is running unmodified Linux 5.16, except when we run vmcache+exmap, which uses our exmap kernel module. Workloads. We use TPC-C as well as a key/value workload that consists of random point lookups, 8 byte uniformly-distributed keys, and 120 byte values. The two benchmarks are obviously very different from each other: TPC-C combines complex access patterns and is write-heavy, while the lookup benchmark is simple and read-only. Both are implemented as standalone C++ programs linked against the storage engines, i.e., there is no network overhead. 5.2 End-To-End In-Memory Comparison vmcache performance and scalability. In the first experiment we investigate the performance and scalability in situations where the data set fits into main memory. The results are shown in Figure 6. The two vmcache approaches are faster than the other systems and scale very well – achieving almost 90 M lookups/s and around 3 M TPC-C transactions/s respectively. Because no page eviction happens for in-memory workloads, we see that exmap does not offer major performance benefits over the basic vmcache design. Competitor performance. LeanStore comes closest to vmcache in performance, while WiredTiger trails significantly. LMDB is competitive to LeanStore for the lookup benchmark but does not scale on the write-heavy TPC-C benchmark. 
This is because LMDB uses a single-writer model with out-of-place writes, which means that reads do not have to synchronize, but only a single writer is admitted at any point in time. Overall, the results show that the vmcache design has excellent scalability and high absolute performance.

5.3 End-To-End Out-of-Memory Comparison

Workload. Figure 7 shows the out-of-memory performance (upper plot) over time. In this experiment, the data sets are larger than the buffer pool by one order of magnitude, which means that page misses happen frequently. We start measuring right after loading the data for both workloads. Therefore, in all systems it takes some time for the performance to converge to the steady state because the buffer pool state needs to adjust to the switch from loading to the actual workload.

vmcache and exmap. For the random lookup benchmark, we see that exmap improves performance over basic vmcache by about 60%. This is caused by Linux scalability issues during page eviction. For TPC-C, the difference between vmcache and vmcache+exmap is small because even vmcache manages to become I/O bound. For both workloads, the exmap variant manages to become fully I/O bound, as is illustrated by the lower part of the figure.⁶

LeanStore. When we compare LeanStore with vmcache and exmap, we see that vmcache is substantially slower than LeanStore for random lookups in steady state (again due to vmcache being bound by the kernel). Only by using the exmap module can it become competitive with LeanStore. Eventually, exmap+vmcache performs similarly to LeanStore and both become I/O bound in steady state.

---
⁶ We measured the I/O bound for this experiment using the fio benchmarking tool and 128 threads doing synchronous random I/O operations.
The performance differences are largely due to minor implementation differences: vmcache+exmap has slightly higher steady-state performance due to a more compact B-tree (less I/O per transaction), and LeanStore temporarily (40s to 90s) outperforms vmcache+exmap due to more aggressive dirty page eviction using dedicated background threads.

**WiredTiger and LMDB.** WiredTiger and the mmap-based LMDB are significantly slower than vmcache and LeanStore. The performance of WiredTiger suffers from the 32 KB page size, whereas LMDB is bound by kernel overhead (random lookups) and the single-writer model (TPC-C). Overall, we see that while basic vmcache offers solid out-of-memory performance, as the number of I/O operations per second increases it requires the help of exmap to unlock the full potential of fast storage devices.

### 5.4 vmcache Ablation Study

To better understand the performance of virtual-memory assisted buffer management and compare it against a hash table-based design, we evaluated page access time using a microbenchmark. We focus on the in-memory case, which is why all page accesses in this experiment are hits. For all designs, we read random 4 KB pages of main memory and report the average number of instructions, cache misses, and the access latency. We report numbers for 32 KB and 128 GB of data. The former corresponds to very hot, CPU-cache-resident pages and the latter to colder pages in DRAM.

[Figure 7: Out-of-memory performance and I/O statistics (128 GB buffer pool, 128 threads, random lookup: 5 B entries ≈ 1 TB, TPC-C: 5000 warehouses ≈ 1 TB)]

### 5.5 exmap Allocation Performance

**Allocation benchmark.** The end-to-end results presented so far have shown that exmap is more efficient than the standard Linux page table manipulation primitives. However, because we were I/O bound, we have yet to evaluate how fast exmap actually is.
To quantify the performance of exmap, we used allocation benchmark scenarios similar to those in Figure 3, i.e., we constantly allocate and free pages in batches.

**Baselines.** The results are shown in Figure 8. For these, we always use batched allocations/evictions of 512 individual 4 KB pages. As a baseline, we use process_madvise with TLB batching, which already requires kernel changes. For reference, we also show the maximal DRAM read rate, which we achieved using the pmbw benchmarking tool and 64 threads (144.56 GB/s). If the OS provides memory faster than this threshold, we can be sure that memory allocation will not be the bottleneck.

**Page stealing scenarios.** exmap uses page stealing, and its performance therefore depends on the specific inter-thread allocation

<table>
<thead>
<tr> <th rowspan="2">#</th> <th rowspan="2"></th> <th rowspan="2">instruc.</th> <th colspan="2">32 KB</th> <th colspan="2">128 GB</th> </tr>
<tr> <th>cache miss</th> <th>time [ns]</th> <th>cache miss</th> <th>time [ns]</th> </tr>
</thead>
<tbody>
<tr> <td>1</td> <td>read</td> <td>3.0</td> <td>0.16</td> <td>3.3</td> <td>1.0</td> <td>219</td> </tr>
<tr> <td>2.1</td> <td>read (1 TB range)</td> <td>3.0</td> <td>0.16</td> <td>3.3</td> <td>1.0</td> <td>235</td> </tr>
<tr> <td>2.2</td> <td>+ page state</td> <td>7.0</td> <td>0.17</td> <td>7.4</td> <td>2.0</td> <td>236</td> </tr>
<tr> <td>2.3</td> <td>+ version check</td> <td>10.0</td> <td>0.18</td> <td>10.4</td> <td>2.0</td> <td>236</td> </tr>
<tr> <td>3</td> <td>hash table</td> <td>26.1</td> <td>0.10</td> <td>27.9</td> <td>2.6</td> <td>336</td> </tr>
</tbody>
</table>

Line #1 of Table 2 simply shows the random access time in a 32 KB/128 GB array and therefore represents the lower bound for any buffer manager design. The next three lines incrementally show the steps (described in Section 3.1, Section 3.2, and Section 3.3) necessary in the vmcache design. In line #2.1, we randomly read from a virtual memory range of 1 TB (instead of 128 GB), which increases latency by 7% due to additional TLB pressure.
In line #2.2, in addition to accessing the pages themselves, we also access the page state array, as is required by the vmcache design. As mentioned in Section 3.6, this additional cache miss does not noticeably increase access latency because the two memory accesses are independent and the CPU therefore performs them in parallel. In line #2.3, we also include the version validation, which results in the full vmcache page access logic. Overall, this experiment shows that a full optimistic read in vmcache incurs less than 8% overhead in comparison with a simple random memory read. We measured that an exclusive, uncontended page access (fix & unfix) on 128 GB of RAM takes 238 ns (not shown in the table). The last line in the table shows the performance of a hash table-based implementation based on open addressing. Even such a fast hash table results in substantially higher latencies because the page pointer is only obtained after the hash table lookup. Note that our hash table implementation is not synchronized, and the shown overhead is therefore actually a lower bound for the true cost of any hash table-based design.

the respective system-call interface. For the io_uring variant, we use thread-local submission queues and allow each thread to have 256 outstanding in-flight operations. We submit each read as an individual operation and do not use exmap's scattered and vectorized read capability. For vmcache, we use process_madvise with TLB batching for eviction, and for exmap, we read and evict at the same exmap interface. Since the SSD handles up to 128 parallel requests and has a maximum random-read throughput of 6 GiB/s, we are interested in which strategy can saturate it and how many threads this requires.

**I/O performance.** In Figure 9, we see that the pread variants, where each thread has at most one read operation in flight, cannot saturate the SSD.
Nevertheless, both \texttt{vmcache} and exmap closely follow the throughput of the fixed-buffer variant, and we can conclude that our \texttt{vmcache} concept is not the limiting factor here. When using \texttt{io\_uring}, where a single thread could already submit enough parallel reads to theoretically saturate the SSD, all three variants reach the maximum of 6 GiB/s at some point. With fixed buffers, 3 threads already saturate the SSD with 1.58 MiOP/s random reads. When using the regular Linux system-call interface to implement a \texttt{vmcache}, we require 11 threads to reach the same level. With a single thread, we reach 40 percent of the fixed-buffer performance. Even better, with exmap and \texttt{io\_uring}, we only require 4 threads to reach 6 GiB/s, and with three threads it is already at 96 percent. With a single thread, exmap achieves 66 percent of the single-threaded FB variant. We thus argue that in the modern hardware landscape in which multiple SSDs can be used, exmap is a perfect fit for buffer management. Both \texttt{vmcache} and exmap work with off-the-shelf asynchronous I/O in Linux. Furthermore, exmap minimizes virtual memory overhead and follows the performance of the upper-bound implementation (FB) very closely. **5.7 exmap Ablation Study** **VM optimizations.** Let us now quantify the impact of the exmap optimizations we presented in Section 4.4. For this, we perform an ablation study that is representative for scenarios with a high VM turn-over rate. The left-hand side of Figure 10 shows how the individual techniques contribute to exmap’s performance. With TLB batching, exmap’s throughput rises substantially, and batches of 512 scattered reads gain another 35%.
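The version-validated (optimistic) page access whose cost Table 2 breaks down follows a simple protocol: read the page-state word, check that the page is unlocked, speculatively read the page, then re-check the version. The following is a minimal single-writer sketch in Python for illustration only; the real design uses a single atomic 64-bit state word in C, and all names here are ours.

```python
# States held alongside a version counter, mirroring the per-page
# state word described in the text (simplified: no atomics, one writer).
UNLOCKED, LOCKED = 0, 1

class PageState:
    def __init__(self):
        self.version = 0      # bumped on every exclusive write
        self.state = UNLOCKED
        self.payload = 0      # stands in for the 4 KB page contents

    def write_locked(self, value):
        """Exclusive writer: lock, mutate the page, bump version, unlock."""
        self.state = LOCKED
        self.payload = value
        self.version += 1
        self.state = UNLOCKED

    def optimistic_read(self):
        """Optimistic reader: retry until a consistent snapshot is observed."""
        while True:
            v_before = self.version
            if self.state != UNLOCKED:
                continue               # writer active: restart
            data = self.payload        # speculative read of the page
            if self.version == v_before:
                return data            # version unchanged: snapshot is valid
```

The point of the protocol is visible in Table 2: the extra page-state access and version check add only a few instructions, and the page-state cache miss overlaps with the page access itself.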
6 RELATED WORK We already described prior work on buffer management in Section 2, so let us now discuss related work on virtual memory and operating systems. **Exploiting VM in DBMS.** Besides caching, virtual memory manipulation has also been shown to be useful in other database use cases such as query processing [37], and for implementing dynamic data structures such as Packed Memory Arrays [26]. In multi-threaded situations, these applications may run into kernel scalability issues and would therefore likely benefit from the optimizations we propose in Section 4. **DBMS/OS co-design.** Let us mention two recent DBMS/OS co-design projects. MxKernel [3] is a runtime system [32] for data-intensive systems on many-core CPUs. The focus of DBOS [38] is on cloud orchestration (i.e., managing and coordinating multiple instances) and on using database concepts and systems to simplify this task. Again, a technique like exmap is orthogonal to both designs and could be exploited by them. **Optimizing TLB shootdowns.** The operating systems community has identified TLB shootdowns as a major performance problem and has proposed several techniques, including batching, for mitigating them [6, 7, 22]. exmap uses the same batching idea to speed up VM manipulation. **Incremental VM improvements.** Existing work on improving the Linux VM subsystem can be split into two general categories: (1) speed up the existing infrastructure and (2) provide new VM management systems. For the first, Song et al. [39] modify the allocation strategy in the page fault handler. Freed pages are saved in application-local lists instead of being directly returned to the system, which enables the recycling of pages within an application. With exmap, we extend this to explicitly-addressed free lists to avoid contention within the allocation path. Additionally, they batch write-back operations to mitigate the overhead of the write I/O path. Choi et al.
[10] cache removed VMAs for future reuse instead of deleting them immediately on munmap. They also extend the memory hinting system of madvise, adding new functionality like asynchronous map-ahead. Overall, the speedups of both these incremental approaches are limited because of the complex and general nature of the Linux VM subsystem. Another bottleneck of Linux’s VM system is the management of the VMA list, which is stored in a lock-protected red-black tree. Bonsai [11] uses an RCU-based binary tree to provide lock-free page faults. In follow-up work, RadixVM [12] speeds up mapping operations in non-overlapping address ranges. As exmap and vmcache only use a single long-living VMA and memory is not implicitly allocated through page faults, we do not expect significant speedups, although these techniques are orthogonal to our approach. **New VM subsystems.** An alternative to incremental changes is to develop a specialized Linux VM subsystem. In UMap [35], memory-mapped I/O is handled entirely in user-space using userfaultfd. With memory hints for prefetching, caching and evicting, as well as configurable page sizes, they achieve a speedup of up to 2.5 times compared to unmodified Linux. UMap, similar to our exmap approach, manages separate regions that bypass the memory management of Linux. The approach also gives the application more control by providing configurable thresholds to influence the eviction strategy. Unlike vmcache, however, the kernel still controls page eviction. Furthermore, user-level page-fault handling introduces system-call overheads that run counter to the goal of improving VM speeds. Papagiannis et al. identify bottlenecks in Linux’s VM system and propose FastMap [34] as an mmap alternative for implicit memory-mapped file I/O. They alleviate lock contention through per-core free page lists as well as separate clean and dirty page tables. They also identify TLB invalidation as a limiting factor to scalability, which they also solve via batched TLB shootdowns.
Overall, their implementation is up to 5 times faster than unmodified Linux, and provides up to 11.8 times more random IOPS. Though significantly faster than Linux’s mmap, both UMap and FastMap offer no explicit control over page eviction, which makes them unattractive for database systems. **vmcache.** In this paper, we propose virtual-memory assisted, but DBMS-controlled buffer management. By exploiting virtual memory, vmcache is not only fast and scalable, but is also easy to implement, enables variable-size pages, and supports graph data. The basic vmcache design only relies on widely-available OS features and is therefore portable. This combination of features makes vmcache applicable to a wide variety of data management systems. vmcache is available at https://github.com/viktorleis/vmcache. **exmap.** With fast storage devices, the page table manipulation primitives that vmcache relies on can become a performance bottleneck. To solve this problem, we propose exmap, a specialized OS interface to support fast page table manipulation. We implemented exmap as a Linux kernel module that is highly efficient and scalable. When one combines vmcache with exmap, one can fully exploit even very fast storage devices. exmap is available at https://github.com/tuhhoosg/exmap. ACKNOWLEDGMENTS The roots of this project lie in discussions at Dagstuhl Seminar 21283 “Data Structures for Modern Memory and Storage Hierarchies”. This work was funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) – 447457559, 46898364, 501887536. REFERENCES [37] Felix Martin Schuhknecht, Jens Dittrich, and Ankur Sharma. 2016. RUMA has it: Rewired User-space Memory Access is Possible! PVLDB 9, 10 (2016), 764–779.
mTCP: A Highly Scalable User-level TCP Stack for Multicore Systems EunYoung Jeong, Shinae Woo, Muhammad Jamshed, Haewon Jeong, Sunghwan Ihm*, Dongsu Han, and KyoungSoo Park KAIST *Princeton University Abstract Scaling the performance of short TCP connections on multicore systems is fundamentally challenging. Although many proposals have attempted to address various shortcomings, inefficiency of the kernel implementation still persists. For example, even state-of-the-art designs spend 70% to 80% of CPU cycles in handling TCP connections in the kernel, leaving only small room for innovation in the user-level program. This work presents mTCP, a high-performance user-level TCP stack for multicore systems. mTCP addresses the inefficiencies from the ground up—from packet I/O and TCP connection management to the application interface. In addition to adopting well-known techniques, our design (1) translates multiple expensive system calls into a single shared memory reference, (2) allows efficient flow-level event aggregation, and (3) performs batched packet I/O for high I/O efficiency. Our evaluations on an 8-core machine showed that mTCP improves the performance of small message transactions by a factor of 25 compared to the latest Linux TCP stack and a factor of 3 compared to the best-performing research system known so far. It also improves the performance of various popular applications by 33% to 320% compared to those on the Linux stack. 1 Introduction Short TCP connections are becoming widespread. While large content transfers (e.g., high-resolution videos) consume the most bandwidth, short “transactions” \(^1\) dominate the number of TCP flows. In a large cellular network, for example, over 90% of TCP flows are smaller than 32 KB and more than half are less than 4 KB [45]. Scaling the processing speed of these short connections is important not only for popular user-facing online services [1, 2, 18] that process small messages.
It is also critical for backend systems (e.g., memcached clusters [36]) and middleboxes (e.g., SSL proxies [32] and redundancy elimination [31]) that must process TCP connections at high speed. Despite recent advances in software packet processing [4, 7, 21, 27, 39], supporting high TCP transaction rates remains very challenging. For example, Linux TCP transaction rates peak at about 0.3 million transactions per second (shown in Section 5), whereas packet I/O can scale up to tens of millions of packets per second [4, 27, 39]. Prior studies attribute the inefficiency to either the high system call overhead of the operating system [28, 40, 43] or inefficient implementations that cause resource contention on multicore systems [37]. The former approach drastically changes the I/O abstraction (e.g., socket API) to amortize the cost of system calls. The practical limitation of such an approach, however, is that it requires significant modifications within the kernel and forces existing applications to be re-written. The latter typically makes incremental changes in existing implementations and, thus, falls short in fully addressing the inefficiencies. In this paper, we explore an alternative approach that delivers high performance without requiring drastic changes to the existing code base. In particular, we take a clean-slate approach to assess the performance of an untethered design that sidesteps the limitations of the kernel implementation. To this end, we build a user-level TCP stack from the ground up by leveraging high-performance packet I/O libraries that allow applications to directly access the packets. Our user-level stack, mTCP, is designed for three explicit goals: 1. Multicore scalability of the TCP stack. 2. Ease of use (i.e., application portability to mTCP). 3. Ease of deployment (i.e., no kernel modifications). Implementing TCP in the user level provides many opportunities.
In particular, it can eliminate the expensive system call overhead by translating syscalls into inter-process communication (IPC). However, it also introduces new challenges. We first review the major inefficiencies in existing TCP implementations and proposed solutions. We then discuss our motivation towards a user-level TCP stack. <table> <thead> <tr> <th></th> <th>Accept queue</th> <th>Conn. Locality</th> <th>Socket API</th> <th>Event Handling</th> <th>Packet I/O</th> <th>Application Modification</th> <th>Kernel Modification</th> </tr> </thead> <tbody> <tr> <td>PSIO [12], DPDK [4], PF_RING [7], netmap [21]</td> <td colspan="4">No TCP stack</td> <td>Batched</td> <td>–</td> <td>No (NIC driver)</td> </tr> <tr> <td>Linux-2.6</td> <td>Shared</td> <td>None</td> <td>BSD socket</td> <td>Syscalls</td> <td>Per packet</td> <td>Transparent</td> <td>No</td> </tr> <tr> <td>Linux-3.9</td> <td>Per-core</td> <td>None</td> <td>BSD socket</td> <td>Syscalls</td> <td>Per packet</td> <td>Add option SO_REUSEPORT</td> <td>No</td> </tr> <tr> <td>Affinity-Accept [37]</td> <td>Per-core</td> <td>Yes</td> <td>BSD socket</td> <td>Syscalls</td> <td>Per packet</td> <td>Transparent</td> <td>Yes</td> </tr> <tr> <td>MegaPipe [28]</td> <td>Per-core</td> <td>Yes</td> <td>lwsocket</td> <td>Batched syscalls</td> <td>Per packet</td> <td>Event model to completion I/O</td> <td>Yes</td> </tr> <tr> <td>FlexSC [40], VOS [43]</td> <td>Shared</td> <td>None</td> <td>BSD socket</td> <td>Batched syscalls</td> <td>Per packet</td> <td>Change to use new API</td> <td>Yes</td> </tr> <tr> <td>mTCP</td> <td>Per-core</td> <td>Yes</td> <td>User-level socket</td> <td>Batched function calls</td> <td>Batched</td> <td>Socket API to mTCP API</td> <td>No (NIC driver)</td> </tr> </tbody> </table> Table 1: Comparison of the benefits of previous work and mTCP.
Implementing TCP at the user level introduces fundamental challenges that must be addressed—processing IPC messages, including shared memory messages, involves context switches that are typically much more expensive than the system calls themselves [3, 29]. Our key approach is to amortize the context-switch overhead over a batch of packet-level and socket-level events. While packet-level batching [27] and system-call batching [28, 40, 43] (including socket-level events) have been explored individually, integrating the two requires a careful design of the networking stack that translates packet-level events to socket-level events and vice versa. This paper makes two key contributions: First, we demonstrate that significant performance gain can be obtained by integrating packet- and socket-level batching. In addition, we incorporate all known optimizations, such as per-core listen sockets and load balancing of concurrent flows on multicore CPUs with receive-side scaling (RSS). The resulting TCP stack outperforms Linux and MegaPipe [28] by up to 25x (w/o SO_REUSEPORT) and 3x, respectively, in handling TCP transactions. This directly translates to application performance; mTCP increases existing applications’ performance by 33% (SSLShader) to 320% (lighttpd). Second, unlike other designs [23, 30], we show that such integration can be done purely at the user level in a way that ensures ease of porting without requiring significant modifications to the kernel. mTCP provides BSD-like socket and epoll-like event-driven interfaces. Migrating existing event-driven applications is easy since one simply needs to replace the socket calls with their counterparts in mTCP (e.g., accept() becomes mtcp_accept()) and use the per-core listen socket. 2.1 Limitations of the Kernel’s TCP Stack Recent studies proposed various solutions to address four major inefficiencies in the Linux TCP stack: lack of connection locality, shared file descriptor space, inefficient packet processing, and heavy system call overhead [28].
Lack of connection locality: Many applications are multi-threaded to scale their performance on multicore systems. However, they typically share a listen socket that accepts incoming connections on a well-known port. As a result, multiple threads contend for a lock to access the socket’s accept queue, resulting in a significant performance degradation. Also, the core that executes the kernel code for handling a TCP connection may be different from the one that runs the application code that actually sends and receives data. Such lack of connection locality introduces additional overhead due to increased CPU cache misses and cache-line sharing [37]. Affinity-Accept [37] and MegaPipe [28] address this issue by providing a local accept queue in each CPU core and ensuring flow-level core affinity across the kernel and application thread. Recent Linux kernel (3.9.4) also partly addresses this by introducing the SO_REUSEPORT [14] option, which allows multiple threads/processes to bind to the same port number. Shared file descriptor space: In POSIX-compliant operating systems, the file descriptor (fd) space is shared within a process. For example, Linux searches for the minimum available fd number when allocating a new socket. In a busy server that handles a large number of concurrent connections, this incurs significant overhead due to lock contention between multiple threads [20]. The use of file descriptors for sockets, in turn, creates extra overhead of going through the Linux Virtual File System (VFS), a pseudo-filesystem layer for supporting common file operations. MegaPipe eliminates this layer for sockets by explicitly partitioning the fd space for sockets and regular files [28]. Inefficient per-packet processing: Previous studies indicate per-packet memory (de)allocation and DMA overhead, NUMA-unaware memory access, and heavy data structures (e.g., sk_buff) as the main bottlenecks in processing small packets [27, 39]. 
To reduce the per-packet overhead, it is essential to batch process multiple packets. While many recent user-level packet I/O libraries [4, 7, 27, 39] address these problems, these libraries do not provide a full-fledged TCP stack, and not all optimizations are incorporated into the kernel. System call overhead: The BSD socket API requires frequent user/kernel mode switching when there are many short-lived concurrent connections. As shown in FlexSC [40] and VOS [43], frequent system calls can result in processor state (e.g., top-level caches, branch prediction table, etc.) pollution that causes performance penalties. Previous solutions propose system call batching [28, 43] or efficient system call scheduling [40] to amortize the cost. However, it is difficult to readily apply either approach to existing applications since they often require user and/or kernel code modification due to the changes to the system call interface and/or its semantics. Table 1 summarizes the benefits provided by previous work compared to a vanilla Linux kernel. Note that there is not a single system that provides all of the benefits. ### 2.2 Why User-level TCP? While many previous designs have tried to scale the performance of TCP in multicore systems, few of them truly overcame the aforementioned inefficiencies of the kernel. This is evidenced by the fact that even the best-performing system, MegaPipe, spends a dominant portion of CPU cycles (~80%) inside the kernel. Even more alarming is the fact that these CPU cycles are not utilized efficiently; according to our own measurements, Linux spends more than 4x as many cycles (in the kernel and the TCP stack combined) as mTCP does while handling the same number of TCP transactions. To reveal the significance of this problem, we profile the server’s CPU usage when it is handling a large number of concurrent TCP transactions (8K to 48K concurrent TCP connections).
For this experiment, we use a simple web server (lighttpd v1.4.32 [8]) running on an 8-core Intel Xeon CPU (2.90 GHz, E5-2690) with 32 GB of memory and a 10 Gbps NIC (Intel 82599 chipset). Our clients use ab v2.3 [15] to repeatedly download a 64B file per connection. Multiple clients are used in our experiment to saturate the CPU utilization of the server. Figure 1 shows the breakdown of CPU usage comparing four versions of the lighttpd server: a multithreaded version that harnesses all 8 CPU cores on Linux 2.6.32 and 3.10.12 [Linux], a version ported to MegaPipe [28] (MegaPipe), and a version using mTCP, our user-level TCP stack, on Linux 2.6.32 (mTCP). Note that MegaPipe adopts all recent optimizations such as per-core accept queues and file descriptor space, as well as user-level system call batching, but reuses the existing kernel for packet I/O and TCP/IP processing. Our results indicate that Linux and MegaPipe spend 80% to 83% of CPU cycles in the kernel, which leaves only a small portion of the CPU to user-level applications. Upon further investigation, we find that lock contention for shared in-kernel data structures, buffer management, and frequent mode switches are the main culprits. This implies that the kernel, including its stack, is the major bottleneck. Furthermore, the results in Figure 2 show that the CPU cycles are not spent efficiently in Linux and MegaPipe. The bars indicate the relative number of transactions processed per CPU cycle inside the kernel and the TCP stack (i.e., outside the application), normalized by the performance of Linux 2.6.32. We find that mTCP uses the CPU cycles 4.3 times more effectively than Linux. As a result, mTCP achieves 3.1x and 1.8x the performance of Linux 2.6 and MegaPipe, respectively, while using fewer CPU cycles in the kernel and the TCP stack. Now, the motivation of our work is clear.
Can we design a user-level TCP stack that incorporates all existing optimizations into a single system and achieve all benefits that individual systems have provided in the past? How much of a performance improvement can we get if we build such a system? Can we bring the performance of existing packet I/O libraries to the TCP stack? --- This is the latest Linux kernel version as of this writing. We use Linux 3.1.3 for MegaPipe due to its patch availability. The goal of mTCP is to achieve high scalability on multicore systems while maintaining backward compatibility to existing multi-threaded, event-driven applications. Figure 3 presents an overview of our system. At the highest level, applications link to the mTCP library, which provides a socket API and an event-driven programming interface for backward compatibility. The two underlying components, user-level TCP stack and packet I/O library, are responsible for achieving high scalability. Our user-level TCP implementation runs as a thread on each CPU core within the same application process. The mTCP thread directly transmits and receives packets to and from the NIC using our custom packet I/O library. Existing user-level packet libraries only allow one application to access an NIC port. Thus, mTCP can only support one application per NIC port. However, we believe this can be addressed in the future using virtualized network interfaces (more details in Section 3.3). Applications can still choose to work with the existing TCP stack, provided that they only use NICs that are not used by mTCP. In this section, we first present the design of mTCP’s highly scalable lower-level components in Sections 3.1 and 3.2. We then discuss the API and semantics that mTCP provides to support applications in Section 3.3. 
3 Design The use of PSIO brings the opportunity to amortize the overhead of system calls and context switches throughout the entire system, in addition to eliminating the per-packet memory allocation and DMA overhead. In PSIO, packets are received and transmitted in batches [27], amortizing the cost of expensive PCIe operations, such as DMA address mapping and IOMMU lookups. 3.2 User-level TCP Stack A user-level TCP stack naturally eliminates many system calls (e.g., socket I/O), which can potentially reduce a significant part of the Linux TCP overhead. One approach to a user-level TCP stack is to implement it completely as a library that runs as part of the application’s main thread. This “zero-thread TCP” could potentially provide the best performance since it translates costly system calls into light-weight user-level function calls. However, the fundamental limitation of this approach is that the correctness of internal TCP processing depends on the timely invocation of TCP functions from the application. In mTCP, we choose to create a separate TCP thread to avoid such an issue and to minimize the porting effort for existing applications. Figure 4 shows how mTCP interacts with the application thread. The application uses mTCP library functions that communicate with the mTCP thread via shared buffers. The access to the shared buffers is granted only through the library functions, which allows safe sharing of the internal TCP data. When a library function needs to modify the shared data, it simply places a request (e.g., a write() request) in a job queue. This way, multiple requests from different flows can be piled into the job queue in each loop and processed in batch when the mTCP thread regains the CPU. Flow events from the mTCP thread (e.g., new connections, new data arrival, etc.) are delivered in a similar way.
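The shared job queue between the application thread and the mTCP thread, as described above, can be modeled as a single-producer/single-consumer ring buffer: the application enqueues requests, and the mTCP thread drains them in one batch when it regains the CPU. The following is an illustrative sketch in Python; mTCP's actual queues are lock-free C structures, and the capacity here is arbitrary.

```python
class SpscQueue:
    """Single-producer/single-consumer ring buffer: the producer only
    advances `head` and the consumer only advances `tail`, which is
    what makes the lock-free design possible between exactly two threads."""
    def __init__(self, capacity=1024):
        self.buf = [None] * capacity
        self.cap = capacity
        self.head = 0   # written only by the producer (application thread)
        self.tail = 0   # written only by the consumer (mTCP thread)

    def enqueue(self, job):
        """Producer side: place one request (e.g., a write() request)."""
        if self.head - self.tail == self.cap:
            return False                      # queue full, caller must retry
        self.buf[self.head % self.cap] = job
        self.head += 1                        # publish after the slot is written
        return True

    def drain(self):
        """Consumer side: take every pending request in a single batch."""
        batch = []
        while self.tail < self.head:
            batch.append(self.buf[self.tail % self.cap])
            self.tail += 1
        return batch
```

Draining everything at once is what lets the mTCP thread amortize one context switch over many piled-up requests from different flows.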
This, however, requires additional overhead of managing concurrent data structures and context switches between the application and the mTCP thread. Such cost is unfortunately not negligible, typically much larger than the system call overhead [29]. One measurement on a recent Intel CPU shows that a thread context switch takes 19 times the duration of a null system call [3]. In this section, we describe how mTCP addresses these challenges and achieves high scalability with the user-level TCP stack. We first start from how mTCP processes TCP packets in Section 3.2.1, then present a set of key optimizations we employ to enhance its performance in Sections 3.2.2, 3.2.3, and 3.2.4. 3.2.1 Basic TCP Processing When the mTCP thread reads a batch of packets from the NIC’s RX queue, mTCP passes them to the TCP packet processing logic which follows the standard TCP specification. For each packet, mTCP first searches (or creates) a TCP control block (tcb) of the corresponding flow in the flow hash table. As in Figure 5, if the server side receives an ACK for its SYN/ACK packet (1), the tcb for the new connection will be enqueued to an accept queue (2), and a read event is generated for the listening socket (3). If a new data packet arrives, mTCP copies the payload to the socket’s read buffer and enqueues a read event to an internal event queue. mTCP also generates an ACK packet and keeps it in the ACK list of a TX manager until it is written to a local TX queue. After processing a batch of received packets, mTCP flushes the queued events to the application event queue (4) and wakes up the application by signaling it. When the application wakes up, it processes multiple events in a single event loop (5), and writes responses from multiple flows without a context switch. Each socket’s write() call writes data to its send buffer (6), and enqueues its tcb to the write queue (7). Later, mTCP collects the tcbs that have data to send, and puts them into a send list (8).
Finally, a batch of outgoing packets from the list will be sent by a packet I/O system call, transmitting them to the NIC’s TX queue. 3.2.2 Lock-free, Per-core Data Structures To minimize inter-core contention between the mTCP threads, we localize all resources (e.g., flow pool, socket buffers, etc.) in each core, in addition to using RSS for flow-level core affinity. Moreover, we completely eliminate locks by using lock-free data structures between the application and mTCP. On top of that, we also devise an efficient way of managing TCP timer operations. **Thread mapping and flow-level core affinity:** We preserve flow-level core affinity in two stages. First, the packet I/O layer evenly distributes TCP connection workloads across available CPU cores with RSS. This essentially reduces the TCP scalability problem to each core. Second, mTCP spawns one TCP thread for each application thread and co-locates them in the same physical CPU core. This preserves the core affinity of packet and flow processing, while allowing them to use the same CPU cache without cache-line sharing. **Multi-core and cache-friendly data structures:** We keep most data structures, such as the flow hash table, socket id manager, and the pool of `tcb` and socket buffers, local to each TCP thread. This significantly reduces any sharing across threads and CPU cores, and achieves high parallelism. When a data structure must be shared across threads (e.g., between mTCP and the application thread), we keep all data structures local to each core and use lock-free data structures by using a single-producer and single-consumer queue. We maintain write, connect, and close queues, whose requests go from the application to mTCP, and an accept queue where new connections are delivered from mTCP to the application.
In addition, we keep the size of frequently accessed data structures small to maximize the benefit of the CPU cache, and make them aligned with the size of a CPU cache line to prevent false sharing. For example, we divide `tcb` into two parts: the first-level structure holds 64 bytes of the most frequently-accessed fields and two pointers to next-level structures that hold 128 and 192 bytes of receive- and send-related variables, respectively. Lastly, to minimize the overhead of frequent memory allocation/deallocation, we allocate a per-core memory pool for `tcb`s and socket buffers. We also utilize huge pages to reduce TLB misses when accessing the `tcb`s. Because their access pattern is essentially random, it often causes a large number of TLB misses. Putting the memory pool of `tcb`s and a hash table that indexes them into huge pages reduces the number of TLB misses.

**Efficient TCP timer management:** TCP requires timer operations for retransmission timeouts, connections in the TIME_WAIT state, and connection keep-alive checks. mTCP provides two types of timers: one managed by a sorted list and another built with a hash table. For coarse-grained timers, such as managing connections in the TIME_WAIT state and connection keep-alive checks, we keep a list of `tcb`s sorted by their timeout values. Every second, we check the list and handle any `tcb`s whose timers have expired. Note that keeping the list sorted is trivial, since a newly-added entry always has a strictly larger timeout than any of those already in the list. For fine-grained retransmission timers, we use the remaining time (in milliseconds) as the hash table index, and process all `tcb`s in the same bucket when a timeout expires for the bucket. Since retransmission timers are used by virtually all `tcb`s whenever a data (or SYN/FIN) packet is sent, keeping a sorted list would consume a significant amount of CPU cycles.
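The hashed retransmission timer can be pictured as a bucket array indexed by the expiry time in milliseconds, so that every tcb in one bucket is handled together when its tick arrives. A toy C sketch with names of our own choosing — a real implementation must also verify each entry's actual expiry, since distant timeouts can hash into the same bucket, which we omit here for brevity:

```c
#include <stddef.h>

#define RTO_WHEEL_MS 4096  /* buckets cover ~4 seconds of timeouts */

struct rt_tcb {            /* only the fields the timer needs */
    int id;
    struct rt_tcb *next;
};

static struct rt_tcb *wheel[RTO_WHEEL_MS];

/* Arm a retransmission timer: bucket index is the expiry in ms. */
void rto_arm(struct rt_tcb *t, unsigned expiry_ms) {
    unsigned b = expiry_ms % RTO_WHEEL_MS;
    t->next = wheel[b];
    wheel[b] = t;
}

/* On a millisecond tick, process the whole bucket in one pass and
 * return how many tcbs were handled. */
int rto_fire(unsigned now_ms, void (*handle)(struct rt_tcb *)) {
    unsigned b = now_ms % RTO_WHEEL_MS;
    int n = 0;
    for (struct rt_tcb *t = wheel[b]; t; ) {
        struct rt_tcb *next = t->next;
        if (handle) handle(t);
        t = next;
        n++;
    }
    wheel[b] = NULL;
    return n;
}
```

Arming is O(1) and firing touches only the tcbs that expire in the same millisecond, which is why this beats a sorted list when nearly every tcb carries a retransmission timer.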
Such fine-grained event batch processing with millisecond granularity greatly reduces the overhead.

**3.2.3 Batched Event Handling**

mTCP transparently enables batch processing of multiple flow events, which effectively amortizes the context switch cost over multiple events. After receiving packets in batch, mTCP processes them to generate a batch of flow-level events. These events are then passed up to the application, as illustrated in Figure 6. The TX direction works similarly, as the mTCP library transparently batches the write events into a write queue. While the idea of amortizing the system call overhead using batches is not new [28, 43], we demonstrate that benefits similar to those of batched syscalls can be effectively achieved in user-level TCP. In our experiments with 8 RX/TX queues per 10 Gbps port, the average number of events that an mTCP thread generates in a single scheduling period is about 2,170 for both TX and RX directions (see Section 5.1). This ensures that the cost of a context switch is amortized over a large number of events. Note that the use of multiple queues does not decrease the number of events processed in a batch.

**3.2.4 Optimizing for Short-lived Connections**

We employ two optimizations for supporting many short-lived concurrent connections.

**Priority-based packet queuing:** For short TCP connections, the control packets (e.g., SYN and FIN) have a critical impact on the performance. Since the control packets are mostly small, they can often be delayed for a while when they contend for an output port with a large number of data packets. We prioritize control packets by keeping them in a separate list. We maintain three kinds of lists for TX, as shown in Figure 5. First, a control list contains the packets that are directly related to the state of a connection, such as SYN, SYN/ACK, and ACK, or FIN and FIN/ACK. We then manage ACKs for incoming data packets in an ACK list.
Finally, we keep a data list to send data in the socket buffers of TCP flows. When we put actual packets in a TX queue, we first fill the packets from the control list and the ACK list, and only then queue the data packets. By doing this, we prioritize important packets and prevent short connections from being delayed by long-lived connections.

**Lightweight connection setup:** In addition, we find that a large portion of connection setup cost comes from allocating memory space for TCP control blocks and socket buffers. When many threads concurrently call `malloc()` or `free()`, the memory manager in the kernel can easily become a point of contention. To avoid this problem, we pre-allocate large memory pools and manage them at user level to satisfy memory (de)allocation requests locally in the same thread.

### 3.3 Application Programming Interface

One of our primary design goals is to minimize the porting effort of existing applications so that they can easily benefit from our user-level TCP stack. Therefore, our programming interface must preserve the most commonly used semantics and application interfaces as much as possible. To this end, mTCP provides a socket API and an event-driven programming interface.

**User-level socket API:** We provide a BSD-like socket interface; for each BSD socket function, we have a corresponding function call (e.g., `accept()` becomes `mtcp_accept()`). In addition, we provide functionalities that are frequently used with sockets, e.g., `fcntl` and `ioctl`, for setting the socket as nonblocking or getting/setting the socket buffer size. To support various applications that require inter-process communication using `pipe()`, we also provide `mtcp_pipe()`. The socket descriptor space in mTCP (including the fds of `pipe()` and `epoll()`) is local to each mTCP thread; each mTCP socket is associated with a thread context. This allows parallel socket creation from multiple threads by removing lock contention on the socket descriptor space.
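Because the descriptor space is private to each mTCP thread, handing out a socket id can be a constant-time pop from a per-thread free list rather than a scan for the lowest free fd. A toy sketch with illustrative names (not mTCP's code):

```c
#define MAX_FDS 1024

/* Free descriptors kept on a per-thread stack: allocation is an
 * O(1) pop, with no scan for the minimum fd as POSIX open()/
 * socket() semantics would otherwise require. */
static int free_fds[MAX_FDS];
static int free_top;

void fd_pool_init(void) {
    free_top = 0;
    for (int fd = MAX_FDS - 1; fd >= 0; fd--)
        free_fds[free_top++] = fd;   /* fd 0 ends up on top */
}

int fd_alloc(void) {
    return free_top > 0 ? free_fds[--free_top] : -1;
}

void fd_free(int fd) {
    free_fds[free_top++] = fd;
}
```

Since the stack is private to one thread, no lock protects it; whatever descriptor happens to be on top is returned, which is exactly the relaxation of `socket()` semantics described next.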
We also relax the semantics of `socket()` such that it returns any available socket descriptor instead of the minimum available fd. This reduces the overhead of finding the minimum available fd.

**User-level event system:** We provide an `epoll()`-like event system. While our event system aggregates the events from multiple flows for batching effects, we do not require any modification in the event handling logic. Applications can fetch the events through `mtcp_epoll_wait()` and register events through `mtcp_epoll_ctl()`, which correspond to `epoll_wait()` and `epoll_ctl()` in Linux. Our current `mtcp_epoll()` implementation supports events from mTCP sockets (including listening sockets) and pipes. We plan to integrate other types of events (e.g., timers) in the future.

### 4 Implementation

We implement mTCP in 11,473 lines of C code (LoC), including packet I/O, TCP flow management, user-level socket API and event system, and 552 lines of code to patch the PSIO library. For threading and thread synchronization, we use `pthread`, the standard POSIX thread library. Our TCP implementation follows RFC 793 [17]. It supports basic TCP features such as connection management, reliable data transfer, flow control, and congestion control. For reliable transfer, it implements cumulative acknowledgment, retransmission timeout, and fast retransmission. mTCP also implements popular options such as timestamp, Maximum Segment Size (MSS), and window scaling.

---

4 This optimization can potentially make the system more vulnerable to attacks, such as SYN flooding. However, existing solutions, such as SYN cookies, can be used to mitigate the problem.

5 The number is counted by SLOCCount 2.26.
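Handling options such as MSS and window scaling amounts to walking the option bytes of a TCP header. A hedged sketch of such a parser — the encodings (kind 2 for MSS, kind 3 for window scale) are standard per RFC 793 and RFC 1323, but this function is our illustration, not mTCP's code:

```c
#include <stdint.h>
#include <stddef.h>

/* Extract the MSS (kind 2) and window-scale (kind 3) options from
 * the option bytes of a TCP header. Returns -1 on malformed input. */
int parse_tcp_options(const uint8_t *opt, size_t len,
                      uint16_t *mss, uint8_t *wscale) {
    size_t i = 0;
    while (i < len) {
        uint8_t kind = opt[i];
        if (kind == 0) break;             /* end-of-option-list */
        if (kind == 1) { i++; continue; } /* NOP padding */
        if (i + 1 >= len) return -1;      /* truncated length byte */
        uint8_t olen = opt[i + 1];
        if (olen < 2 || i + olen > len) return -1;
        if (kind == 2 && olen == 4)       /* MSS: 16-bit big-endian */
            *mss = (uint16_t)((opt[i + 2] << 8) | opt[i + 3]);
        else if (kind == 3 && olen == 3)  /* window scale: shift count */
            *wscale = opt[i + 2];
        i += olen;                        /* skip unknown options */
    }
    return 0;
}
```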
4.1 mTCP Socket API

Our BSD-like socket API takes on per-thread semantics. Each mTCP socket function requires a context, mctx_t, which identifies the corresponding mTCP thread. Our event notification function, mtcp_epoll, also enables easy migration of existing event-driven applications. Listing 1 shows an example mTCP application.

```c
mctx_t mctx = mtcp_create_context();
ep_id = mtcp_epoll_create(mctx, N);
mtcp_listen(mctx, listen_id, 4096);
while (1) {
    n = mtcp_epoll_wait(mctx, ep_id, events, N, -1);
    for (i = 0; i < n; i++) {
        sockid = events[i].data.sockid;
        if (sockid == listen_id) {        /* new connection */
            c = mtcp_accept(mctx, listen_id, NULL);
            mtcp_setsock_nonblock(mctx, c);
            ev.events = EPOLLIN | EPOLLOUT;
            ev.data.sockid = c;
            mtcp_epoll_ctl(mctx, ep_id, EPOLL_CTL_ADD, c, &ev);
        } else if (events[i].events == EPOLLIN) {
            r = mtcp_read(mctx, sockid, buf, LEN);
            if (r == 0)
                mtcp_close(mctx, sockid);
        } else if (events[i].events == EPOLLOUT) {
            mtcp_write(mctx, sockid, buf, len);
        }
    }
}
```

Listing 1: Sample mTCP application.

mTCP supports mtcp_getsockopt() and mtcp_setsockopt() for socket options, and mtcp_readv() and mtcp_writev() for scatter-gather I/O as well.

4.2 Porting Existing Applications

We ported four different applications to mTCP.

**Web server (lighttpd-1.4.32):** Lighttpd is an open-source single-threaded web server that uses event-driven I/O for servicing client requests. We enabled multi-threading to support a per-core listen socket and ported it to mTCP. We changed only ~65 LoC to use mTCP-specific event and socket function calls. For multi-threading, a total of ~800 lines were modified out of lighttpd's ~40,000 LoC. We also ported lighttpd to MegaPipe for comparison.
Because its API is based on the I/O completion model, the porting required more effort, as it involved revamping lighttpd's event-based fdevent backend library; an additional 126 LoC were required to enable MegaPipe I/O from the multi-threaded version.

**Apache benchmarking tool (ab-2.3):** ab is a performance benchmarking tool that generates HTTP requests. It acts as a client to measure the performance of a Web server. Scaling its performance is important because saturating a 10 Gbps port with small transactions requires multiple machines that run ab. However, with mTCP we can reduce the number of machines by more than a factor of 4 (see Section 5.3). Porting ab was similar to porting lighttpd since ab is also single-threaded. However, ab uses the Apache Portable Runtime (APR) library [16] that encapsulates socket function calls, so we ported the APR library (version 1.4.6) to use mTCP. We modified 29 lines of the APR library (out of 66,493 LoC), and 503 lines out of 2,319 LoC of the ab code to make it multi-threaded.

**SSL reverse proxy (SSLShader):** SSLShader is a high-performance SSL reverse proxy that offloads crypto operations to GPUs [32]. For small-file workloads, SSLShader reports that the performance bottleneck lies in the TCP stack, which consumes over 60% of the CPU cycles and leaves the GPU under-utilized. Porting SSLShader to mTCP was straightforward since SSLShader was already multi-threaded and uses epoll() for event notification. Besides porting socket function calls, we also replaced pipe() with mtcp_pipe(), which is used to notify the completion of crypto operations by GPU threads. Out of 6,618 lines of C++ code, only 43 lines were modified to use mTCP. It took less than a day to port to mTCP and to finish basic testing and debugging.

**Realistic HTTP replay client/server (WebReplay):** WebReplay is a pair of client and server programs that reproduces realistic HTTP traffic based on the traffic log collected at a 10 Gbps backhaul link in a large cellular network [45].
Each line in the log has a request URL, a response size, start and end timestamps, and a list of SHA1 hashes of the 4KB content chunks of the original response. Using the content hashes, the server dynamically generates a response that preserves the redundancy in the original traffic; the purpose of the system is to reproduce Web traffic with a similar amount of redundancy as the original. Using this, one can test the correctness and performance of network redundancy elimination (NRE) systems that sit between the server and the client. To replay the traffic at high speed, however, the WebReplay server must handle hundreds of thousands of concurrent short connections, which requires high TCP performance. WebReplay is multi-threaded and uses the libevent library [6], which in turn calls epoll() for event notification. Porting it to mTCP was mostly straightforward in that it only required replacing the socket and libevent calls with the corresponding mTCP API. We modified 44/37 LoC out of 1,703/1,663 lines of server and client code, respectively.

---

8 Some global variables had to be localized to avoid race conditions.

5 Evaluation

We answer three questions in this section:

1. **Handling short TCP transactions**: Does mTCP provide high performance in handling short transactions? In Section 5.1, we show that mTCP outperforms MegaPipe and Linux (w/o SO_REUSEPORT) by 3x and 25x, respectively; mTCP connection establishment alone is 13x and 5x faster than Linux and MegaPipe, respectively.

2. **Correctness**: Does mTCP provide correctness without introducing undesirable side-effects? Section 5.2 shows that mTCP provides fairness and does not introduce long latency.

3. **Application performance**: Does mTCP benefit real applications under realistic workloads? In Section 5.3, we show that mTCP increases the performance of various applications running realistic workloads by 33% to 320%.
**Experiment Setup**: We compare mTCP on Linux 2.6.32 with the TCP stack of the latest Linux kernel (version 3.10.12, with and without SO_REUSEPORT), as well as MegaPipe on Linux 3.1.3. We use a machine with one 8-core CPU (Intel Xeon E5-2690 @ 2.90 GHz), 32 GB RAM, and an Intel 10 GbE NIC as a server, and use up to 5 clients of the same type to saturate the server. While mTCP itself does not depend on the kernel version, the underlying PSIO library currently works on Linux 2.6.32. For Linux, we use ixgbe-3.17.3 as the NIC driver.

5.1 Handling Short TCP Transactions

**Message benchmark**: We first show mTCP's scalability with a benchmark in which a server sends a short message as a response. All servers are multi-threaded with a single listening port. Our workload generates a 64-byte message per connection, unless otherwise specified. The performance result is averaged over a one-minute period in each experiment. Figure 7 shows the performance as a function of the number of CPU cores, the number of messages per connection (MPC), and message size. Figure 7(a) shows that mTCP scales almost linearly with the number of CPU cores. Linux without SO_REUSEPORT ("Linux") shows poor scaling due to the shared accept queue, and Linux with SO_REUSEPORT ("REUSEPORT") scales, but not linearly with the number of cores. At 8 cores, mTCP shows 25x, 5x, and 3x higher performance over Linux, REUSEPORT, and MegaPipe, respectively. Figure 7(b) shows that mTCP's benefit still holds even when persistent connections are used. mTCP scales well as the number of messages per connection (MPC) increases, and it nearly saturates the 10G link from 64 MPC. However, the performance of the other systems almost flattens out well below the link capacity. Even at 32 MPC, mTCP outperforms all others by a significant margin (up to 2.7x), demonstrating mTCP's effectiveness in handling small packets. Finally, Figure 7(c) shows the throughput by varying the message size.
mTCP's performance improvement is more noticeable with small messages, due to its fast processing of small packets. However, both Linux servers fail to saturate the 10 Gbps link for any message size. MegaPipe saturates the link from 4 KiB messages, and mTCP saturates it from 1 KiB messages.

**Connection accept throughput**: Figure 8 compares connection throughputs of mTCP and Linux servers. The server runs in a tight loop that simply accepts and closes new connections. We close the connection by sending a reset (RST) to prevent the connection from lingering in the TIME_WAIT state. To remove the bottleneck from the shared fd space, we add 'Multiprocess', a multi-process version of the REUSEPORT server. mTCP shows 13x, 7.5x, and 5x performance improvements over Linux, REUSEPORT, and Multiprocess, respectively. Among the Linux servers, the multi-process version scales the best, while the other versions show a sudden performance drop at multiple cores. This is due to the contention on the shared accept queue as well as the shared fd space. However, Multiprocess shows limited scaling, due to the lack of batch processing and other inefficiencies in the kernel.

5.2 Fairness and Latency

**Fairness:** To verify the throughput fairness among mTCP connections, we use ab to generate 8K concurrent connections, each downloading a 10 MiB file to saturate a 10 Gbps link. On the server side, we run lighttpd with mTCP and Linux TCP. We calculate Jain's Fairness Index with the (average) transfer rate of each connection; values closer to 1.0 indicate better fairness. We find that Linux and mTCP show 0.973 and 0.999, respectively. mTCP effectively removes the long tail in the response time distribution, whereas Linux often drops SYN packets and enters a long timeout.

**Latency:** Since mTCP relies heavily on batching, one might think it may introduce undesirably long latency. Table 2 shows the latency breakdown when we run ab with 8K concurrent connections against the 64B message server.
We generate 10 million requests in total. The Linux and mTCP versions respectively achieve 45K and 428K transactions per second on average. As shown in the table, mTCP slightly increases the minimum (9 ms vs. 0 ms) and the median (13 ms vs. 3 ms) response times. However, the mean and maximum response times are 8.8x and 54.2x smaller than those of Linux, while handling 9.5x more transactions/sec. In addition, the standard deviation of the response times in mTCP is much smaller, implying that mTCP produces more predictable response times, which is becoming increasingly important for modern datacenter applications [33].

| | Min | Mean | Max | Stddev |
|---|---:|---:|---:|---:|
| **Connect** | | | | |
| Linux | 0 | 36 | 63,164 | 511.6 |
| mTCP | 0 | 1 | 500 | 1.1 |
| **Processing** | | | | |
| Linux | 0 | 87 | 127,323 | 3,217 |
| mTCP | 1 | 13 | 2,323 | 9.7 |
| **Total** | | | | |
| Linux | 0 | 124 | 127,323 | 3,258 |
| mTCP | 9 | 14 | 2,348 | 9.8 |

Table 2: Distribution of response times (ms) for 64B HTTP messages over 10 million requests (8K concurrency).

5.3 Application Performance

We now demonstrate the performance improvement for existing applications under realistic workloads.

**lighttpd and ab:** To measure the performance of lighttpd in a realistic setting, we use the static file workload extracted from SpecWeb2009 and compare the performance of different lighttpd versions ported to use mTCP, MegaPipe, and Linux with and without SO_REUSEPORT.
Figure 9 shows that mTCP improves the throughput of lighttpd by 3.2x, 2.2x, and 1.5x over Linux, REUSEPORT, and MegaPipe, respectively. Even though the workload fits into memory, we find that heavy system calls for VFS operations limit the performance.

We now show the performance of ab. Figure 10 shows the performance of Linux-based and mTCP-based ab as a function of the number of CPU cores (1 to 8), fetching a 64B file over HTTP with 8K concurrent connections. The scalability of Linux-based ab is limited, since it shares the fd space across multiple threads. At the same time, mTCP's event-driven system saves CPU cycles. When testing mTCP with long-lived connections (not shown in the figure), we find that it consumes more CPU cycles than Linux: mTCP shows a CPU utilization of 294%, compared to 80% for Linux-3.10.12, when serving 8,000 concurrent connections, each transferring a 100 MiB file. This is because we did not fully utilize modern NIC features, such as TCP checksum offload, large segmentation offload (LSO), and large receive offload (LRO). However, we believe that mTCP can easily incorporate these features in the future.

**SSLShader:** We benchmark the performance of SSLShader with one NVIDIA GPU (GeForce GTX 580) on our server. We use mTCP-based lighttpd as a server and ab as a client. On a separate machine, we run SSLShader as a reverse proxy to handle HTTPS transactions. SSLShader receives an HTTPS request from ab and decrypts the request. It then fetches the content from lighttpd in plaintext, encrypts the response using SSL, and sends it back to the client. We use 1024-bit RSA, 128-bit AES, and HMAC-SHA1 as the cipher suite, which is widely used in practice. To measure the performance of SSL handshakes, we have ab fetch 1-byte objects through SSLShader while varying the number of concurrent connections.
Figure 12 shows that mTCP improves the performance over the Linux version by 18% to 33%. As the concurrency increases, the benefit of mTCP grows, since mTCP scales better with a large number of concurrent connections. Figure 13 indicates that mTCP also reduces the response times compared to the Linux version. In particular, mTCP reduces the tail of the response time distribution under high concurrency, with a smaller variance, as also shown in Section 5.2.

**WebReplay:** We demonstrate that mTCP improves the performance of a real HTTP traffic replayer. We focus on the server's performance improvement because it performs more interesting work than the client. To fully utilize the server, we use four 10 Gbps ports and connect each port to a client. The workload (HTTP requests) generated by the clients is determined by the log captured at a cellular backhaul link [45]. We replay the log for three minutes at a peak time (at 11 pm on July 7, 2012) during the measurement period. The total number of requests within the timeframe is 2.8 million, with median and average content sizes of 1.7 KB and 40 KB. Table 4 summarizes the workload that we replay. Unfortunately, the traces we replay do not reproduce the original traffic perfectly, since a longer log would be required to effectively simulate idle connections. In fact, the original traffic had as many as 270K concurrent connections, with more than 1 million TCP connections created per minute. To simulate such a load, we run multiple copies of the same log concurrently for this experiment. Table 3 compares the averages of extra delays over the original response times when we replay n copies of the log concurrently with the Linux- and mTCP-based WebReplay servers. We find that the Linux server works fine up to three concurrent copies of the log, but the average extra delay goes beyond 1 second at four copies. In contrast, the mTCP server handles up to seven copies while keeping the average extra delay under 100 ms.
The main cause of the delay inflation in the Linux version is the increased number of concurrent TCP transactions, which creates a bottleneck in the TCP stack.

### 6 Related Work

We briefly discuss previous work related to mTCP.

**System call and I/O batching:** Frequent system calls are often the performance bottleneck in busy servers.

Several OS designs focus on scalability for multicore systems [19, 20]. Barrelfish [19] and fos [44] separate the kernel resources for each core by building an independent system that manages per-core resources. For efficient inter-core communication, they use asynchronous message passing. Corey [20] attempts to address the resource sharing problem on multicore systems by having the application explicitly declare shared and local resources across multiple cores. It enforces the default policy of having private resources for a specific core to minimize unnecessary contention. mTCP borrows the concept of per-core resource management from Barrelfish, but allows efficient sharing between application and mTCP threads with lock-free data structures.

**Microkernels:** The microkernel approach bears similarity to mTCP in that the operating system's services run at user level [23, 30, 38]. Exokernel [23], for example, provides a minimal kernel and low-level interfaces for accessing hardware while providing protection. It exposes low-level hardware access directly to the user level so that applications can perform their own optimizations. This is conceptually similar to mTCP's packet I/O library that directly accesses the NIC. mTCP, however, integrates flow-level and packet-level event batch processing to amortize the context switch overhead, which is often a critical bottleneck for microkernels.

7 Conclusion

mTCP is a high-performance user-level TCP stack designed for multicore systems.
We find that the Linux kernel still does not efficiently use the CPU cycles in processing small packets despite recent improvements, and this severely limits the scalability of handling short TCP connections. mTCP unleashes the TCP stack from the kernel and directly delivers the benefit of high-performance packet I/O to the transport and application layers. The key enabler is transparent and bi-directional batching of packet- and flow-level events, which amortizes the context switch overhead over a batch of events. In addition, the use of lock-free data structures, cache-aware thread placement, and efficient per-core resource management contributes to mTCP's performance. Finally, our evaluation demonstrates that porting existing applications to mTCP is trivial and that mTCP improves the performance of existing applications by up to 320%.

Acknowledgement

We would like to thank our shepherd George Porter and the anonymous reviewers from NSDI 2014 for their valuable comments. We also thank Sangjin Han for providing the MegaPipe source code, and Sunil Pedapudi and Jaeheung Surh for proofreading the final version. This research is supported by the National Research Foundation of Korea (NRF) grants #2012R1A1A1015222 and #2013R1A1A1076024.

References
Diploma Thesis

ruby-root
Extending ROOT's functionality with a Ruby interpreter interface

Elias Athanasopoulos*
elathan@phys.uoa.gr
UA/PHYS/HEP/2-2-2005

*University Of Athens, HEPA Lab

Contents

1 Fundamental Introduction
  1.1 High Energy Physics
  1.2 Neutrino Physics
  1.3 Neutrino Oscillations
2 Data Analysis in HEP
  2.1 Introduction to ROOT
3 The MINOS Experiment
  3.1 MINOS Architecture
  3.2 Aims and Goals
  3.3 MINOS Software
4 Extending ROOT functionality
  4.1 Introduction to Ruby
  4.2 Introduction to ruby-root
  4.3 Installing ruby-root
  4.4 Using ruby-root
5 Understanding ruby-root internals
  5.1 Extending Ruby
  5.2 Understanding ROOT dictionaries
  5.3 mini ruby-root
  5.4 ruby-root compiler
  5.5 Complex issues
6 Dynamic ruby-root
  6.1 Introduction to dynamic ruby-root
  6.2 Dynamic ruby-root internals
  6.3 The ROOT Ruby module
  6.4 Configuration
    6.4.1 Building and installing the Ruby module
    6.4.2 Setting up the environment
    6.4.3 Running ROOT scripts from Ruby
    6.4.4 Invoking the Ruby module from ROOT/CINT interpreter
  6.5 Current status
7 Speed Comparison
  7.1 Ordinary ruby-root
  7.2 Ruby module vs PyROOT
8 Case Study
  8.1 Case Study Description
  8.2 Case Study Implementation
9 Acknowledgements
10 Appendices
A References
B Migrating from C/C++ to ruby-root
  B.1 Constructors
  B.2 Method Calling
  B.3 TApplication
  B.4 C++ Explicit Casts
  B.5 ROOT Collections
  B.6 #to_ary
  B.7 C++ Enumerations
  B.8 C++ Globals
  B.9 C++ References
  B.10 Function Pointers
  B.11 ROOT Trees and TTree#via
  B.12 Floating values and arithmetic
  B.13 Boolean checks
C Scripts
  C.1 Benchmark Scripts
  C.2 Case Study Script

Abstract

ruby-root aims to provide Ruby bindings for the ROOT Object Oriented Framework. ROOT is a very popular software solution for data analysis in the field of High Energy Physics. Using ruby-root you can access the basic functionality ROOT provides via Ruby, a powerful modern scripting language.

1 Fundamental Introduction

1.1 High Energy Physics

High Energy Physics (HEP) is considered the most active field in Experimental Physics. The aim of HEP is to identify the nature of the fundamental forces and particles of our universe. The basic concept of particle physics experiments is the acceleration of particles (protons, electrons, etc.) which then collide with each other. The collision can expose the internal structure of the particles, produce sub-particles and give us a better picture of nature's structure at the fundamental level. The energy of the colliding particles is critical, since higher energy means a stronger collision and thus a more detailed picture of the particles' internal structure. This is the main reason we use the term 'High Energy'.
The final goal of HEP is to construct a theory for the description of all the elementary particles and elementary forces of our world. This theory is also called 'The Standard Model' (see Figure 1). Currently, the Standard Model contains 12 elementary particles and we have knowledge of 4 fundamental interactions. The 12 elementary particles are the 6 quarks, the electron and its neutrino, the muon and its neutrino, and the tau and its neutrino. The latter, the neutrino of the tau particle, was discovered in the DONUT experiment. Last but not least, there is a crucial effort towards the discovery of the Higgs particle, known also as the mass carrier. The discovery of the Higgs particle would give us strong confidence that the Standard Model is correct.

1.2 Neutrino Physics

Neutrinos are quite strange particles, in the sense that the difficulty of observing them gives them some interesting properties. Neutrinos can take part only in weak interactions, such as the well-known $\beta$-decay. Actually, $\beta$-decay was the first nuclear process that guided the scientific community to the discovery of the neutrino. During a $\beta$-decay a proton becomes a neutron (or vice versa) and an electron (or positron) is emitted. In order to have momentum conservation, since the electron's spectrum is continuous, another particle must also be emitted. That is the neutrino (or an antineutrino in the case where a bound neutron becomes a proton). Since $\beta$-decay is a process led by the weak interaction, and since neutrinos can interact only in weak interaction processes, we have a major difficulty observing and measuring the neutrino's properties. For example, there is a crucial debate about the neutrino's mass. Since we have no way to measure a neutrino's mass explicitly, but only implicitly via the properties of the electron emitted in a $\beta$-decay, the only thing we can accomplish is to define some mass limits.
We know that the neutrino's mass is very small, or even zero, but there is no strong theoretical reason for the neutrino's mass to be zero. On the other hand, there is a strong theoretical reason for the photon's mass to be zero.

<table> <thead> <tr> <th>R</th> <th>Experiment</th> </tr> </thead> <tbody> <tr> <td>0.60 +- 0.05</td> <td>Kamiokande (sub-GeV)</td> </tr> <tr> <td>0.57 +- 0.07</td> <td>Kamiokande (multi-GeV)</td> </tr> <tr> <td>0.63 +- 0.03</td> <td>Super-Kamiokande (sub-GeV)</td> </tr> <tr> <td>0.65 +- 0.05</td> <td>Super-Kamiokande (multi-GeV)</td> </tr> </tbody> </table>

Table 1: The ratio R as verified by Kamiokande and Super-Kamiokande[1].

1.3 Neutrino Oscillations

One of the great mysteries in neutrino physics was the anomaly of the atmospheric neutrino flux. Large experiments, such as Kamiokande and Super-Kamiokande, were developed in order to measure the ratio of $\mu$-like to $e$-like events from cosmic rays. In order to verify our theory about the flux of neutrinos from cosmic rays, theoretical Monte-Carlo calculations were performed. So, the goal of the experiments was to verify the quantity given by: $R = (\mu/e)_{\text{data}}/(\mu/e)_{\text{MC}}$. That is, the ratio of the observed events over the theoretical calculations. As was manifestly shown by Kamiokande and Super-Kamiokande (see Table 1), the ratio is smaller than 1. That is, there is an anomaly in the atmospheric neutrino flux and a new concept must be introduced: neutrino oscillations. That is, neutrinos have the property to interchange themselves. Thus, neutrinos of one of the three available flavors (namely $\nu_e$, $\nu_\mu$ and $\nu_\tau$) can oscillate (change) into another flavor. Now, if we adopt the neutrino oscillations concept, there is a strong argument that neutrinos can't be massless, because massless particles cannot oscillate. Put another way, observation of oscillation implies that the masses of the neutrinos involved cannot be equal to one another.
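As a toy illustration of this double ratio (the event counts below are invented purely for illustration, not experimental data), R can be computed in a few lines of Ruby:

```ruby
# Compute R = (mu/e)_data / (mu/e)_MC from event counts.
# All numbers here are made up for illustration only.
def double_ratio(mu_data, e_data, mu_mc, e_mc)
  (mu_data.to_f / e_data) / (mu_mc.to_f / e_mc)
end

# A deficit of mu-like events in "data" relative to the Monte-Carlo
# expectation drives R below 1, hinting at oscillations.
r = double_ratio(120, 100, 200, 100)
puts format('R = %.2f', r)  # prints "R = 0.60"
```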
Since they cannot be equal to one another, they cannot both be zero. In fact, it is quite likely that if any neutrinos have non-zero mass, all of them do.

2 Data Analysis in HEP

2.1 Introduction to ROOT

ROOT\(^1\) is a collection of libraries, implemented in C++, which aims to provide a complete solution for various scientific tasks such as data analysis for large experiments in the field of High Energy Physics. ROOT is heavily used in the MINOS\(^2\) experiment, as well as in various projects at FNAL\(^3\) and at SLAC\(^4\). ROOT delivers more than 700 different C++ Classes, ideal for Linear Algebra, Function Plotting and Fitting, Histogram Presentation and many more. In addition, ROOT provides Classes for system-oriented tasks, such as GUI widgets, Database Connectivity, Networking facilities and others. All the above make ROOT a complete framework for application building in C++. However, programming in C++ is not always trivial. Thus, ROOT comes with a tightly integrated C/C++ Interpreter, CINT\(^5\). CINT can interpret, or compile if speed is an issue, scripts written in C/C++. CINT can evaluate C/C++ code at run-time and resolve Class information using a Class dictionary. The latter is a special file which describes a Class' behavior. ROOT comes with special tools for generating Class dictionaries at will. Thus, every Class which has the appropriate generated dictionary is accessible from any C/C++ script executed via CINT. In addition, the dictionary technology enhances the ROOT framework with full RTTI (Run-Time Type Information) support. There are ROOT Classes (TClass and TMethod, to name a few) which provide the user with information about a Class' behavior, inheritance tree, available member functions, etc. This feature is vital in the development of ruby-root.
\(^1\)http://root.cern.ch
\(^2\)http://www-numi.fnal.gov/offline_software/srt_public_context/WebDocs/Companion/index.html
\(^3\)http://www-cpd.fnal.gov/CPD/root/
\(^5\)http://root.cern.ch/root/Cint.html

3 The MINOS Experiment

MINOS (Main Injector Neutrino Oscillation Search) is a first-generation, long-baseline neutrino oscillation experiment. MINOS is designed to make a precise study of the "atmospheric" neutrino oscillations observed recently by underground experiments. That is, the main purpose of the MINOS experiment is to identify neutrino oscillations and, if so, to measure the oscillation parameters with great precision.

3.1 MINOS Architecture

MINOS consists of two detectors, namely the Near and the Far detector, and it uses the NuMI neutrino beam. The two detectors are located at distances of 1 km and 735 km from the neutrino source, respectively. The Near detector weighs 980 tons, while the Far one weighs 5400 tons[2]. The MINOS experiment will use neutrinos produced in the NuMI beam line by 120 GeV protons. The beam is generated by the Main Injector at Fermilab in a fast extraction mode (10 $\mu$s). The proton beam is aimed at the Soudan mine in northern Minnesota, where the Far detector is located. Because of the earth's curvature, the parent hadron beam has to be pointed at an angle of 57 mrad. The resulting hadron beam is focused by specially designed focusing elements. It travels via a two-magnetic-horn system followed by a 700 m long decay pipe and muon absorber to finally produce the $\nu_\mu$ beam. In more detail, the hadron beam is focused and transported through an evacuated decay pipe, 1 m in radius and 675 m long, before striking a secondary hadron absorber downstream. The total decay length is 725 m. The dolomite between the hadron absorber and the Near detector provides sufficient shielding to range out all the muons produced by $\pi$ and $K$ in the beam pipe.
As has already been stated, the MINOS experiment utilizes two detectors with the basic structure of a segmented iron-scintillator calorimeter and magnetized muon spectrometer. The use of two detectors is dictated by the need to measure neutrino disappearance and to control systematics; the primary function of the Near detector is to serve as a reference for the main MINOS detector, the Far one. The fact that both the Near and Far detectors have been constructed as similarly as possible is vital. That is, MINOS tries to measure the neutrinos' behavior before (Near) and after (Far) they have had the chance to oscillate.

Table 2: MINOS experimental parameters with the wide-band (PH2) beam[1].

<table> <thead> <tr> <th>Parameter</th> <th>Value</th> </tr> </thead> <tbody> <tr> <td>Near detector mass</td> <td>0.98 (metric) kt total, 0.1 kt fiducial</td> </tr> <tr> <td>Far detector mass (2 supermodules)</td> <td>5.4 (metric) kt total, 3.3 kt fiducial</td> </tr> <tr> <td>Steel planes (far detector)</td> <td>8-m wide, 2.54-cm thick octagons</td> </tr> <tr> <td>Magnetic field (far detector)</td> <td>Toroidal, 1.5 T at 2 m radius</td> </tr> <tr> <td>Active detector planes</td> <td>Extruded polystyrene scintillator strips</td> </tr> <tr> <td>Active detector strips</td> <td>4.1-cm wide, 1-cm thick, ~8-m long</td> </tr> <tr> <td>Near detector distance from decay pipe</td> <td>290 m</td> </tr> <tr> <td>Far detector distance from decay pipe</td> <td>730 km</td> </tr> <tr> <td>Cosmic ray rates</td> <td>270 Hz in near det., 1 Hz in far det.</td> </tr> <tr> <td>Neutrino energy range (3 configurations)</td> <td>1 to 25 GeV</td> </tr> <tr> <td>Detector energy scale calibration</td> <td>5% absolute, 2% near-far</td> </tr> <tr> <td>Detector EM energy scale calibration</td> <td>23%/&radic;E (&lt;5% constant term)</td> </tr> <tr> <td>Detector hadron energy resolution</td> <td>60%/&radic;E (&lt;7% constant term)</td> </tr> <tr> <td>Detector muon energy resolution</td> <td>&lt;12% (from curvature or range)</td> </tr> <tr> <td>NC-CC event separation</td> <td>Efficiency &gt;90%, correctable to 99.5%</td> </tr> <tr> <td>Electron/&pi; separation</td> <td>Hadron rejection ~10&sup3; for &epsilon;<sub>e</sub> ~20%</td> </tr> <tr> <td>Far det. &nu; event rate (high-energy beam)</td> <td>3000 &nu;<sub>&mu;</sub> CC events/kt/yr (no oscillations)</td> </tr> <tr> <td>Near det. &nu; event rate (high-energy beam)</td> <td>20 events/spill in target region</td> </tr> <tr> <td>Near-far relative rate uncertainty</td> <td>20%</td> </tr> </tbody> </table>

The results of the two detectors can then be compared to see if oscillations have occurred. Having as a reference a second, technically almost identical, detector (the Near one), the results of the main Far detector can be compared not with Monte Carlo predictions but with real data, which is one of the great advantages of MINOS. Table 2 summarizes some technical properties of the MINOS architecture.

3.2 Aims and Goals

Briefly, the physics goals of MINOS[1] are: a) If Nature has chosen not to have neutrino oscillations in the parameter space accessible to MINOS, we want to be able to demonstrate this fact convincingly over as large an area in oscillation parameter space as possible. b) If oscillations do exist in the space accessible to MINOS, we want to convincingly demonstrate their existence, measure the oscillation parameters with high precision, and determine the oscillation modes. Specifically, we want to ensure that we can cover the full region of parameter space suggested by the Super-Kamiokande experiment.

3.3 MINOS Software

MINOS is a demanding modern experiment in High Energy Physics and a rich framework for data analysis must be used in the construction of the MINOS applications. Thus, the MINOS software group decided to build the whole software suite for MINOS data analysis using the ROOT framework.
Although the great majority of the MINOS source code which has already been implemented, and will probably be extended in the near future, is written in C++, ROOT scripts in a pure scripting language, such as Ruby or Python, might be very handy, especially for every-day jobs. Tasks such as the collection of data from remote hosts or data analysis test-cases can easily be developed using Ruby or Python. Since ROOT's support for Ruby and Python is native, scripts written in Ruby or Python can interact with major C++ applications and exchange data. Also, the ability to write plugins in Ruby or Python for the central MINOS Software Unit, if a specific plugin technology is developed, should be examined.

4 Extending ROOT functionality

4.1 Introduction to Ruby

Ruby\textsuperscript{6} is a modern scripting language with over 10 years of development. Ruby combines features from Python\textsuperscript{7}, Perl\textsuperscript{8} and Smalltalk\textsuperscript{9}, as well as modern ideas from the field of Programming Languages. Ruby delivers a clean syntax and a fully dynamic and Object Oriented nature. Using Ruby one can easily write few-line scripts that can cope with complex system tasks, such as opening files and manipulating their contents, matching patterns with regular expressions, connecting to services using sockets and many more. Ruby is fully dynamically typed, so the user does not have to deal with complex definitions and prototyping. Its syntax is quite human-oriented, so that users with minimal Computer Science experience can learn its syntax and grammar very easily. In addition, Ruby embeds some powerful pre-built structures, like Arrays, Hashes and Strings, making programming quite productive.

\textsuperscript{6}http://www.ruby-lang.org
\textsuperscript{7}http://www.python.org
\textsuperscript{8}http://www.perl.org
\textsuperscript{9}http://www.smalltalk.org
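As a taste of the few-line scripts mentioned above, the following plain-Ruby snippet manipulates text and matches patterns with a regular expression, using the built-in String and Array structures. The log format is invented here purely for illustration:

```ruby
# Scan log-style text and collect the run numbers of failed runs.
log = <<~LOG
  run=101 status=ok
  run=102 status=failed
  run=103 status=ok
  run=104 status=failed
LOG

failed = log.scan(/run=(\d+) status=failed/).flatten
puts "failed runs: #{failed.join(', ')}"  # prints "failed runs: 102, 104"
```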
The idea of extending Ruby and exporting ROOT's basic functionality to Ruby looked ideal, and thus the idea of ruby-root was born.

4.2 Introduction to ruby-root

ruby-root\(^{10}\) is a compact solution for bridging Ruby and ROOT's basic functionality. ruby-root offers the user the ability to write native Ruby scripts that can take advantage of the components that ROOT's libraries provide. Not all of ROOT's features are available to Ruby via ruby-root, but the most vital ones that a scientist depends on are. Thus, most of the computer-related parts of ROOT are not touched, since Ruby provides a rich set of similar functions, available in a native Ruby fashion. On the other hand, as will be shown in the next sections, fundamental ROOT constructs, such as Classes dealing with Collections of Objects and Strings, to name a few, are converted internally into Ruby constructs, which can then be manipulated by the user much more easily.

4.3 Installing ruby-root

In order to use ruby-root you must have Ruby and ROOT installed in your system. Assuming you have downloaded ruby-root and uncompressed it in a place of your choice, the installation process is the following:

```
% ruby ./extconf.rb
% make
# make install
```

\(^{10}\)http://null.edunet.uoa.gr/~elathan/rr/

Keep in mind that the last step needs root privileges. Since ruby-root comes with already wrapped ROOT classes of a specific ROOT version, you may have conflicts and failures in the compilation process. This means that you must rebuild ruby-root. The rebuilding process is the following:

```
% ./rebuild
% make
# make install
```

Rebuilding requires indent(1), a GNU tool to format the autogenerated C++ code, to be installed in your system.

4.4 Using ruby-root

Once ruby-root is correctly installed in your system, you can start developing scripts in Ruby which utilize ROOT's functionality.
The following is a classic ruby-root 'Hello World' program:

```ruby
require 'root'

tapp = TApplication.new "rr: Hello ROOT!"

tc = TCanvas.new "tc", "Hello", 168, 8, 699, 499
pt = TPaveText.new 0.1, 0.1, 0.5, 0.5, "blNDC"
pt.AddText 0, 0, "Hello ROOT from Ruby! :-)", "blNDC"
pt.Draw

tc.Modified
tc.cd

tapp.Run
```

Assuming the source file is named 'hello.rb', you can run it using the command:

```
% ruby hello.rb
```

5 Understanding ruby-root internals

5.1 Extending Ruby

Ruby is written in C. A rich C API (Application Programming Interface) exists in order to extend Ruby with support for external functionality. By using the Ruby C API, the creation of a library with C code, known as a 'Ruby extension', is straightforward. Whenever the user wants to use the new extension, he/she must use the 'require' command to load the library in a Ruby script. After the 'require' command, the script is fully aware of the new functionality. Actually, this is the reason that our ruby-root script in the previous section had as its first line:

```ruby
require 'root'
```

This line instructs Ruby to load the ruby-root extension, so as to use ROOT's functionality via Ruby. Now, in order to extend Ruby with ROOT functionality, each ROOT class must be wrapped in Ruby using the Ruby C API. That is, for every C++ member function of a ROOT class, a C function must be created in order to call this member function in a fashion Ruby supports. Its main purpose, as we will see later in more detail, is to bridge a Ruby method with a C++ ROOT method. Also, some extra C code must be implemented for the creation of Ruby classes which encapsulate the ROOT classes' behavior. The following example shows the wrapped TCanvas::cd() member function. The arguments are translated to the equivalent C ones, and TCanvas::cd() is called.
```c
static VALUE
cTCanvas_cd(int argc, VALUE argv[], VALUE self)
{
    /* void TCanvas::cd(Int_t subpadnumber=0) */
    VALUE arg1;
    rb_scan_args(argc, argv, "01", &arg1);
    RRCALL(self, TCanvas)->cd(NIL_P(arg1) ? (Int_t) 0 : NUM2INT(arg1));
    return self;
}
```

Let's explain the above snippet in more detail. The 'VALUE' construct belongs to the Ruby C API. Almost everything in Ruby is an object, and in order to manipulate it in C you have to assign a VALUE to it. That is, the only thing Ruby can understand from a C perspective is VALUEs (which in reality are pointers to more complicated structures). Now, in order to exchange data between the Ruby side and the ROOT side, all of ROOT's data must be converted to VALUEs and vice versa, depending on the call phase. In our example, the above snippet will be executed whenever Ruby tries to execute the code below (assuming that 'c' is a TCanvas instance):

```ruby
c.cd(2)
```

That is, on the Ruby side the '2' parameter is a VALUE, which means that it must be converted to what the actual ROOT method expects; in our case, to a C integer. So, using `rb_scan_args()`, which is another Ruby C API function, we can map the input arguments to VALUEs (in our case there is only one argument) and then use some handy Ruby macros to convert the VALUEs to the right C counterparts. In the above snippet, `NUM2INT()` is used to convert the VALUE '2' to a C integer '2'. RRCALL() is a ruby-root macro defined in rrcommon.h:

```c
#define RRCALL(obj, type) \
    type *v; \
    Data_Get_Struct(rb_iv_get (obj, "__rr__"), type, v); ((type *)(v))
```

This macro, along with some other similar ones (RRCALL2(), RRMODCALL(), RRMODCALL2(), etc.), is used to make the C code nicer, since making calls between ROOT and Ruby requires some heavy C casting. Ending this technical discussion, we note that NIL_P() is another Ruby macro, which checks if the input VALUE is nil. There is no doubt that wrapping each ROOT class member function using the Ruby C API is a very difficult and demanding process.
That is the main reason we tried to describe in detail the wrapping of one ROOT method, which actually belongs to the family of ROOT methods that are very easy to wrap. Other methods, especially the ones that can be overloaded on the ROOT side, are extremely difficult to wrap. Also, a small change in the ROOT interfaces requires changes in the wrapped interfaces. The whole process is difficult for a human to maintain, so there is a need for a machine interface for the automatic generation of the wrapper code. ruby-root uses the ROOT dictionary technology to cope with the above task.

5.2 Understanding ROOT dictionaries

ROOT uses CINT as a user-friendly interface to develop C/C++ scripts with ROOT functionality. CINT stands for a C/C++ interpreter. A C/C++ interpreter may be slower than a C/C++ compiler, but it is easier to use, since the execution phase of the user's code is more interactive. CINT needs to have all the type information of the C/C++ source at run-time. That is, when a user executes a member function via CINT, information such as the class that the member function belongs to, the arguments that it takes, its class scope (private, public, protected) and other vital information must be known at the stage of execution; at run-time. In order for CINT to cope with the above, it generates, for each C++ class the user wants available via CINT, a file called a dictionary. This file describes the class' behavior. CINT contains a full-featured API in order to export this information to third-party programmers. Thus, anyone can have access to the dictionaries via CINT, or even via ROOT (a higher-level API), at run-time. ruby-root uses ROOT's dictionaries in order to create the wrapper code.

5.3 mini ruby-root

A utility that uses ROOT's dictionaries must be constructed in order to produce the wrapper code automatically.
Since Ruby is easier to use than C/C++, the idea of exporting the ROOT dictionary API to Ruby by hand, and then developing a Ruby script that compiles ROOT's prototypes to the Ruby C API, is very appealing. mini ruby-root stands for a small ruby-root distribution that embeds all the necessary wrapper code for the ROOT classes that export the dictionaries' information. That is, using mini ruby-root, a Ruby script that has access to the ROOT dictionaries can be developed.

5.4 ruby-root compiler

rrc stands for the ruby-root compiler. rrc is a Ruby program that aims to compile ROOT classes to the Ruby C API. As can easily be understood, rrc is the most vital part of the ruby-root distribution. It tries to cope with all the C++ complexity and transform, in a universal and generic way, a number of C++ classes to the Ruby C API. rrc can cope with Multiple Inheritance, Overloaded member functions, Static member functions and a number of C++ to Ruby type conversions. The rrc program is invoked during the building of ruby-root, and mini ruby-root must have been built beforehand. The safest way to invoke rrc is via the 'rebuild' script, which is part of the ruby-root distribution.

5.5 Complex issues

Although a complete solution for the automatic wrapper code generation has been developed, there are still parts of ruby-root that require hardcoding. We refer to these cases as 'complex issues', since most of them deal with tasks that cannot be identified by a machine program; a human's interpretation is required. As a short example, consider the following case:

```c
Double_t *GetX() const {return fX;}
```

The above is the prototype of the GetX() method, which belongs to the TGraph class. This prototype cannot be wrapped by any machine program, since GetX() returns an array of doubles, but nobody, except the user, knows its size in advance. That is, the prototype does not describe exactly the GetX() behavior, but rather a summary of how someone expects GetX() to work.
In order to resolve the exact usage of GetX(), the whole TGraph class must be examined. A careful TGraph examination unveils:

```c
Int_t GetN() const {return fNpoints;}
```

Apparently, GetN() will return the size of the array that GetX() returns, but this can only be resolved by a human that understands how TGraph works, and not by a machine program that processes the prototypes. There is no way for a program to conclude that the size of the array GetX() returns can be computed by calling GetN(). Methods like GetX() must be hardcoded:

```c
static VALUE
cTGraph_GetX (VALUE self)
{
    VALUE arr = rb_ary_new ();
    double *x;

    RRCALL2(self, TGraph, x)->GetX ();
    for (int i = 0; i < v->GetN(); i++)
        rb_ary_push (arr, rb_float_new (x[i]));

    return arr;
}
```

All the hardcoded methods are located in the tools/rrhardcode.rb file of the ruby-root distribution.

6 Dynamic ruby-root

6.1 Introduction to dynamic ruby-root

Although ruby-root embeds a complete solution to generate Ruby C API compliant code for the ROOT classes, there is still the problem that someone must collect all the ROOT classes to be wrapped and use rrc to generate all the needed code. The idea of wrapping all the ROOT classes sounds challenging, but rrc is not an elegant solution for it. The task of compiling all the ROOT classes using rrc is very complex and the produced output is huge in size. On the other hand, it is almost impossible for someone to utilize all the ROOT classes in a single script or application. That is, an elegant solution that will give the user access to every ROOT class via Ruby must be developed. The approach taken to solve the problem is similar to the one PyROOT\(^\text{11}\) follows. There is no wrapper code generation, but a minimal interface that tries to resolve each ROOT method at the run-time of a Ruby script.

\(^\text{11}\)http://wlav.home.cern.ch/wlav/scripting/
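What this hardcoded wrapper achieves, pairing a separately obtained length (the role of GetN) with a raw C array of doubles (the role of GetX) to build a Ruby Array, can be imitated in plain Ruby with the standard Fiddle library. The hand-allocated buffer below merely stands in for TGraph's internal fX array; no actual ROOT code is involved:

```ruby
require 'fiddle'

# A raw C buffer of doubles, standing in for what GetX() returns.
points = [1.5, 2.5, 3.5]
buf = Fiddle::Pointer.malloc(points.size * Fiddle::SIZEOF_DOUBLE)
buf[0, buf.size] = points.pack('d*')

# The wrapper's job: use the known length (as GetN() provides) to decide
# how many doubles to pull out of the raw pointer into a Ruby Array.
n = points.size
arr = buf[0, n * Fiddle::SIZEOF_DOUBLE].unpack('d*')
puts arr.inspect  # prints "[1.5, 2.5, 3.5]"
```

Without the length, the loop in cTGraph_GetX() would have no stopping point, which is exactly why no machine program can generate this wrapper from the prototype alone.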
This means that all the ROOT/C++ to Ruby conversion, and vice versa, happens at run-time, and there is no wrapper code generation at compile-time. This approach has the advantage that every Ruby script can be aware of all ROOT classes on demand. That is, whenever a user tries to construct a new instance of a ROOT class and call a method, dynamic ruby-root tries to resolve the requested method and call it using the CINT API. Due to the fact that everything happens at run-time, dynamic ruby-root is slower than ruby-root. Also, the complex issues of ruby-root cannot be handled easily in the dynamic version.

6.2 Dynamic ruby-root internals

In order to have the ability of bridging Ruby and ROOT at run-time, two major issues must be solved. The first one is that Ruby must know in advance when a ROOT class is instantiated or a ROOT method is called. That is, on the Ruby side of the extension, all ROOT calls must be added to Ruby at run-time. For example, when the user creates a new TCanvas object, Ruby must create the appropriate Ruby TCanvas class, which encapsulates the actual ROOT TCanvas class. As we said, there is no wrapper code generation at compile-time, so Ruby is completely unaware of ROOT. All the magic happens at run-time. The second issue concerns the ROOT side. Whenever the user tries to create a TCanvas object, Ruby must create the Ruby TCanvas class and call the ROOT TCanvas constructor. Again, at run-time, we have to find a way to resolve the actual ROOT TCanvas constructor, call it, and give the result back to Ruby as a Ruby object. The first issue is solved using Ruby's Object#const_missing\textsuperscript{12} and Object#method_missing methods. Note that in Ruby 'Object' is the fundamental Object class (i.e. everything inherits from Object).
In order to solve the problem, a DRRAbstractClass is defined which inherits from Object, and DRRAbstractClass#const_missing and DRRAbstractClass#method_missing are used to resolve any ROOT call at run-time. So, whenever the user tries to create a new TCanvas, the const_missing method of DRRAbstractClass is called, and we are in the phase of trying to resolve whether TCanvas is an actual ROOT class.

\textsuperscript{12}It is common in the Ruby world to refer to methods using the '#' character, exactly as we do using '::' in C++. That is, Foo#bar can be seen as the C++ idiom Foo::bar().

Finding out whether TCanvas is an actual ROOT class is part of the second major issue: we need the ability to resolve ROOT code at run-time. In order to accomplish this task we use CINT's API. That is, inside the `const_missing` method we 'ask' CINT if a TCanvas dictionary is available. If so, we call the TCanvas constructor (again using the CINT API) and return the new pointer to the Ruby side as a Ruby object. In a similar fashion, the `method_missing` method is called whenever the user tries to call a method of a ROOT class. That is, after the user has created his/her Ruby TCanvas and tried to call, for example, TCanvas#SetTitle, `method_missing` will take control. It will 'ask' ROOT, via the CINT API, if the actual ROOT TCanvas class has a member function called SetTitle() and, if so, it will try to execute it and give the results back to Ruby. In order to speed things up, since a lot of work must be done in `const_missing` and `method_missing` of the DRRAbstractClass for every call, dynamic ruby-root maintains an internal cache of ROOT calls. That is, if the user asks for a ROOT method, the resolving will be done once, at least on the Ruby side. Subsequent calls will be served from the internal cache. An exception to this are overloaded methods. Overloading in dynamic ruby-root is done by inspecting the input Ruby arguments and by constructing an equivalent C prototype.
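The resolution-and-caching flow described above can be sketched in a few lines of plain Ruby. This is a simplified, hypothetical model: where the real DRRAbstractClass queries the CINT API for dictionaries and member functions, a plain Ruby Hash stands in for that lookup, and DRRProxySketch is an illustrative name, not a class from the ruby-root sources.

```ruby
class DRRProxySketch
  # A Hash stands in for CINT's dictionary; the real dynamic ruby-root
  # queries the CINT API here instead.
  FAKE_DICTIONARY = {
    "SetTitle" => ->(title) { "title set to #{title}" }
  }

  def initialize
    @cache = {}  # each "ROOT" call is resolved once, then reused
  end

  def method_missing(name, *args)
    impl = @cache[name.to_s] ||= resolve(name.to_s)
    return super if impl.nil?  # not a known "ROOT" method
    impl.call(*args)
  end

  def respond_to_missing?(name, include_private = false)
    FAKE_DICTIONARY.key?(name.to_s) || super
  end

  private

  # Stand-in for asking CINT whether a dictionary entry exists.
  def resolve(name)
    FAKE_DICTIONARY[name]
  end
end

canvas = DRRProxySketch.new
puts canvas.SetTitle("test")  # prints "title set to test"
```

The essential mechanics are the same as in dynamic ruby-root: an unresolved call lands in method_missing, is resolved once, and subsequent calls are served from the cache.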
If calls to a specific method are made with different prototypes, the cache mechanism will not be used; that is, overloaded methods are treated just like distinct methods. It is important to note that Ruby's ability to create classes and methods at run-time is vital to the implementation of dynamic ruby-root.

### 6.3 The ROOT Ruby module

Dynamic ruby-root has been adopted officially by the ROOT team as the official Ruby interface of ROOT. This means that the latest versions of ROOT include dynamic ruby-root by default in the distribution. Inside ROOT, dynamic ruby-root is called the Ruby module. So, whenever we write 'ROOT Ruby module' we actually refer to dynamic ruby-root. In addition to dynamic ruby-root, the Ruby module contains a TRuby class that gives access to Ruby code via ROOT's command line interface. That is, when a user uses the ROOT command line interface, he/she can execute C++ code or Ruby code via the TRuby interface.

### 6.4 Configuration

Although ROOT has adopted dynamic ruby-root in the official distribution, the Ruby module is not activated by default when you build ROOT from source. Below, we describe the activation process\textsuperscript{13}.

#### 6.4.1 Building and installing the Ruby module

The Ruby extension module is not built by default when building ROOT from sources. The user should follow the standard installation instructions and enable the build of the Ruby module. Ruby version $\geq$ 1.8 is required.

```
./configure <arch> --enable-ruby \
    [--with-ruby-incdir=<dir>] \
    [--with-ruby-libdir=<dir>]
gmake
```

If you do not specify the inc and lib directories, configure will use Ruby to grab the directories where Ruby's headers and library are located. A library called libRuby.so [libRuby.dll] will be created in $ROOTSYS/lib [$ROOTSYS/bin].

#### 6.4.2 Setting up the environment

To work with the Ruby module, LD_LIBRARY_PATH [PATH] and RUBYLIB need to be set in addition to the standard ROOTSYS.
For Unix platforms:

```
export LD_LIBRARY_PATH=$ROOTSYS/lib:$LD_LIBRARY_PATH
export RUBYLIB=$ROOTSYS/lib:$RUBYLIB
```

For Windows:

```
set PATH=%ROOTSYS%/bin;%PATH%
set RUBYLIB=%ROOTSYS%/bin;%RUBYLIB%
```

\textsuperscript{13}You can always find these instructions on-line at the official ROOT site: http://root.cern.ch/root/HowtoRuby.html

#### 6.4.3 Running ROOT scripts from Ruby

The user should make sure that the ruby command is the one of the installation that has been used to build the Ruby extension module. If the RUBYLIB environment variable is set correctly, the user can execute a Ruby script with ROOT functionality in the following way:

```
ruby -rlibRuby foo.rb
```

Another way is to start the Ruby script with the Ruby require command:

```
require 'libRuby'
```

An example is as follows:

```
require 'libRuby'

gROOT.Reset
c1 = TCanvas.new('c1', 'Example with Formula', 200, 10, 700, 500)
#
# Create a one dimensional function and draw it
#
fun1 = TF1.new('fun1', 'abs(sin(x)/x)', 0, 10)
c1.SetGridx
c1.SetGridy
fun1.Draw
c1.Update
```

The user can find a number of examples in $ROOTSYS/tutorials. To run them you need to execute the commands:

```
cd $ROOTSYS/tutorials
ruby demo.rb
```

#### 6.4.4 Invoking the Ruby module from the ROOT/CINT interpreter

A ROOT user can run any Ruby command and eventually run IRB, the Interactive Ruby Shell. The commands to execute are:

```bash
root [0] gSystem->Load("libRuby");
root [1] TRuby::Exec("require '/usr/local/lib/root/libRuby'");
root [2] TRuby::Exec("c1 = TBrowser.new");
root [5] TCanvas *c2 = new TCanvas("ruby test", "test", 10, 10, 100, 100);
root [6] TRuby::Bind(c2, "$c");
root [7] TRuby::Eval("puts $c.GetTitle");
test
root [8] TRuby::Prompt();
```

```
irb(main):001:0> print 1
1=> nil
irb(main):002:0>
```

Notice that whenever you bind a ROOT object on the Ruby side, you need to use a global Ruby variable, that is, a variable with a leading "$".
### 6.5 Current status

Currently, the Ruby module has been tested on the Linux platform using GCC. The whole development of ruby-root, dynamic ruby-root and the Ruby module has been done using Linux/GCC, and further development will be on this platform. Thanks to Axel Naumann\textsuperscript{14}, the Ruby module can be built under cygwin on the Microsoft Windows platform.

## 7 Speed Comparison

### 7.1 Ordinary ruby-root

In this section, we discuss the execution speed of a Ruby script versus CINT (interpreted C code) and compiled C code. The Ruby script is executed using static ruby-root, which means that the wrapper code has been created at compile-time and not at run-time. The benchmark script can be found in Appendix C (stress16.rb). The results are shown in Table 3.

\textsuperscript{14}axel@fnal.gov

<table> <thead> <tr> <th>Real Time (secs)</th> <th>CPU Time (secs)</th> <th>Language</th> </tr> </thead> <tbody> <tr> <td>48.57</td> <td>47.10</td> <td>Ruby</td> </tr> <tr> <td>27.21</td> <td>26.28</td> <td>C Interpreted</td> </tr> <tr> <td>0.19</td> <td>0.19</td> <td>C Compiled</td> </tr> </tbody> </table>

Table 3: Speed comparison for static ruby-root.

As expected, Ruby is slower than CINT and, of course, cannot even be compared with compiled C code. This is a normal result: Ruby is an easy-to-use scripting language, and in order to save time for the developer, it must spend additional time doing the actual work. On the other hand, the script used for the benchmark is unlikely to be a realistic one, since all it does is loop over heavy calculations. Although scientific jobs often require complex calculations, it is unlikely that a scientist will perform such a task in daily work. Last but not least, execution time is not the only thing we must measure. Deployment time is also an important factor, especially nowadays: sometimes a machine costs less than a developer's time.
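As a side note, readers who want to take similar measurements for their own Ruby scripts can obtain real (wall-clock) and CPU times from plain Ruby, without ROOT's TBenchmark. The sketch below uses only the standard Time and Process.times facilities and is an illustration, not the benchmark that produced Table 3.

```ruby
# Roughly mirror the "Real Time" / "CPU Time" columns of the tables
# using only the Ruby standard library.
def benchmark_sketch
  t0_real = Time.now
  t0_cpu  = Process.times
  yield
  t1_real = Time.now
  t1_cpu  = Process.times
  real = t1_real - t0_real
  cpu  = (t1_cpu.utime - t0_cpu.utime) + (t1_cpu.stime - t0_cpu.stime)
  [real, cpu]
end

real, cpu = benchmark_sketch do
  s = 0.0
  100_000.times { |i| s += Math.sin(i) }  # stand-in for heavy calculations
end
printf("Real Time: %.2f secs, CPU Time: %.2f secs\n", real, cpu)
```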
### 7.2 Ruby module vs PyROOT

In this section we show the performance of the Ruby module compared to the Python module (PyROOT).

<table> <thead> <tr> <th>Real Time (secs)</th> <th>CPU Time (secs)</th> <th>Language</th> </tr> </thead> <tbody> <tr> <td>3.03</td> <td>2.65</td> <td>Ruby</td> </tr> <tr> <td>2.18</td> <td>1.85</td> <td>Python</td> </tr> </tbody> </table>

Table 4: Speed comparison between the ROOT Ruby module and PyROOT.

Obviously, PyROOT is faster than the Ruby module. The real reason for this cannot be identified easily. First of all, there is no official information on speed comparisons between Python and Ruby themselves. Even if Python were inherently faster than Ruby, or vice versa, there is no official information about the performance of their C APIs. For example, Ruby might be faster than Python, but its C API might not be as rich and optimized as the Python one. On the other hand, PyROOT may simply be a better implementation of a ROOT interpreter interface than the Ruby module. Although a technique for caching calls inside the Ruby module has been developed, a lot of additional research must be done in order to identify the spots that lack performance and cause speed bottlenecks. However, the fact that there is not a huge speed difference (such as between ruby-root and CINT, for example) is very promising.

## 8 Case Study

In this section, we develop a small Ruby script in order to see the practical use of the Ruby interpreter in ROOT. The script can be run using one of the latest ROOT distributions, which contain the Ruby module. The equivalent C++ script will not be shown; however, we encourage anyone to try to create a C++ version of the script we present. We believe it is quite hard to accomplish the whole task, especially in only a few lines of code.

### 8.1 Case Study Description

Consider that you are part of a worldwide collaboration\textsuperscript{15}.
Now, a group in the collaboration has been assigned the task of collecting data from a specific source and then sharing the results with the other groups of the collaboration. This can be done by collecting the data and then sending the other groups an e-mail with the results. A better way is for the group to place the collected data on a public Web server, so that the others can visit a Web page every day, download the data, and analyze it. The whole process can be optimized by using cron jobs, so that the Web page is updated every day by the 'collectors' group, or the data is downloaded locally to the workstations used by the collaboration. Although it is easy to create a cron job on a system in order to have a Web page updated every so often, it is quite tedious to force the whole collaboration to create their own cron jobs for a single download. So, in our case study we present a way of optimizing the communication of our theoretical collaboration. We substitute the 'collectors' group with a script that produces 100 random numbers every time someone executes it by visiting a URL. Our task is to develop a Ruby script that will transparently fetch the data (the random numbers), fill a histogram with them, and perform a Gaussian fit. The URL is not hypothetical, it is a real one, as will be shown in the implementation. It is a PHP script located on a public Web server on the net, which produces 100 random numbers between -1 and 1.

\textsuperscript{15}This is a very real-life situation nowadays, with the explosion of the Internet and modern communications.
A sample of the data (100 random numbers between -1 and 1) is shown below:

```
0.846 -0.645 -0.585 0.033 -0.337 0.474 0.631 -0.184 0.203 -0.281 <br>
-0.004 0.94 -0.437 0.084 0.104 -0.285 0.963 -0.903 0.51 -0.711 <br>
0.26 -0.583 -0.151 -0.057 -0.968 -0.492 0.562 0.434 0.232 -0.456 <br>
-0.36 0.078 -0.1 0.055 -0.89 0.564 -0.472 0.742 -0.62 0.732 <br>
-0.539 0.376 0.672 0.024 -0.54 -0.225 0.74 0.74 -0.578 -0.128 0.249 <br>
-0.288 -0.868 0.667 0.562 0.076 0.699 -0.931 -0.363 0.132 0.301 <br>
0.181 0.773 -0.621 -0.919 -0.173 -0.511 0.645 0.356 -0.769 -0.976 <br>
0.087 -0.308 0.401 -0.242 0.716 0.861 0.534 0.455 -0.717 -0.594 <br>
-0.296 -0.004 -0.462 -0.63 -0.443 0.614 -0.931 -0.374 -0.749 0.202 <br>
0.928 0.432 -0.026 -0.694 0.514 0.802 -0.204 0.158 0.157 0.027 <br>
```

Notice the "br" tag, an HTML tag used to format the data. We advise you to visit our case study's URL (http://null.edunet.uoa.gr/~elathan/rr/demo/rr.php) to get a better feeling for what we are going to do.

### 8.2 Case Study Implementation

We will develop the Ruby script for the task we described step by step. The first thing is to load all the Ruby libraries we need, using the common Ruby require command:

```ruby
require 'libRuby'
require 'net/http'
```

The first statement loads the ROOT Ruby module. The second one loads another Ruby library (included by default in any Ruby distribution), which will help us fetch our data from the Web server.

The next step is to create a new TCanvas object, as well as a histogram of floats. If you are familiar with ROOT, this is probably an everyday task for you. Doing it in Ruby is even easier than doing it in C++:

```ruby
c1 = TCanvas.new("c1","Ruby Module Case Study",200,10,600,400)
c1.SetGrid

main = TH1F.new("main","Main",100,-4,4)
main.SetFillColor kRed
```

Notice that we can freely omit parentheses. If the above snippet makes you feel uncomfortable, we advise you to have a look at Appendix B.
The next step is to fetch our data, using a single line of Ruby code:

```ruby
data = Net::HTTP.get('null.edunet.uoa.gr', '/~elathan/rr/demo/rr.php')
```

That is, 'data' is a string that contains all the information we want to use. Ruby has done all of it for us: it has allocated space, it has carried out all the required communication with the Web server, and it has given us a string that contains the 100 random numbers. Since we are interested only in the numbers, we must eliminate the 'br' tags:

```ruby
data.gsub!(/\<br\>/, "")
```

If you are not aware of regular expressions, you might feel uncomfortable with the above line, which substitutes all the 'br' tags in our 'data' string with an empty string. Regular expressions are a common technique used in computer science for pattern matching; analyzing how they work is beyond the scope of this thesis. ROOT contains its own regular expression library; however, we use Ruby's native regular expression support for simplicity.

Now, our 'data' string contains all the numbers we want to insert in our histogram, separated by spaces. It would be handy to store them in an array:

```ruby
entries = data.split(" ")
```

'entries' is an array of 100 strings, each one holding one of our 100 random numbers in text representation. The way we got 'entries' is quite trivial: we instructed Ruby to split the string into elements, using the space character as a separator. Now, we are ready to iterate over our 'entries' array and fill our histogram:

```ruby
entries.each do |entry|
  main.Fill(entry.to_f)
  main.Draw("e1p")
  c1.Update
end
```

The above snippet is a Ruby iterator. Ruby iterates over our 'entries' array. In each iteration, the variable 'entry' points to the current element of the 'entries' array; we fill the histogram with it and update the screen. Notice that we use the 'to_f' method in order to convert the string to a float representation.
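The cleaning, splitting, and conversion steps above can be exercised on a small in-memory sample, without any network access or ROOT at all (the sample string below is made up for illustration; the real data comes from the URL used earlier):

```ruby
# A made-up fragment mimicking the PHP script's output format.
data = "0.846 -0.645 <br> -0.585 0.033 <br>"

data.gsub!(/\<br\>/, "")              # strip the HTML 'br' tags
entries = data.split(" ")             # split on whitespace runs
numbers = entries.map { |e| e.to_f }  # text -> float
p numbers                             # prints [0.846, -0.645, -0.585, 0.033]
```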
The last step is to perform the fit, update the screen one final time, and force ROOT into loop mode, so that we are able to inspect the results:

```ruby
main.Fit("gaus", "q1")
main.Draw("same")
c1.Modified
c1.Update

gApplication.Run
```

The whole script is presented in Appendix C. A screenshot of the output can be seen in Figure 2.

## 9 Acknowledgements

ROOT's Ruby Interpreter Interface could never have been achieved without the valuable help and contribution of people from the scientific community. Thus, I would like to sincerely thank my supervisor, Associate Professor of the Physics Department at the University of Athens, Dr. G. Tzanakos, who encouraged me and helped me during the whole implementation of ruby-root. Dr. G. Tzanakos was the first one to guide me in the HEP world and enlighten me about the software modern HEP experiments use. He was one of the few people who believed that I could complete this project. Also, I would like to acknowledge Rene Brun, Fons Rademakers and Masaharu Goto from the ROOT team, who helped me with various technical issues regarding the ROOT framework, Juan Alcaraz for the contribution of some benchmarks written in Ruby, and Axel Naumann for porting ruby-root to the Microsoft Windows platform. Finally, I would like to thank all the Ruby programmers on the ruby-talk mailing list for the critical answers they gave me regarding technical Ruby aspects.

10 Appendices

B Migrating from C/C++ to ruby-root

Assuming you are already familiar with the ROOT framework and have used it to construct C/C++ macros, or even C/C++ applications which utilize the ROOT functionality, below we present a few rules in order to migrate your C/C++ work to Ruby easily. Each rule might be different for ruby-root and for the ROOT Ruby module.

B.1 Constructors

Ruby constructors are a little bit different from the C++ ones.
The Ruby equivalent to:

```c++
TPad *pad = new TPad();
```

is:

```ruby
pad = TPad.new()
```

Keep in mind that the parentheses can be omitted.

**Availability:** Both ruby-root and the ROOT Ruby module support this rule.

B.2 Method Calling

In C++ there are two ways to call a member function of a class instance:

```c++
TPad *pad = new TPad();
pad->Draw();
```

or:

```c++
TPad pad;
pad.Draw();
```

In Ruby there is a single, uniform way to call a method of an object. This can be seen also from the previous rule. However, in order to make it clear, we present the rule of calling methods in Ruby here:

```ruby
pad = TPad.new()
pad.Draw()
```

Keep in mind that the parentheses can be omitted.

**Availability:** Both ruby-root and the ROOT Ruby module support this rule.

### B.3 TApplication

If you don't want your script to end immediately after execution, but loop until you quit from it, then you have to write:

```ruby
tapp = TApplication.new "name"

...enter your code here...

tapp.Run
```

**Availability:** Both ruby-root and the ROOT Ruby module support this rule. However, in the ROOT Ruby module, we advise you to use the idiom:

```ruby
...enter your code here...

gApplication.Run
```

### B.4 C++ Explicit Casts

A common practice for C++ users is to grab objects through C++ explicit casts, like:

```c++
TTree *t1 = (TTree*)f->Get("t1");
```

In ruby-root, you can do this in the following way:

```ruby
t1 = f.Get("t1").to_ttree
```

The rule is to use the method `to_` followed by the ROOT class name in lower case.

**Availability:** This rule is available only in ruby-root. The equivalent rule for the ROOT Ruby module is:

```ruby
t1 = f.Get("t1").as("TTree")
```

In the near future the second idiom will be used for both ruby-root and the ROOT Ruby module.

B.5 ROOT Collections

All ROOT Collections (TList, TClonesArray, etc.), as well as the TArray classes, are implicitly converted to Ruby arrays and vice versa. Also, everything that looks like an array in C++ (i.e. `Double_t *foo`) is converted to a Ruby array and vice versa.
The following example demonstrates this:

```ruby
bases = TClass.new("TPad").GetListOfBases
bases.each do |b|
  p b
end
```

For a practical use of this rule, see the multigraph.rb script, which is part of the ruby-root testsuite.

**Availability:** This rule is available only in ruby-root. A substantial effort to implement this rule in the ROOT Ruby module has been made, and in the near future there is a plan to commit the required code to the ROOT CVS tree.

B.6 #to_ary

Sometimes a ROOT Collection may embed another ROOT Collection in one of its slots. In this case you will have to explicitly convert the second collection to a Ruby array using the `#to_ary` method. The technique is illustrated in the FirstContour.rb script, which is part of the ruby-root testsuite.

**Availability:** This rule is available only in ruby-root. A substantial effort to implement this rule in the ROOT Ruby module has been made, and in the near future there is a plan to commit the required code to the ROOT CVS tree.

B.7 C++ Enumerations

Not all of the enumerations are supported, but you have access to the most frequently used ones, in the least surprising way; e.g., the following is acceptable:

```ruby
foo.SetColor kRed
```

**Availability:** Both ruby-root and the ROOT Ruby module support this rule.

B.8 C++ Globals

Heavily used globals such as gStyle, gROOT and others are supported, with one exception: do not use gBenchmark, since there is a bug. Instead use:

```ruby
gBenchmark = TBenchmark.new.Start("bench")
```

**Availability:** Both ruby-root and the ROOT Ruby module support this rule.

B.9 C++ References

C++ references are packed and returned as a Ruby array. So a C++ call:

```c++
gRandom->Rannor(&x,&y);
```

will be in ruby-root:

```ruby
x, y = gRandom.Rannor
```

If the C++ method returns a value, the latter is the last element of the returned Ruby array.

**Availability:** This rule is available only in ruby-root.
A substantial effort to implement this rule in the ROOT Ruby module has been made, and in the near future there is a plan to commit the required code to the ROOT CVS tree.

B.10 Function Pointers

A C/C++ pointer to a function is handled in ruby-root as a user-defined Ruby method. So, whenever you want to pass a pointer to a function, you can pass the symbol of your Ruby method in the following fashion:

```ruby
def background(x, par)
  return par[0] + par[1]*x[0] + par[2]*x[0]*x[0]
end

backFcn = TF1.new("backFcn", :background, 0, 3, 3)
```

Notice the leading ":" when the Ruby method is passed to the TF1 constructor.

**Availability:** This rule is available only in ruby-root. A substantial effort to implement this rule in the ROOT Ruby module has been made, and in the near future there is a plan to commit the required code to the ROOT CVS tree.

B.11 ROOT Trees and TTree#via

At the time of writing, ruby-root supports the construction of ROOT Trees with doubles, strings and integers. In the latter case there is a bug which leaks memory, so use integers in TTrees with care. ruby-root introduces a new TTree#via method in order to fill a TTree. The following example demonstrates this:

```ruby
# fill the tree
r = TRandom.new
10000.times do |i|
  px, py = r.Rannor
  t1.via :SetBranchAddress, :Fill,
         { "px" => px, "py" => py, "pz" => px*px + py*py }
end
```

For a full example see the treerr.rb and cernbuild.rb scripts, which are part of the ruby-root testsuite. Constructs like TTree#via will be heavily used in ruby-root in the near future.

**Availability:** This rule is available only in ruby-root. A substantial effort to implement this rule in the ROOT Ruby module has been made, and in the near future there is a plan to commit the required code to the ROOT CVS tree.

### B.12 Floating values and arithmetic

Ruby's representation of floats is 'a.b', where 'a' is the integer part and 'b' the decimal one. Thus, '1.' or '.1' is not acceptable.
However, ruby-root is smart enough to accept '1' when the original C++ function requires a float argument, but reject '1.0' when the original C++ function requires an integer argument.

**Availability:** Both ruby-root and the ROOT Ruby module support this rule.

### B.13 Boolean checks

All C++ boolean types are converted to Ruby booleans, but remember that in Ruby the only false values are 'false' and 'nil'. Thus, '0' is true! So, keep in mind that you have to check explicitly whether a method returned '0':

```ruby
if (i && (i%UPDATE) == 0) # if 0 is true!
```

**Availability:** Both ruby-root and the ROOT Ruby module support this rule.

### C Scripts

#### C.1 Benchmark Scripts

stress16.rb:

```ruby
# Original port of stress16 ROOT benchmark for RubyRoot
# Author: Juan Alcaraz <Juan.Alcaraz@cern.ch>
#
# Minor adjustments for ruby-root: elathan

def stress16
  # Prototype trigger simulation for the LHCb experiment.
  # This tests nested loops with the interpreter. Expected to run
  # fast with the compiler, slow with the interpreter.
  # This code is extracted from an original macro by Hans Dijkstra (LHCb).
  # The program generates histograms and profile histograms.
  # A canvas with subpads containing the results is sent to Postscript.
  # We check graphics results by counting the number of lines in the ps file.
```
```ruby
  nbuf = 153       # buffer size
  nlev = 4         # number of trigger levels
  nstep = 50000    # number of steps
  # time needed per trigger
  itt = [1000, 4000, 40000, 400000]
  # acceptance/trigger (last always 0)
  a = [0.25, 0.04, 0.25, 0.0]
  #--->int i, il, istep, itm[192], itrig[192], it, im, ipass;
  #--->float dead, sum[10];

  # create histogram and array of profile histograms
  gRandom.SetSeed
  pipe = TH1F.new("pipe", "free in pipeline", nbuf+1, -0.5, nbuf+0.5)
  hp = []
  TProfile.Approximate
  for i in 0..nlev
    s = "buf%d" % i
    hp[i] = TProfile.new(s, "in buffers", 1000, 0, nstep, -1.0, 1000.0)
  end

  dead = 0
  sum = [nbuf] + [0]*nbuf
  itrig = [0]*nbuf
  itim = [0]*nbuf
  nsteps = 0...nstep
  nbufs = 0...nbuf
  nlevs = 0...nlev

  for istep in nsteps
    # evaluate status of buffer
    pipe.Fill(sum[0])
    if (istep+1)%10 == 0
      for i in 0..nlev
        hp[i].Fill(Float(istep), sum[i], 1.0)
      end
    end

    ipass = 0
    for i in nbufs
      it = itrig[i]
      if it >= 1
        # add 25 ns to all times
        itim[i] += 25
        im = itim[i]
        # level decisions
        for il in nlevs
          if it == il+1 and im > itt[il]
            if gRandom.Rndm > a[il]
              itrig[i] = -1
              sum[0] += 1
              sum[il+1] -= 1
            else
              itrig[i] += 1
              sum[il+1] -= 1
              sum[il+2] += 1
            end
          end
        end
      elsif ipass == 0
        itrig[i] = 1
        itim[i] = 25
        sum[0] -= 1
        sum[1] += 1
        ipass += 1
      end
    end
    dead += 1 if ipass == 0
  end
end

gbench = TBenchmark.new
gbench.Start("stress16")
stress16
gbench.Show("stress16")
```

hsum.rb:

```ruby
# ruby-root testsuite
# port of the original $ROOT/hsum.C tutorial
# (20/01/2004) --elathan <elathan@phys.uoa.gr>
#
# original header:
# To see the output of this macro,
# click begin_html <a href="gif/hsum.gif" >here</a> end_html
# Simple example illustrating how to use the C++ interpreter
# to fill histograms in a loop and show the graphics results

gROOT.Reset
c1 = TCanvas.new("c1","The HSUM example",200,10,600,400)
c1.SetGrid

gBenchmark = TBenchmark.new.Start("hsum")

# Create some histograms.
```
```ruby
total = TH1F.new("total","This is the total distribution",100,-4,4)
main = TH1F.new("main","Main contributor",100,-4,4)
s1 = TH1F.new("s1","This is the first signal",100,-4,4)
s2 = TH1F.new("s2","This is the second signal",100,-4,4)
total.Sumw2  # this makes sure that the sum of
             # squares of weights will be stored

# Fill histograms randomly
rnd = TRandom.new
rnd.SetSeed
kUPDATE = 500
total.SetMarkerStyle(21)
total.SetMarkerSize(0.7)
main.SetFillColor(16)
s1.SetFillColor(42)
s2.SetFillColor(46)
slider = nil

10000.times do |i|
  xmain = rnd.Gaus(-1,1.5)
  xs1 = rnd.Gaus(-0.5,0.5)
  xs2 = rnd.Landau(1,0.15)
  main.Fill(xmain)
  s1.Fill(xs1,0.3)
  s2.Fill(xs2,0.2)
  total.Fill(xmain)
  total.Fill(xs1,0.3)
  total.Fill(xs2,0.2)
  if (i && (i%kUPDATE) == 0)
    if (i == kUPDATE)
      total.Draw("e1p")
      main.Draw("same")
      s1.Draw("same")
      s2.Draw("same")
      c1.Update
      slider = TSlider.new("slider","test",4.2,0,4.6,total.GetMaximum,38)
      slider.SetFillColor(46)
    end
    slider.SetRange(0,i/10000.0) if slider
    c1.Modified
    c1.Update
  end
end

slider.SetRange(0,1)
total.Draw("sameaxis") # to redraw axis hidden by the fill area
c1.Modified
gBenchmark.Show("hsum")
gApplication.Run
```

#### C.2 Case Study Script

```ruby
require 'libRuby'
require 'net/http'

c1 = TCanvas.new("c1","Ruby Module Case Study",200,10,600,400)
c1.SetGrid

main = TH1F.new("main","Main",100,-4,4)
main.SetFillColor kRed

data = Net::HTTP.get('null.edunet.uoa.gr', '/~elathan/rr/demo/rr.php')
data.gsub!(/\<br\>/, "")
entries = data.split(" ")

entries.each do |entry|
  main.Fill(entry.to_f)
  main.Draw("e1p")
  c1.Update
end

main.Fit("gaus", "q1")
main.Draw("same")
c1.Modified
c1.Update

gApplication.Run
```
SIMD Intrinsics on Managed Language Runtimes

Alen Stojanov, Department of Computer Science, ETH Zurich, Switzerland, astojanov@inf.ethz.ch
Ivaylo Toskov, Department of Computer Science, ETH Zurich, Switzerland, itoskov@student.ethz.ch
Tiark Rompf, Department of Computer Science, Purdue University, USA, tiark@purdue.edu
Markus Püschel, Department of Computer Science, ETH Zurich, Switzerland, pueschel@inf.ethz.ch

Abstract

Managed language runtimes such as the Java Virtual Machine (JVM) provide adequate performance for a wide range of applications, but at the same time, they lack much of the low-level control that performance-minded programmers appreciate in languages like C/C++. One important example is the intrinsics interface that exposes instructions of SIMD (Single Instruction Multiple Data) vector ISAs (Instruction Set Architectures). In this paper we present an automatic approach for including native intrinsics in the runtime of a managed language. Our implementation consists of two parts. First, for each vector ISA, we automatically generate the intrinsics API from the vendor-provided XML specification. Second, we employ a metaprogramming approach that enables programmers to generate and load native code at runtime. In this setting, programmers can use the entire high-level language as a kind of macro system to define new high-level vector APIs with zero overhead. As an example use case we show a variable precision API. We provide an end-to-end implementation of our approach in the HotSpot VM that supports all 5912 Intel SIMD intrinsics from MMX to AVX-512. Our benchmarks demonstrate that this combination of SIMD and metaprogramming enables developers to write high-performance, vectorized code on an unmodified JVM that outperforms the auto-vectorizing HotSpot just-in-time (JIT) compiler and provides tight integration between vectorized native code and the managed JVM ecosystem.
CCS Concepts: · Computer systems organization → Single instruction, multiple data; · Software and its engineering → Virtual machines; Translator writing systems and compiler generators; Source code generation; Runtime environments

Keywords: SIMD instruction set, Managed Languages, JVM, Scala, Staging, Metaprogramming

1 Introduction

Managed high-level languages are designed to be portable and to support a broad range of applications. For the programmer, the price is reduced access to detailed and low-level performance optimizations. In particular, SIMD vector instructions on modern architectures offer significant parallelism, and thus potential speedup, but neither languages like Java, JavaScript, Python, or Ruby, nor their managed runtimes provide direct access to SIMD facilities. This means that SIMD optimizations, if available at all, are left to the virtual machine (VM) and the built-in just-in-time (JIT) compiler to carry out automatically, which often leads to suboptimal code. As a result, developers may be pushed to use low-level languages such as C/C++ to gain access to the intrinsics API. But leaving the high-level ecosystem of Java or other languages also means abandoning many high-level abstractions that are key for the productive and efficient development of large-scale applications, including access to a large set of libraries. To reap the benefits of both high-level and low-level languages, developers using managed languages may write low-level native C/C++ functions that are invoked by the managed runtime. In the case of Java, developers could use the Java Native Interface (JNI) to invoke C functions with specific naming conventions. However, this process of dividing the application logic between two languages creates a significant gap in the abstractions of the program, limits code reuse, and impedes clear separation of concerns.
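To make the JNI naming convention mentioned above concrete, the sketch below derives the exported C symbol name for a native method. `JniName` is a hypothetical helper of our own, not part of the JNI API, and it is simplified: the full JNI specification additionally escapes underscores (as `_1`) and non-ASCII characters.

```java
// Sketch of the JNI naming convention: the exported C function name is
// "Java_" + fully-qualified class name (dots become underscores) + "_" + method name.
// Simplified: real JNI mangling also escapes '_' and Unicode characters.
public class JniName {
    public static String symbol(String packageName, String className, String method) {
        String fqcn = packageName.isEmpty() ? className : packageName + "." + className;
        return "Java_" + fqcn.replace('.', '_') + "_" + method;
    }
}
```

For example, a `native` method `saxpy` declared in a (hypothetical) class `ch.ethz.Kernel` would have to be implemented in C as `Java_ch_ethz_Kernel_saxpy`.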
Further, the native code must be cross-compiled ahead of time, which is an error-prone process that requires complicated pipelines to support different operating systems and architectures, and thus directly affects code maintenance and refactoring. To address these problems, we propose a systematic and automated approach that gives developers access to SIMD instructions in the managed runtime, eliminating the need to write low-level C/C++ code. Our methodology supports the entire set of SIMD instructions in the form of embedded domain-specific languages (eDSLs) and consists of two parts. First, for each architecture, we automatically generate ISA-specific eDSLs from the vendor's XML specification of the SIMD intrinsics. Second, we provide the developer with the means to use the SIMD eDSLs to develop application logic, which automatically generates native code inside the runtime. Instead of executing each SIMD intrinsic immediately when invoked by the program, the eDSLs provide a staged or deferred API, which accumulates intrinsic invocations along with auxiliary scalar operations and control flow, batches them together in a computation graph, and generates a native kernel that executes them all at once, when requested by the program. This makes it possible to interleave SIMD intrinsics with the generic language constructs of the host language without switching back and forth between native and managed execution, enabling programmers to build both high-level and low-level abstractions, while running SIMD kernels at full speed.

This paper makes the following contributions:

1. We present the first systematic and automated approach that supports the entire set of SIMD instructions, automatically generated from the vendor specification, in a managed high-level language. The approach is applicable to other low-level languages, provided the managed high-level language supports native code binding.
2. In doing so, we show how to use metaprogramming techniques and runtime code generation to give back low-level control to developers in an environment that typically hides architecture-specific details.
3. We provide an end-to-end implementation of our approach within the HotSpot JVM, which provides access to all Intel SIMD intrinsics from MMX to AVX-512.
4. We show how to use the SIMD eDSLs to build new abstractions using host language constructs. Programmers can use the entire managed language as a form of macro system to define new vectorized APIs with zero overhead. As an example, we present a "virtual ISA" for variable precision arithmetic.
5. We provide benchmarks that demonstrate significant performance gains of explicit SIMD code versus code auto-vectorized by the HotSpot JIT compiler.

Our work focuses on the JVM and Intel SIMD intrinsics functions, but would equally apply to other platforms. For the implementation of computation graphs and runtime code generation, we use the LMS (Lightweight Modular Staging) compiler framework [20].

2 Background

We provide background on intrinsics functions, JVMs, and the LMS metaprogramming framework that we use.

2.1 Intrinsics

Intrinsics are compiler-built-in functions that usually map into a single or a small number of assembly instructions. During compilation, they are inlined to remove calling overhead. This way they provide the programmer with assembly-like functionality, without having to worry about register allocation and instruction scheduling. SIMD intrinsics give access to data parallel instructions in vector ISAs, such as NEON on ARM processors, or the SSE and AVX families on Intel. We focus on the x86 architecture and the associated SIMD intrinsics that are available in modern C/C++ compilers, such as GCC, Clang/LLVM, and Intel ICC. Specifically, these include the following ISAs:

- MMX - operating on 64-bit wide registers; provides integer operations only.
- SSE / SSE2 / SSE3 / SSSE3 / SSE4.1 / SSE4.2 - operating on 128-bit wide registers; provides integer, 32-bit, and 64-bit floating point operations, string operations, and cache and memory management operations.
- AVX / AVX2 - ISAs that expand the SSE operations to 256-bit wide registers and provide extra operations for manipulating non-contiguous memory locations.
- FMA - an extension to the SSE and AVX ISAs that provides fused multiply-add operations.
- AVX-512 - extends AVX to operate on 512-bit registers and consists of multiple parts called F / BW / CD / DQ / ER / IFMA52 / PF / VBMI / VL.
- KNC - the first production version of Intel's Many Integrated Core (MIC) architecture, which provides operations on 512-bit registers.

Additionally, we also include:

- SVML - an intrinsics short vector math library, built on top of the ISAs mentioned above.

Figure 1b shows the number of intrinsics per ISA:

<table> <thead> <tr> <th>ISA</th> <th>Count</th> </tr> </thead> <tbody> <tr> <td>MMX</td> <td>124</td> </tr> <tr> <td>SSE</td> <td>154</td> </tr> <tr> <td>SSE2</td> <td>236</td> </tr> <tr> <td>SSE3</td> <td>32</td> </tr> <tr> <td>SSE4.1</td> <td>61</td> </tr> <tr> <td>SSE4.2</td> <td>19</td> </tr> <tr> <td>AVX</td> <td>188</td> </tr> <tr> <td>AVX2</td> <td>191</td> </tr> <tr> <td>AVX-512</td> <td>3857</td> </tr> <tr> <td>FMA</td> <td>32</td> </tr> <tr> <td>KNC</td> <td>601</td> </tr> <tr> <td>SVML</td> <td>406</td> </tr> </tbody> </table>

2.2 Java Virtual Machines

There are many active implementations of the JVM, including the open source IBM J9 [8], Jikes RVM [4], Maxine [29], JRockit [15], and the proprietary SAP JVM [24], CEE-J [26], and JamaicaVM [25]. HotSpot remains the primary reference JVM implementation, used by both Oracle Java and OpenJDK. Each JVM implementation supports Java Standard Edition or Micro Edition, tailored for a particular need: a particular target machine or microarchitecture, embedded systems, or operating system, or it provides an additional garbage collector, resource control, or parallelism model. However, none of the active JVM implementations provides support for explicit vectorization or intrinsics, nor permits inlining of assembly code directly in the Java source, due to portability issues.

The HotSpot JVM, which is the focus of this study, provides JIT compilation of Java bytecode as a black box. The developer has no control over, nor receives any feedback on, the compilation phases except through coarse-grained command-line and debug options [16]. There are two flavors of the VM: a client mode focused on latency, and a server mode tuned for throughput. We only focus on the Server VM, as it is tuned to maximize peak operating speed. The Server VM offers tiered compilation of bytecode using the C1 and C2 compilers. C1 is a fast, lightly optimizing bytecode compiler, while C2 performs more aggressive optimizations. When JVM applications are started, the HotSpot VM starts interpreting bytecode. It detects computation-intensive hot spots in the code via profiling, and proceeds to compile the bytecode of frequently used functions with C1. Once further thresholds are reached, functions may be compiled using C2. C2 supports autovectorization using Superword Level Parallelism (SLP) [11]. SLP detects groups of isomorphic instructions and replaces them with SIMD instructions, which results in a lightweight vectorization. The SLP approach is limited and cannot optimize across loop iterations, nor can it detect idioms such as reductions.

2.3 Lightweight Modular Staging

Lightweight Modular Staging (LMS) [20, 21] is a framework for runtime code generation and for building compilers for embedded DSLs in Scala. LMS makes pervasive use of operator overloading to make code generation blend in with normal programming. The core abstraction is a type constructor Rep[T] that marks code expressions. For example, executing a + b, where a and b are two Rep[Int] expressions, will create a program expression that represents the addition a' + b', where a' and b' are the program expressions that a and b evaluate to. This form of operator overloading is extended to if/else expressions and other built-in constructs [2, 19]. The combined program expression can be unparsed to source code, in this paper to C or LLVM code, compiled dynamically, and loaded into the running JVM.

3 Intrinsics in the JVM

In this section, we present our two-tier approach for making the Intel SIMD intrinsics available in the JVM. First we automatically generate SIMD eDSLs, each implemented as a Scala class that corresponds to one of the vector ISAs in Figure 1b.
Then, we show how to use these eDSLs to generate native code at runtime.

Table 2. Type mappings between JVM and C/C++ types.

<table> <thead> <tr> <th>JVM Types</th> <th>C/C++ Types</th> </tr> </thead> <tbody> <tr> <td>Float</td> <td>float</td> </tr> <tr> <td>Double</td> <td>double</td> </tr> <tr> <td>Byte</td> <td>char</td> </tr> <tr> <td>Short</td> <td>int16_t</td> </tr> <tr> <td>Int</td> <td>int32_t</td> </tr> <tr> <td>Long</td> <td>int64_t</td> </tr> <tr> <td>__m128d</td> <td>128-bit double vector</td> </tr> <tr> <td>__m256d</td> <td>256-bit double vector</td> </tr> <tr> <td>__m512d</td> <td>512-bit double vector</td> </tr> <tr> <td>__m128i</td> <td>128-bit integer vector</td> </tr> <tr> <td>__m256i</td> <td>256-bit integer vector</td> </tr> <tr> <td>__m512i</td> <td>512-bit integer vector</td> </tr> </tbody> </table>

3.1 Type System for SIMD Intrinsics in the JVM

The JVM has no notion of SIMD vector types, so we build abstract classes to mark the types of DSL expressions that represent SIMD intrinsics functions in LMS:

```scala
abstract class Def[T] // a definition: a computation node in the graph
abstract class Exp[T] // an expression: a constant or a symbolic reference
```

SIMD intrinsics functions take primitive arguments that correspond to low-level C/C++ primitive types. The primitive types in the JVM have a fixed width, and therefore a direct mapping can be established with C/C++ primitives.
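The direct mapping is possible because JVM primitive widths are fixed by the JVM specification, independent of the platform. This can be checked from Java itself; the small helper below (`PrimitiveWidths` is our own illustration, not from the paper) reports the bit width behind each mapping in Table 2:

```java
// JVM primitive widths are fixed by the JVM specification, which is what makes
// a direct mapping onto C/C++ fixed-width types (int16_t, int32_t, ...) possible.
public class PrimitiveWidths {
    public static int bits(String jvmType) {
        switch (jvmType) {
            case "Byte":   return Byte.SIZE;    // maps to char (8 bits)
            case "Short":  return Short.SIZE;   // maps to int16_t
            case "Int":    return Integer.SIZE; // maps to int32_t
            case "Long":   return Long.SIZE;    // maps to int64_t
            case "Float":  return Float.SIZE;   // maps to float
            case "Double": return Double.SIZE;  // maps to double
            default: throw new IllegalArgumentException(jvmType);
        }
    }
}
```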
Some intrinsics, however, require the use of unsigned types that are not supported natively in the JVM:

```c
unsigned int _mm_crc32_u16 (unsigned int crc, unsigned short v)
```

To mitigate this problem, we use the Scala Unsigned package, which implements unsigned types and operations on top of the signed types available in the JVM. Table 2 shows the type mapping between the 12 primitives, which in most cases is straightforward, except for the JVM char, which maps to a 16-bit integer type, since JVM strings are encoded in UTF-16. Arrays of primitive types in the JVM and in C/C++ are isomorphic: both represent a contiguous memory region of a certain primitive type. Therefore Array[T] maps to a memory pointer T* in the low-level SIMD intrinsics.

3.2 Automatic Generation of ISA-specific eDSLs

LMS provides a relatively simple interface to define eDSLs, but adding more than 5000 functions by hand would be tedious and error-prone. Our approach generates the LMS eDSLs automatically from the XML specification provided by the Intel Intrinsics Guide; these are then packed as a jar file that is later published in the Maven Central Repository [5] for deployment. At the time of writing, we use the latest version of the intrinsics specifications, stored as the data-3.3.16.xml file, and build the generator such that it anticipates future extensions of the specifications. Figure 1 shows a high-level overview of the generation process, which we explain step by step next.

**Figure 1.** Generating SIMD intrinsics eDSLs from vendor specification.

```xml
<intrinsic rettype='__m256d' name='_mm256_add_pd'>
  <type>Floating Point</type>
  <CPUID>AVX</CPUID>
  <category>Arithmetic</category>
  <parameter varname='a' type='__m256d'/>
  <parameter varname='b' type='__m256d'/>
  <description>
    Add packed double-precision (64-bit) floating-point
    elements in "a" and "b", and store the results in "dst".
  </description>
  <operation>
  FOR j := 0 to 3
      i := j*64
      dst[i+63:i] := a[i+63:i] + b[i+63:i]
  ENDFOR
  dst[MAX:256] := 0
  </operation>
  <instruction form='ymm, ymm, ymm'/>
  <header>immintrin.h</header>
</intrinsic>
```

**Figure 2.** XML specification of the _mm256_add_pd intrinsic.

The specification provides, for each intrinsic, the name of the function, its return type, an ordered list of the function's arguments with their corresponding types, a CPUID parameter that corresponds to the ISA set, and a category parameter.

**Generate ISA-specific DSL in LMS.** For each intrinsic function we implement four building blocks that define the eDSL. These are represented in terms of implementation classes provided by the LMS framework. The classes Def[T] and Exp[T] together define a computation graph. Subclasses of Def[T] implement graph nodes that represent individual computations, e.g., Plus(a, b). Here, a and b are values of type Exp[T]: either constants Const(.) or symbols Sym(id) that refer to other graph nodes through a numeric index id. The four necessary building blocks are as follows:

1. A definition of the intrinsic function, represented as a subclass of Def[T].
2. An implicit conversion from expression Exp[T] to definition Def[T], looking up a computation node given a symbolic reference.
3. A mirroring function that converts a Def[T] into an expression Exp[T], potentially applying a transformation.
4. An unparsing routine that converts each Def[T] into a C/C++ string.

To complete the first part, we define IntrinsicsDef[T], an abstract class that each intrinsics definition will inherit:

```scala
abstract class IntrinsicsDef[T: Manifest] extends Def[T] {
  val category: List[IntrinsicsCategory]
  val intrinsicType: List[IntrinsicsType]
  val performance: Map[MicroArchType, Performance]
  val header: String
}
```

Then for each intrinsic function, we define a Scala case class that corresponds to the intrinsic function's name, its input arguments, and its return type. Each case class contains the category, the type of the intrinsic, and performance information when available.
Additionally, we also include the header where the C/C++ intrinsic is defined:

```scala
case class MM256_ADD_PD(
    a: Exp[__m256d], b: Exp[__m256d])
    extends IntrinsicsDef[__m256d] {
  val category = List(Arithmetic)
  val intrinsicType = List(FloatingPoint)
  val performance = Map.empty[MicroArchType, Performance]
  val header = "immintrin.h"
}
```

With the current definition we allow a particular intrinsic to pertain to several categories. The header information gives us the control to include the correct header when unparsing to C/C++ code. Performance information is included but is not used in the staging process. Next we generate Scala code for the implicit conversion from intrinsics expressions to definitions. This routine is essential in LMS, as it provides automatic conversion of the staged code into static single assignment (SSA) form. In most cases it is sufficient to rely on the Scala compiler to automatically perform the implicit conversion:

```scala
def _mm256_add_pd(a: Exp[__m256d], b: Exp[__m256d]) = MM256_ADD_PD(a, b)
```

The LMS framework supports DSL transformations by substitution. Once a substitution is defined, LMS creates new definitions. However, when no substitution is available, a definition has to be converted to an expression through a mirroring routine that converts a Def[T] back to an Exp[T], potentially creating new definitions for subexpressions as part of the transformation:

```scala
override def mirror[A: Manifest](e: Def[A], f: Transformer)
    (implicit pos: SourceContext): Exp[A] = (e match {
  case MM256_ADD_PD(a, b) => _mm256_add_pd(f(a), f(b))
  // ... a lot more patterns to match
  case _ => super.mirror(e, f)
}).asInstanceOf[Exp[A]]
```

Once code is generated for all these routines and for each intrinsic, the final step is to generate code to perform the unparsing of the DSL into C code.
The unparsing routine is done similarly to the mirroring routine, by pattern matching each DSL definition to produce the corresponding C expression:

```scala
override def emitNode(sym: Sym[Any], rhs: Def[Any]) = rhs match {
  case iDef@MM256_ADD_PD(a, b) =>
    headers += iDef.header
    emitValDef(sym, s"_mm256_add_pd(${quote(a)}, ${quote(b)})")
  // ... a lot more patterns to match
  case _ => super.emitNode(sym, rhs)
}
```

**Infer intrinsic mutability.** As mentioned before, when code is generated for the implicit conversion of an intrinsics expression to the intrinsics definition, we can rely on the Scala compiler to match the correct implicit method. This works correctly for immutable expressions, but not all intrinsics are immutable. For example, each intrinsic that loads from or stores to memory creates effects that have to be handled by LMS. The semantics of these effects are essential for scheduling the DSL. To resolve this problem, we use the category information of each intrinsic (see Figure 2), and implement a conservative heuristic to generate the effects:

- Each time an intrinsic with a load category is discovered, we generate a read effect on each argument that is a memory location.
- Each time an intrinsic with a store category is discovered, we generate a write effect on each argument that is a memory location.

For example, an AVX load of 4 doubles has the form:

```scala
// Sketch (abbreviated in the original): the memory argument
// receives a read effect when the load node is created.
def _mm256_load_pd[A[_], U: Integral](
    mem_addr: Exp[A[Double]], offset: Exp[U])
    (implicit cont: Container[A]): Exp[__m256d] =
  cont.read(mem_addr)(MM256_LOAD_PD(mem_addr, offset))
```

The heuristic is invoked on each intrinsic that performs loads, stores, maskstores, maskloads, gathers, scatters, and other memory-related operations.

**Split each ISA-specific DSL into subclasses.** The JVM has a hard limit of 64KB on the size of each method, which is an obstacle when generating the unparsing and mirroring routines for a large ISA such as AVX-512 or KNC. To avoid this obstacle, while keeping the LMS design pattern, we split the ISA-specific DSLs into subclasses that inherit from each other.

3.3 Developing Explicitly Vectorized Code in the JVM Using SIMD eDSLs

Figure 3 gives a high-level overview of how to use explicit vectorization in the JVM. The process consists of two parts: compile-time tasks, done by the high-performance code developer, and runtime tasks that are done automatically by LMS and our compiler pipeline. Specifically, the compile-time tasks of the developer comprise four steps:

1. Implement a native function placeholder that will represent the vectorized code.
2. Create a DSL instance by instantiating one or mixing in several ISA-specific eDSLs.
3. Implement the SIMD logic as a staged function.
4. Call the provided compile routine to generate, compile, and link the code in the JVM.

**Figure 3.** Compile-time developer steps and runtime pipeline: detect available C/C++ compilers, inspect the system through CPUID, infer available ISAs and compiler flags, let LMS remove abstraction and generate C code, and compile and link the code to the JVM.

**Figure 4.** A complete implementation of the BLAS 1 routine SAXPY in the JVM using AVX and FMA SIMD intrinsics.

After the four steps are completed, and the JVM program is started, the compiler pipeline is invoked with the compile routine. This will perform system inspection, search for available compilers, and opportunistically pick the optimal compiler available on the system. In particular, it will attempt to find icc, gcc or llvm/clang.
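The opportunistic compiler selection can be sketched as follows. `CompilerPicker`, its preference order, and the injected availability check are hypothetical illustrations, not the paper's actual pipeline; injecting the check keeps the policy testable without any compiler installed.

```java
import java.util.Arrays;
import java.util.List;
import java.util.Optional;
import java.util.function.Predicate;

// Sketch: pick the first available compiler from a priority-ordered candidate list.
// A real implementation would probe the PATH instead of taking a predicate.
public class CompilerPicker {
    public static Optional<String> firstAvailable(List<String> candidates,
                                                  Predicate<String> isInstalled) {
        return candidates.stream().filter(isInstalled).findFirst();
    }

    public static Optional<String> pick(Predicate<String> isInstalled) {
        // Illustrative preference order: icc, then gcc, then clang.
        return firstAvailable(Arrays.asList("icc", "gcc", "clang"), isInstalled);
    }
}
```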
After a compiler is found, the runtime will determine the target CPU, as well as the underlying microarchitecture, to derive the available ISAs. This allows us to have full control over the system, as well as to pick the best mix of compiler flags for each compiler. Once this process is completed, the user-defined staged function is executed, which assembles a computation graph of SIMD instructions. From this computation graph, LMS generates vectorized C code. This code is then automatically compiled as a dynamic library with the set of derived compiler flags, and linked back into the JVM. To link the native code into the JVM, JNI requires the C function name to carry the Java_ prefix, followed by the package name, the class name, and the name of the native function. The compile routine automates this process using JVM reflection and some lightweight use of Scala macros. Through this automation, we ensure interoperability between the native function and the staged function, creating code that is robust to modifications and refactoring, and eliminating the need for the developer to recompile the native code each time major revisions are performed on the low-level code or the class container. Figure 4 illustrates a complete and self-contained implementation of the BLAS 1 routine SAXPY [12], which computes \( y = y + ax \) for given vectors \( x \), \( y \) and scalar \( a \). The expression \( \text{loop}(\ldots) \) creates a staged loop in the LMS computation graph.

3.4 Evaluation

To assess the viability of our approach we consider two ubiquitous kernel functions: the aforementioned SAXPY and matrix multiplication.

**Experimental setup.** We perform the tests on a Haswell-enabled processor, an Intel Xeon CPU E3-1285L v3 3.10GHz with 32GB of RAM, running Debian GNU/Linux 8 (jessie), kernel 3.16.43-2+deb8u3. The available compilers are gcc 4.9.2-10 and Intel icc 17.0.0. The installed JVM is HotSpot 64-Bit Server 25.144-b01, supporting Java 1.8.
To avoid the effects of frequency scaling and resource sharing on the measurements, Turbo Boost and Hyper-Threading are disabled. We use ScalaMeter [17] to perform the benchmarks. To obtain precise results, we select a pre-configured benchmark that forks a new JVM and performs measurements inside the clean instance. The new instance has a compilation threshold of 100 (-XX:CompileThreshold=100) and we perform at least 100 warm-up runs on all test cases to trigger the JIT compiler. Each test case is performed on a warmed cache. Tests are repeated 30 times, and the median of the runtimes is taken. We show the results as performance, measured in flops per cycle. To inspect the JIT compilation of the HotSpot JVM, we use -XX:UnlockDiagnosticVMOptions, which unlocks the diagnostic JVM options, and -XX:CompileCommand=print to output the generated assembly. In all test cases we observe full tiered compilation, starting from the C1 compiler up to the last phase of the C2 compiler. For a fair comparison between the JVM and our generated intrinsics code, we consider the C2-compiled version of the bytecode only, excluding the JIT warm-up time and the LMS generation overhead.

**SAXPY.** We compare the generated SAXPY vector code, shown in Figure 4, against an equivalent Java implementation:

```java
public class JSaxpy {
    public void apply(float[] a, float[] b, float s, int n) {
        for (int i = 0; i < n; i++)
            a[i] += b[i] * s; // SAXPY: a = a + s*b
    }
}
```

Figure 6a shows the performance comparison. First we note the similarity in performance, which is not surprising since SAXPY has low operational intensity and the simplicity of the code enables efficient autovectorization. Indeed, the assembly diagnostics confirm this, but reveal that the JVM only uses SSE whereas our staged version uses AVX and FMA, which explains the better performance for larger sizes. For small sizes that are L1 cache resident, the Java implementation does better. This is because JNI methods are not inlined and incur an additional invocation cost.

**Matrix-matrix multiplication (MMM).** For the second benchmark we chose MMM, which has a high operational intensity and is known to benefit from various optimizations such as blocking and vectorization [32]. We consider three versions. The first is a standard Java implementation of a triple MMM loop. The other two versions are blocked versions of MMM with a block size of 8, the first implemented in Java, and the second implemented using AVX intrinsics in Scala. For simplicity, we assume that the matrix has size \( n = 8k \), and provide the implementation in Figure 5. Our Scala implementation uses the SIMD intrinsics together with high-level constructs of the Scala language, including pattern matching (lines 5, 10, 19, 20, etc.), lambdas (lines 4, 10, 34), Scala collections (lines 4, 34, etc.), closures (line 45), and others that are not available in low-level C code. Once LMS removes the abstraction overhead, the MMM function results in a high-performance implementation. The performance comparison in Figure 6b shows that the use of explicit vectorization through SIMD intrinsics can offer improvements of up to 5x over the blocked Java implementation, and over 7.8x over the baseline triple-loop implementation. The assembly analysis shows that the C2 compiler unrolls the hot loops in both Java versions, but does not generate SIMD instructions, which explains the low performance.

**Automatic SIMD eDSL generator.** The predecessor of the Intel Intrinsics Guide web application was a Java application sharing the same name. Older versions of both the Java and the web application contained older versions of the intrinsics specifications, e.g., without AVX-512. However, Intel does not offer these versions, and continuously updates the XML specifications, improving the description and performance information of each intrinsic function.
Using tools such as the Wayback Machine, a digital archive that mirrors website states at a given date, we were able to salvage older, pre-captured iterations of the intrinsics specifications, shown in Table 3. We then instructed our eDSL generator to re-generate each ISA-specific eDSL. Our results show that the eDSL generator is robust to minor changes in the XML specifications, and was able to retrospectively generate eDSLs for specifications from recent years. We believe that if Intel uses the same XML schema for new releases, our generator should be robust to new ISA updates, as long as a new ISA has properties similar to its predecessors.

3.5 Limitations

Our approach provides low-level control for performance optimizations to the Java developer, but comes at a price. We discuss a number of technical issues that would be good to resolve to further improve ease of use and maintainability.

Currently, there is no mechanism to ensure the isomorphism between the native function placeholder and the staged function. As a result, it is the responsibility of the developer to define this isomorphic relation at compile time. The current use of Scala macros makes the code robust to refactoring and modifications, which is quite convenient compared to manually maintaining the isomorphism between native C/C++ and JVM code. A more diligent use of Scala macros could potentially resolve this problem and ensure a complete isomorphic binding of JNI and staged functions.

LMS does not provide any mechanism to deal with exceptions such as segfaults in generated code. Therefore it is the responsibility of the developer to write valid SIMD code. LMS is also not optimized for fast code generation, which might result in an overhead surpassing the HotSpot interpretation speed when used to generate functions that are computationally light.

**Figure 6.** Performance analysis: Java implementation vs LMS intrinsics generated code.

**Table 3.** Intel Intrinsics Guide XML specifications.

<table> <thead> <tr> <th>Specification</th> <th>Date</th> <th>Specification</th> <th>Date</th> </tr> </thead> <tbody> <tr> <td>data-3.3.11.xml</td> <td>27.07.2015</td> <td>data-3.4.xml</td> <td>07.09.2017</td> </tr> </tbody> </table>

Another limitation is a consequence of the complex memory model of the HotSpot JVM. Once arrays are used in the native code, GetPrimitiveArrayCritical must be invoked to obtain the memory address of the array. Depending on the state of the garbage collector (GC), the array might end up on different segments of the heap, which could result in a copy once the native code tries to access the memory, or the JVM could decide to temporarily disable the GC. Although we did not experience an array copy in any test case performed, we believe that the use of LMS intrinsics is best suited for compute-bound problems, where the copy overhead is amortized by the fast runtime of the SIMD instructions performed upon each JNI invocation. Some of the issues with JVM arrays can be avoided by using Java NIO buffers or off-heap memory allocated with the sun.misc.Unsafe package.

4 Build Your Own Virtual ISA

In the previous section, we demonstrated the use of SIMD intrinsics for developing high-performance code, based on high-level constructs of the Scala language. However, with the metaprogramming and staging provided by LMS, we can also use the SIMD intrinsics to build new low-level abstractions and provide functionality similar to the SVML short vector math library that is typically implemented by low-level C/C++ compilers. As an example, we build abstractions for low-precision arithmetic and, in particular, building blocks for the stochastic gradient descent (SGD) algorithm.
SGD is currently among the most popular algorithms in machine learning, used for training neural networks [1, 22]. It consists of two main building blocks: a dot-product operator and a scale-and-add operator. The use of low precision is an important optimization in SGD for deep learning, as it reduces both computation time and data movement for increased performance and efficiency [30]. In this section we build a virtual variable-precision ISA that implements the dot-product operator, operating on arrays of 32, 16, 8 and 4-bit precision. For 32 and 16-bit we use floating point, which is natively supported by the hardware; for the lower-precision formats, we use quantized arrays [30]. Quantization is a lossy compression technique that maps continuous values to a finite set of fixed-bit-width numbers. For a given vector \( \mathbf{v} \) of size \( n \) and precision of \( b \) bits, we first derive a factor \( s_b \) that scales the vector elements \( v_i \) into the representable range:

\[ s_b = \frac{2^{b-1} - 1}{\max_{i \in [1, n]} |v_i|} \]

The scaled \( v_i \) are then quantized stochastically:

\[ v_i \rightarrow \lfloor v_i \cdot s_b + \mu \rfloor \]

where \( \mu \) is drawn uniformly from the interval \((0, 1)\). With this, a quantized array consists of one scaling factor and an array of quantized \( b \)-bit values.

4.1 Implementation

**32-bit.** For the 32-bit version, we use the built-in hardware support for floating point. In Java, we can only express scalar (non-vectorized) code to multiply and add values; using our LMS intrinsics we have access to AVX2 and FMA.

**16-bit.** For the 16-bit version we use a half-precision floating point format, available as a separate ISA extension called FP16C that provides instructions to convert a 32-bit float into a 16-bit float, and vice versa. We use these instructions to load and store the data in 16-bit format, and perform computations in the 32-bit format.
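The two quantization formulas above can be made concrete with a scalar sketch. The following Python snippet is purely illustrative: the function name and signature are our own and not part of the paper's LMS code.

```python
import math
import random

def quantize(v, b, rand=random.random):
    """Stochastically quantize vector v into signed b-bit integers.
    Returns (scale, quantized values); an element is recovered as q_i / scale."""
    scale = (2 ** (b - 1) - 1) / max(abs(x) for x in v)  # the factor s_b
    # floor(v_i * s_b + mu), with mu uniform in (0, 1): rounds up with
    # probability equal to the fractional part of the scaled value
    q = [math.floor(x * scale + rand()) for x in v]
    return scale, q
```

Fixing `rand` to a constant 0.5 degenerates the scheme to round-to-nearest, which is convenient for testing.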
In Java, there is no access to half-precision floating point; thus, instead, we quantize the values as shown before to type short.

**8-bit.** We base our 8-bit version on Buckwild! [23]. Both the LMS intrinsics and the Java implementation operate on quantized values, using a scaling factor and an array of type byte to hold the 8-bit two's complement values.

**4-bit.** We base this version on the analysis in the ZipML framework [33]. The values are not two's complement, but a sign bit followed by the base in binary format, and are stored as pairs inside the 8-bit values of a byte array.

**Scala implementation.** In Scala, we can abstract the precision as a number that reflects the bit length of the format, and provide two virtual intrinsics functions:

```scala
def dot_ps     (bits: Int, x: Rep[Array[_]], y: Rep[Array[_]], i: Rep[Int]): Rep[__m256]
def dot_ps_step(bits: Int): Int
```

dot_ps computes a partial dot product starting at a given index, accumulated as 8 packed floats, and dot_ps_step returns the number of elements processed by dot_ps for a given bit length. For example, in the case of the 32, 16 and 8-bit versions, 32 elements are processed at a time, and in the case of the 4-bit version, 128 elements at a time. Finally, the resulting dot product with variable precision is a sum reduction of the 8 floats stored in the acc variable:

```scala
var acc = mm256_setzero_ps()
val increment = dot_ps_step(bits)
forloop(0, len, fresh[Int], increment, i => {
  acc = mm256_add_ps(acc, dot_ps(bits, x, y, i))
})
reduce_sum(acc)
```

Figure 7 shows the obtained results. Our 4-bit implementation outperforms HotSpot by a factor of up to 40x, the 8-bit by up to 9x, the 16-bit by up to 4.8x, and the 32-bit version by up to 5.4x. There are several reasons for the speedups obtained with the use of SIMD intrinsics.
In the 32-bit case, we see the limitation of SLP to detect and optimize reductions. In the 16-bit case, there is no way in Java to obtain access to an ISA such as FP16C. And in the 8-bit and 4-bit cases, Java is severely outperformed, since it performs type promotion when dealing with integers. However, the largest speedup of 40x in the 4-bit case is due to the domain knowledge used for implementing the dot product, which the HotSpot compiler cannot synthesize with a lightweight autovectorization such as SLP.

5 Related Work

We review different lines of related work.

**Explicit vectorization in the JVM.** The first approach to expose data parallelism was the implementation of the Java Vectorization Interface (JVI) as part of the Jitrino JIT compiler [13]. JVI is designed as an abstract vector interface that provides a set of methods as vector operators. These methods are later compiled to different vector ISAs, such as SSE and AVX. The approach offers competitive results in some cases, but is limited in the SIMD instructions it supports and lacks subsequent iterations of post-AVX ISAs. Similarly to JVI, Oracle has ongoing research developing cross-platform APIs that can leverage SIMD instructions. Implemented as part of an experimental JVM called Panama [9], SIMD instructions are used in immutable vector types, parameterized by element type and size. Similarly to JVI, Panama also suffers from limited support of vector ISAs, and requires a specific JVM. Both approaches abstract SIMD instructions, which limits the ability of a developer to tune the code to a particular microarchitecture.

**Autovectorization in the JVM.** Initially introduced in the Jikes RVM [4], SLP [11] based autovectorization is used by the HotSpot JVM. SLP is limited and is only able to vectorize basic blocks consisting of groups of isomorphic instructions, generating SSE and AVX code. Partial support of FMA and AVX-512 is only planned for Java 9 [9].

Figure 7. Performance analysis: Variable Precision.
**Support of low-level code in the JVM.** Sulong [18] is a system to execute low-level languages such as C/C++, Fortran, Ada, and Haskell in the JVM. Sulong is capable of handling low-level languages that compile to LLVM, using an LLVM IR interpreter built on top of the Truffle framework [31] and running on the Graal VM. While this approach can bring support for low-level instructions to the JVM, it does not support SIMD instructions, as Graal does not provide sufficient analysis for vectorization. Furthermore, due to interpretation, Sulong is shown to be outperformed by native compilers such as gcc.

**Automatic generation of DSLs in LMS.** DSLs have been generated for LMS before. Yin-Yang [10] automatically generates deep DSL embeddings from their shallow counterparts by reusing the core translation. Forge [28] generates DSLs from a declarative specification. None of these approaches has been challenged to generate DSLs of the scale imposed by the large number of SIMD intrinsics, nor were they designed to automatically infer effects of mutability.

**SIMD intrinsics in LMS.** Limited support of SIMD instructions has been introduced while abstracting vector architectures [27]. This approach has been used in generating libraries for high-performance code, but integration with the JVM has not been demonstrated. On an even lower level, LMS has been used to define domain-specific ISAs by generating specialized hardware [6, 7].

6 Conclusion

Our work shows how metaprogramming techniques can be used to bridge the gap between high-level managed languages and the need to access low-level instructions in high-performance code development. Specifically, we showed how to provide access to SIMD intrinsics in the HotSpot JVM, thus eliminating the need to write C/C++ code. Two key techniques underlie our approach. First is the use of embedded DSLs to express intrinsics inside JVM languages such as Scala.
These are generated directly from the vendor XML specification, which enables complete intrinsics support and fast updates in the future. Second is the use of staging to convert SIMD intrinsics interspersed with Scala code into high-performance C kernels, which are then compiled and linked via JNI. The challenge in our work is in the systematic handling of large sets of functions, converting them into sets of DSLs, automatically inferring their side effects, and creating a compiler and code generation pipeline for convenient and productive development. We show how the SIMD support in the JVM can be used to build powerful high-level and low-level abstractions while offering significant, often manifold speedup over the autovectorizing HotSpot JIT compiler.

A Artifact Description

Submission and reviewing guidelines and methodology: http://cTuning.org/ae/submission-20160509.html

A.1 Abstract

To reproduce the results presented in our work, we provide an artifact that consists of two parts:

- lms-intrinsics, a precompiled jar library that includes all Intel-based SIMD intrinsics functions, implemented as Scala eDSLs in LMS.
- NGen, a runtime implemented in Scala and Java, that enables the use of lms-intrinsics in the JVM and includes the experiments discussed in our work.

The SIMD-based eDSLs follow the modular design of the LMS framework and are implemented as an external LMS library, separated from the JVM runtime. This allows the standalone use of lms-intrinsics, enabling LMS to generate x86-vectorized code outside the context of the JVM. The JVM runtime (NGen) demonstrates the use of lms-intrinsics by providing the compiler pipeline to generate, compile, link, and execute the LMS-generated SIMD code, and has a strong dependency on this library. The experiments included in the artifact come in the form of microbenchmarks.
While the most convenient deployment for this artifact would have been a Docker image through Collective Knowledge, we decided to eliminate the overhead imposed by containers and provide a bare-metal deployment that aims at producing results as precise as possible for our tests. To achieve that, we use SBT (Simple Build Tool) to build and execute our experiments.

A.2 Description

A.2.1 Check-List (Artifact Meta Information)

- **Algorithm:** Using SIMD intrinsics in the JVM. Experiments include the dot product on quantized arrays and the BLAS routines SAXPY and Matrix-Matrix-Multiplication.
- **Compilation:** lms-intrinsics is a precompiled library, compiled with Scala 2.11, and is available as a jar bundle, accessible through Maven. NGen requires Scala 2.11 and Java 1.8 for compilation. Both NGen and lms-intrinsics generate C code that is compiled with GCC, ICC, or LLVM.
- **Transformations:** To make SIMD instructions available in the JVM, NGen uses LMS as a staging framework. The user writes vectorized code in the Scala eDSL, and NGen stages the code through multiple compile phases before execution.
- **Binary:** lms-intrinsics is a jar bundle. NGen includes binaries for SBT v0.13.6, as well as a small library for CPUID inspection and Sigar v1.6.5_01 (System Information Gatherer And Reporter, https://github.com/hyperic/sigar) binaries. NGen has various dependencies on precompiled libraries that include Bridj, Apache Commons, ScalaMeter, Scala Virtualized, LMS, and lms-intrinsics. SBT automatically pulls all dependencies and their corresponding versions.
- **Data set:** Our experiments operate on random data, requiring no data set.

A.2.3 Hardware Dependencies

lms-intrinsics as well as NGen are able to generate C code that can run on any x86 and x86-64 architecture supporting Intel ISAs. However, the full set of our experiments requires at least a Haswell machine.
Namely:

- **SAXPY and MMM** are implemented using the AVX and FMA ISAs, and therefore require at least a Haswell-enabled processor; Broadwell, Skylake, Kaby Lake or later will also work.
- The dot product of the quantized arrays relies on the AVX2 and FMA ISAs, but also uses the hardware random number generator, requiring the RDRAND ISA, as well as FP16C to deal with half-precision floats.

We recommend disabling the Intel Turbo Boost and Hyper-Threading technologies to avoid the effects of frequency scaling and resource sharing on the measurements. Note that these technologies can easily be disabled in the BIOS settings of machines that have accessible BIOS firmware. Many Apple-based machines, such as the MacBook, do not have a user-accessible BIOS firmware, and can only disable Turbo Boost using external kernel modules such as Turbo Boost Switcher (https://github.com/rugarciap/Turbo-Boost-Switcher).

A.2.4 Software Dependencies

lms-intrinsics is a self-contained precompiled library, and all of its software dependencies are handled automatically through Maven tools such as SBT. To build and run NGen, the following dependencies must be met:

- Git client, used by SBT to resolve dependencies.
- Java Development Kit (JDK) 1.8 or later.
- C compiler such as GCC, ICC or LLVM.

After installing the dependencies, it is important to have the binary executables available in $PATH, so that the SBT tool is able to run all compilation phases as well as execute the experiments. Make sure that the following commands work in your terminal:

```
> git --version
> gcc --version
> java -version
> javac -version
```

It is also important to ensure that the installed JVM has an architecture that GCC can compile for. This is particularly important for Windows users: the 32-bit MinGW port of GCC will fail to compile code for a 64-bit JVM.
A.3 Installation

The artifact can be cloned from the GitHub repository:

```
git clone https://github.com/astojanov/NGen
```

The artifact already includes a precompiled version of SBT. Therefore, to start the SBT console, we run:

```
cd NGen
# For Unix users:
./bin/sbt/bin/sbt
# For Windows users:
bin\sbt\bin\sbt.bat
```

Once started, we can compile the code using:

```
> compile
```

Once invoked, SBT will automatically pull lms-intrinsics as well as all other dependencies and start the compilation.

A.4 Experiment Workflow

Once SBT compiles the code, we can proceed with evaluating our experiments through the SBT console. To inspect the testing machine through the NGen runtime, we use:

```
> test-only cgo.TestPlatform
```

The runtime will inspect the CPU, identify available ISAs and compilers, and inspect the current JDK. If the test platform is successfully identified, we can continue with the experiments.

**Generating SIMD eDSLs.** The lms-intrinsics bundle includes the automatic generator of SIMD eDSLs, invoked by:

```
> test-only cgo.GenerateIntrinsics
```

The Scala eDSLs (coupled with statistics) will be generated in the Generated_SIMD_Intrinsics folder.

**Explicit vectorization in the JVM.** To run the experiments depicted in our work, we use:

```
> test-only cgo.TestSaxpy
> test-only cgo.TestMMM
> test-only cgo.TestPrecision
```

In the case of SAXPY, if the testing machine is not Haswell-based, we provide an architecture-independent implementation of SAXPY:

```
> test-only cgo.TestMultiSaxpy
```

Each result shows the size of our microbenchmarks and the obtained performance in flops/cycle.

### A.5 Evaluation and Expected Result

In the evaluation of the experiment workflow, we expect LMS to produce correct vectorized code using lms-intrinsics. Furthermore, we expect the performance results to be consistent with the results shown in this paper, outperforming the JVM on the microarchitectures that support our experiments.
Finally, we expect the automatic generation of eDSLs to be easily adjustable to subsequent updates of the Intel Intrinsics specifications.

### A.6 Experiment Customization

There are many opportunities for customization. We can use NGen to easily develop vectorized code, and we can use ScalaMeter to adjust the current benchmarks.

**Developing SIMD code.** The NSaxpy.scala class, available in src/ch/ethz/acl/ngen/saxpy/, provides detailed guidelines for the usage of SIMD in Scala. Following the comments in the file, as well as the structural flow of the program, one can easily modify the skeleton to perform other vectorized computations.

**Customizing Benchmarks.** Each performance experiment uses ScalaMeter and is implemented as a Scala class. The Matrix-Matrix-Multiplication benchmark, for example, is implemented in BenchMMM.scala, located in src/ch/ethz/acl/ngen/mmm/. The implementation allows changes to various aspects of the benchmarks, including the size and the values of the input data, warm-up times, different JVM invocations, etc.

### Acknowledgments

This research was partially supported under NSF awards 1553471 and 1564207, and DOE award DE-SC0018050. The authors also wish to thank Kevin J. Brown for the inspiring discussion that led to the creation of our low-precision ISA.

### References
**Sync Kit: A persistent client-side database caching toolkit for data intensive websites**

The MIT Faculty has made this article openly available.

<table> <tbody> <tr> <td><strong>As Published</strong></td> <td><a href="http://dx.doi.org/10.1145/1772690.1772704">http://dx.doi.org/10.1145/1772690.1772704</a></td> </tr> <tr> <td><strong>Publisher</strong></td> <td>Association for Computing Machinery</td> </tr> <tr> <td><strong>Version</strong></td> <td>Author’s final manuscript</td> </tr> <tr> <td><strong>Citable link</strong></td> <td><a href="http://hdl.handle.net/1721.1/63112">http://hdl.handle.net/1721.1/63112</a></td> </tr> <tr> <td><strong>Terms of Use</strong></td> <td>Creative Commons Attribution-Noncommercial-Share Alike 3.0</td> </tr> <tr> <td><strong>Detailed Terms</strong></td> <td><a href="http://creativecommons.org/licenses/by-nc-sa/3.0/">http://creativecommons.org/licenses/by-nc-sa/3.0/</a></td> </tr> </tbody> </table>

Sync Kit: A Persistent Client-Side Database Caching Toolkit for Data Intensive Websites

Edward Benson, Adam Marcus, David Karger, Samuel Madden
{eob,marcua,karger,madden}@csail.mit.edu
MIT CSAIL

ABSTRACT

We introduce a client-server toolkit called Sync Kit that demonstrates how client-side database storage can improve the performance of data intensive websites. Sync Kit is designed to make use of the embedded relational database defined in the upcoming HTML5 standard to offload some data storage and processing from a web server onto the web browsers to which it serves content. Our toolkit provides various strategies for synchronizing relational database tables between the browser and the web server, along with a client-side template library so that portions of web applications may be executed client-side.
Unlike prior work in this area, Sync Kit persists both templates and data in the browser across web sessions, increasing the number of concurrent connections a server can handle by up to a factor of four versus that of a traditional server-only web stack and a factor of three versus a recent template caching approach.

Categories and Subject Descriptors: H.2 [Information Systems]: Database Management

Keywords: Cache, Client-side, Web, Browser.

1. INTRODUCTION

To support the increasingly sophisticated and feature-rich applications “hosted” on the web today, the programmers who deploy them must run complex and sophisticated server infrastructures. Unlike desktop applications, where all of the processing and computation is done locally, most web applications rely on the server to fetch and process data and, often, to render that data into HTML for presentation on the client. Building servers that can scale is a tremendous challenge, of which a significant component is managing load against back-end database systems. Indeed, many companies (most famously Twitter) have experienced widely publicized database-system failures due to their tremendous growth, and have invested great effort into sophisticated database partitioning and caching infrastructures to reduce and spread database load.

The tight coupling of applications to a web server can also generate significant latency in user interactions. Web applications are located “far” from the data they present to users, and each data item may take many milliseconds to retrieve as requests are sent to servers in different parts of the world. When many data items are retrieved, latencies can grow to be seconds long. Asynchronous technologies such as AJAX allow web applications to remain responsive during these requests, but designing highly responsive web applications in the face of these latencies, especially on bandwidth-impaired devices such as phones, remains a challenge.
One way to address the database load problem and the latency problem would be to offload data processing to web browsers. If browsers could access some or all of the data needed to satisfy a request from a local cache, then pages would load faster and server-side databases would do less work. This is especially true in applications such as Facebook or Twitter, where the same data items (e.g., posts on a user’s news feed) are sent again and again as users reload pages in search of new messages. In such scenarios, traditional web caching methodologies are too coarse-grained: web pages may share a significant amount of data across repeated accesses, but page content is still dynamic. Fortunately, the upcoming HTML5 standard includes a client-side persistent database API that makes it possible to instantiate and store databases inside the web browser. The standard specifies that this store is to be a SQL-compliant database accessible through a JavaScript API. Though it was initially designed to support offline access to web applications via cached data, the same technology can be used to offload data processing and reduce server-communication latency even when the client is online. To exploit this technology, however, application developers have to manually manage the cache, modifying their queries to take explicit advantage of the client-side data, writing code to determine whether cached results are still valid, and merging cached results with updates from the server. To address this complexity, we built a toolkit, Sync Kit, that allows application developers to easily and efficiently take advantage of client-side databases. Sync Kit provides a library of data structures for commonly used synchronization patterns (e.g., a queue of objects, each of which corresponds to one result in a database query) that programmers use to express their application.
When these structures are used, Sync Kit generates code for the client that populates a client-side cache of recently accessed items and reuses the contents of the cache to produce web pages, fetching new or uncacheable results from the backend server automatically. Because database results are cached client-side, Sync Kit provides a templating library that allows programmers to describe how to generate a web page from the cached query results. Furthermore, Sync Kit’s caches can be shared across a user’s sessions; for example, if a user quits her browser and then restarts it hours later, the cache from the previous session can still be used. In summary, the contributions of this work are to:

- Identify data access patterns that are widely used on the web, and build corresponding data structures that programmers can use to build their applications while benefiting from client-side database caching that achieves the same level of data consistency as their original applications.
- Demonstrate that these data structures, when used properly, can reduce server load. On two realistic benchmarks based on a blog and a wiki, Sync Kit reduces server load by a factor of three versus our implementation of a recently published template-caching approach and a factor of four versus the traditional web hosting stack. The data structures also significantly reduce data transfer to the client, to as little as 5% of that of the traditional stack.
- Show that these ideas can be integrated into Sync Kit, a practical and easy-to-use web programming toolkit that requires no browser modifications and can be implemented via a simple app-server framework that runs on the server.

2. ARCHITECTURE AND OVERVIEW

In this section, we describe the basic operation of Sync Kit, including the programming model it presents to developers and the flow of data as it presents a page to users.
2.1 Programming Model Programmers using Sync Kit are required to design two types of structures: data endpoints, which describe the data that will be used to render a web page, and templates that describe how to generate HTML from data endpoints. In our examples, we use the SQL language to describe data served by data endpoints, but in practice different web programming frameworks (e.g., Django, Ruby) may provide their own query language. The queries and data structures are written in terms of database tables to which the server has access. In the remainder of this section, we introduce our programming model through a series of examples. 2.1.1 Data Endpoints An example data endpoint for a simple blogging application is shown in Figure 1; it defines two queries, entry_data and tag_data. entry_data defines a list of the ten most recent blog entries that will be presented to the user, and tag_data defines a set of tags describing those entries. Note the use of the %entry_data% syntax that makes it possible for one data endpoint to reference another as a nested query. In this example, we say that entry_data, whose contents decide the results of the tag_data query, is the parent of tag_data. The resulting relations from these endpoint queries can then be used in templates. For example, the template shown in Figure 2 constructs an HTML page that contains one block on the page for each blog entry and its tags. Note that templates also include SQL, where the tables that are accessible to these queries correspond to the endpoints defined in the endpoint definition. 
```python
def blogDataEndpoint(clientRequestData):
    entry_data = QUERY("SELECT id, author, title, body, lastModified FROM entries WHERE published = True ORDER BY lastModified DESC LIMIT 10")
    tag_data = QUERY("SELECT entryid, tagid, tagstring FROM tags WHERE entryid IN (SELECT id FROM %entry_data%)")
    return HTTPResponse(to_json([entry_data, tag_data]))
```

Figure 1: An example data endpoint containing two data sets, entry_data and tag_data. The endpoint returns JSON data to the client to fill in a template.

2.1.2 Templates

Data endpoints are made visible to the web user by combining them with templates. These templates work much like the templates used in any popular server-side web programming environment, except that they are executed on the client side, using a JavaScript template parser, instead of being executed on the server. Though not as popular as server-side solutions, a number of client-side template libraries are currently in use on the web [14, 5, 3], all of which operate around the same principle: some data object, expressed in JSON, is to be combined with a declarative template to produce the final HTML for a page. XSLT [11] also provides a standard way to transform structured data results into web pages. Any of these existing client-side template languages could be adapted to work with Sync Kit’s manner of operation; Sync Kit simply needs some way to combine data results and template code once the client is in possession of both. Our current template library is based on the HTML5 Microdata specification. This specification provides a way to use HTML attributes to relate data entities and their properties to regions of the web document, much like RDFa. Other HTML5 additions allow for the creation of custom tag attributes, which we use to supplement this semantic markup with the types of simple control structures that all template languages must provide. Figure 2 provides an example of what a Sync Kit template looks like for a table of blog entries.
We make use of both upcoming HTML5 tags and attributes in this example.

```html
<section id="blog_posts"
         data-query="SELECT title, author, body FROM entry_data
                     ORDER BY lastModified DESC LIMIT 10"
         data-as="Entry">
  <article itemscope itemtype="Entry">
    <h2 itemprop="title">Entry</h2>
    By <span class="author" itemprop="author"></span>
    <div class="contents" itemprop="body"></div>
  </article>
</section>
```

Figure 2: A Sync Kit template for a list of blog entries. The template uses data endpoints from Figure 1 as tables referenced from SQL queries.

In Figure 2, we see an ordinary HTML fragment decorated with tag attributes that define the way in which data from the client-side database (having been synchronized using the data endpoints previously specified) should fill the template. On the section element, the data-query attribute specifies the SQL query to perform and the data-as attribute provides a name with which to reference a row from the result (Entry). Later, an item is defined (using the itemscope attribute) that has the same class type as the query result rows; this lets the template library know that this portion of HTML is to be repeated for each row of the result. The template library then iterates over the rows of the Entry result, filling in the template for each row, resulting in the output shown in Figure 3, truncated to a single blog post. This output is both a properly rendered HTML fragment and a machine-readable HTML5 Microdata document.

```html
<section id="blog_posts">
  <article itemscope itemtype="Entry">
    <h2 itemprop="title">My Thoughts on HTML5</h2>
    <p itemprop="author">Tim Berners-Lee</p>
    <div class="contents" itemprop="body">
      The HTML5 working group has been...
    </div>
  </article>
</section>
```

Figure 3: The output of the template in Figure 2.

2.1.3 Putting the Pieces Together

Using the Sync Kit framework, web developers do not have to veer far from their current mental model of operation.
Web developers are used to thinking about their programs as being run on the server and delivered to the client. Under this new model, the same remains true, but instead of delivering a rendered page, the server delivers a tuple that specifies both the template to be rendered and the data endpoints that provide the data to fill the template. Using the blog example, this tuple is ("blog.html", [entry_data, tag_data]). The Sync Kit framework handles the rest, ensuring data synchronization between client and server databases, template operations on the client, and caching using both HTTP and HTML5 Manifest caching schemes. We now describe how this synchronization is actually performed, resulting in a substantial reduction in load and bandwidth on the server.

2.2 Execution Model

At a high level, Sync Kit execution is handled by the Sync Kit server, which runs inside the web server (as a collection of scripts for the Django [1] web framework, in our implementation), and the Sync Kit client library (written in JavaScript), which is run by the client browser. We first describe Sync Kit's operation with no template or data caching. When the browser requests a Sync Kit-enabled page, the Sync Kit server sends it the template, as in Figure 2. Complex pages are stored across multiple templates, which are stitched together. This template also references Sync Kit's client library, synckit.js, and contains all data endpoint definitions used by the template. The Sync Kit client library registers a callback on the browser's `onLoad` event and takes control once the page template has loaded in the browser. The client library then sends an asynchronous HTTP request to the server, requesting the current data for all endpoints in the page template. The server sends back a JSON object containing this data, which the client library uses to populate the template it received. So far, we have described a mode of operation that is essentially the way most AJAX applications work.
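The uncached request flow just described can be sketched as a pair of server responses. The handler below is an illustrative stand-in, not Sync Kit's actual Django code; the `ENDPOINTS` registry, the sample rows, and the function name are our assumptions:

```python
import json

# Hypothetical endpoint registry: each name maps to a function that runs
# that endpoint's query against the server-side database.
ENDPOINTS = {
    "entry_data": lambda: [{"id": 1, "title": "My Thoughts on HTML5"}],
    "tag_data": lambda: [{"entryid": 1, "tagid": 7, "tagstring": "html5"}],
}

def handle_request(wants_data_only=False):
    """First request: serve the (cacheable) template.
    Follow-up AJAX request from synckit.js: serve endpoint data as JSON."""
    if not wants_data_only:
        return ("template", "blog.html")
    payload = {name: run() for name, run in ENDPOINTS.items()}
    return ("json", json.dumps(payload))
```

The client library invokes the second branch from its `onLoad` callback and fills the cached template with the returned JSON.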
From a bandwidth and server load perspective, this approach is only marginally more efficient than a "traditional" architecture in which the server is responsible for populating the template. The AJAX approach reduces bandwidth consumption slightly when, for example, the size of a repeated template is large compared to the size of the data used to populate each instance of it. There are two opportunities for performance gains through caching, however, which can dramatically improve the situation. First, if a template has been fetched before, it does not need to be fetched again for some time. In some cases, this can result in significant bandwidth savings, as was noted by Tatsubori and Suzumura [28]. The second and more interesting opportunity for caching arises with data endpoints. HTML5 provides facilities for the browser to store relational data in a persistent client-side database, which can be used to store the data endpoint results the client fetches. Rather than re-fetching the entire data endpoint, the client can request from the Sync Kit server only the contents of the endpoint that have changed (for example, the new blog posts written since it last contacted the server). It can then combine these new results with the records cached in the client-side database and use the combined results to populate the template. We describe how this caching works in more detail in Section 3. Figure 4 compares the traditional server-side form of web hosting (Figure 4a) with a template-caching approach due to Tatsubori and Suzumura [28] (Figure 4b) and with Sync Kit (Figure 4c), which caches both data and templates.

3. SYNCHRONIZATION STRUCTURES

The previous section described our approach to caching: the browser caches endpoint results, and endpoints issue queries to fetch only the results that have changed since the last cache update. Realizing this approach is difficult for two reasons.
First, identifying the cached portion of a query requires semantic analysis of schemas and queries to determine the portion of a new query that intersects with previously cached results. Second, rewriting queries to use caches in a way that actually reduces load on the server is a known hard problem in the database community. The main challenge is that the simplest way to reuse a cached result is to rewrite the query to include a complex set of WHERE predicates that exclude all of the tuples in the cache; such predicates slow query execution because they must be evaluated on each tuple, and they often do not reduce the data the database has to read from disk unless appropriate indices happen to be available. This problem is usually called semantic caching and has been well studied (e.g., [27, 18, 26, 16, 19], among many others). Jónsson [26] shows that existing database systems do not perform well when presented with arbitrary, complex queries to retrieve cached results. As a result, most high-performance semantic caching systems require modifications to the backend database to keep track of changes since the last access, something we wanted to avoid in Sync Kit. In Sync Kit, we take a simpler approach that requires no modifications to the database: programmers write their data endpoints in terms of data structures that make it easy to determine what has changed since they were last loaded. Sync Kit currently provides two such synchronization structures: queues and sets. We leave other possible synchronization structures to future work.

3.1 Queues

Queues capture the idea that results are ordered on some attribute (say, time) and that this ordering reflects the way the client will access the data. This makes it easy for the client library to fetch data items that have been created or changed since the last page load by simply sending the maximum (or minimum) value currently in the client-side cache of the queue.
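The client-side bookkeeping this requires is small: the cache need only remember its rows and the extreme value of the ordering attribute. The sketch below is our own illustration; `QueueCache` and its method names are not Sync Kit's actual API:

```python
# Minimal sketch of client-side bookkeeping for a queue endpoint: the cache
# stores rows keyed by id plus the maximum ordering value seen so far, which
# is all that must be sent to the server on the next load.
class QueueCache:
    def __init__(self):
        self.rows = {}      # id -> row dict
        self.max_on = None  # maximum value of the ordering attribute seen

    def merge(self, new_rows, on="lastModified"):
        """Fold freshly fetched rows into the cache, updating the maximum."""
        for row in new_rows:
            self.rows[row["id"]] = row
            if self.max_on is None or row[on] > self.max_on:
                self.max_on = row[on]

    def sync_token(self):
        """The single value sent with the next request to this endpoint."""
        return self.max_on
```

After each merge, the template query is re-run against the cached rows.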
This abstraction is particularly useful for synchronizing information that fits the 'feed' format, which characterizes a number of websites, including any news website (e.g., nytimes.com or slashdot.org), email sites (e.g., gmail.com), and social networking sites that list updates to friends' statuses (e.g., facebook.com or twitter.com). To see how queues are programmed in our model, consider the blog endpoint from Figure 1. Suppose the programmer knows that blog entries are always accessed in time order by the template in Figure 2. She can then declare that the blog entry endpoint is a queue, as follows:

```
entry_data = QUEUE(on      = "lastModified",
                   table   = "entries",
                   order   = "DESC",
                   include = "id, author, title, body, lastModified",
                   filter  = "published = True",
                   limit   = 10)
```

Here we see a queue synchronization object built around the entries table in reverse order by the lastModified field. The queue is limited to 10 items and contains further information, such as which projected fields to include and how to filter the queue inputs (in this case, only entries whose published flag is set to True). The synchronization specification is similar to the SQL APIs offered by web toolkits such as Ruby on Rails [6] and Django [1], but rather than creating a query result set, it defines a data endpoint capable of synchronizing the result set over multiple web sessions with a minimal amount of information exchange. The first time the client library loads this endpoint, a SQL query identical to the one shown in Figure 1 will be run. The Sync Kit server will also send the timestamp at which these results were fetched from the database, the table schema that the client-side database should maintain, and the parameters that define the queue synchronization structure. Subsequently, whenever the client reloads this endpoint, it provides the maximum lastModified value in its version of the synchronized entry_data endpoint.
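When the client supplies this maximum lastModified value, the server can narrow its query accordingly. The following is a minimal sketch of that server-side step using Python's sqlite3 for illustration (the paper's implementation sits behind Django and Postgres); `sync_entry_queue` is a hypothetical name, not Sync Kit's API:

```python
import sqlite3

def sync_entry_queue(conn, client_max=None, limit=10):
    """Fetch the queue window; if the client reports the maximum
    lastModified value it has cached, fetch only newer rows."""
    sql = ("SELECT id, author, title, body, lastModified FROM entries "
           "WHERE published = 1")
    params = []
    if client_max is not None:
        sql += " AND lastModified > ?"   # delta predicate added on re-sync
        params.append(client_max)
    sql += " ORDER BY lastModified DESC LIMIT ?"
    params.append(limit)
    return conn.execute(sql, params).fetchall()
```

On a first load `client_max` is absent and the full window of `limit` rows is returned; on later loads only rows newer than the client's cache are fetched and merged client-side.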
The server will add a predicate to the WHERE clause of the query so that it retrieves only data more recent than the client's lastModified value. If few entries have been added to the blog since the last client visit, the server will fetch fewer results, requiring less work if the entries table is properly indexed; it will also send less data to the client. Upon receipt of new data, the client-side library merges the new results with those already in the browser-side database. Once the client-side database state has been refreshed with the current data, the template library runs: it fetches the top 10 blog entries from the client-side database and fills out the cached template with them. Finally, the client can discard entries outside the new top 10 from the browser's database, as discussed in Section 3.5.

3.2 Sets

We now turn to sets, the other abstraction Sync Kit provides for client-server data synchronization. Sets capture a basket of data that is unordered from the perspective of the client. Each data item in the basket is identified by some key, a role usually served by a primary key in a relational database. Examples of such sets are products in a web-based store, movies and actors on the website IMDB, pages on wikis, and the tags used to describe blog posts in the previous example. Sync Kit maintains a notion of two different types of sets. Complete sets are actively transferred in their entirety to the client: after synchronizing with a complete-set endpoint, the client is guaranteed to hold the entire set described. Attributes of members of the set may change, and items can leave or enter a complete set over time. One example of a complete set is the tags in the previous example; the number of tags on a blog is small enough that the set can be sent to the client in its entirety.
On the other hand, one would not want to transfer the entire set of pages in a large site such as Wikipedia to the client the first time a user requests a single page. Partial sets contain members that are lazily transferred to the client on a primary key lookup.

3.3 Complete Sets

Complete sets are a useful abstraction for relatively small collections of data that see frequent client use but do not fit the access pattern defined by the queue structure. Because access is random and frequent, and the cost of transferring the entire set is low, the client and server coordinate to ensure that the client has a fresh copy of all set items. For example, suppose our programmer from the queue example would like to add tags to the blog entries view, as in Figure 1. Recall that the tag_data endpoint requires a nested query against its parent endpoint, entry_data. Our complete set of tags is defined by the tags table together with the entries the user will see, which determine the tags required:

```
tag_data = SET(type    = "complete",
               table   = "tags",
               parent  = [entry_data, "entryid = entry_data.id"],
               key     = "tagid",
               include = "entryid, tagid, tagstring")
```

This definition mirrors that of tag_data in the blog data endpoint of Figure 1. We define the set to be a complete replica of the table tags for any tag involved in an equality join on the entry_data result set. We will request tags by the key tagid and include the fields entryid, tagid, and tagstring from the table. When a client first queries the data without having previously synchronized entry_data and tag_data, the server constructs the query in Figure 1. On subsequent visits, the client sends the entry_data and tag_data requests along with a list named tag_data.tagids, containing the tagid values already on the client. The server then constructs the query in Figure 1 with an additional WHERE clause predicate indicating that only tags not already on the client should be sent:

```
AND tagid NOT IN (tagid1, tagid2, ...)
```

3.4 Partial Sets

Partial sets represent a set of items for which the server does not attempt to maintain a fully synchronized copy on the client. This structure is suited to cases where the set of items is too large to reasonably store client-side. Wiki articles are a good example: we would model the corpus of articles on Wikipedia as a partial set. A table of pages could be synchronized in the following way:

```
wiki_data = SET(type    = "partial",
                table   = "wiki_pages",
                key     = "id",
                include = "id, title, contents")
```

This definition indicates that the endpoint wiki_data can be maintained by performing id lookups on the server as the client needs them, and that whenever a desired id is requested, the id, title, and contents of the wiki page should be delivered. Whenever client-side logic requires a given wiki page, the wiki_data synchronization set first looks for the page id in the client's local database. If the id is found, it can be returned to the user; if not, the client issues an AJAX request for the page to be delivered.

3.5 Eviction and Consistency

Three remaining concerns must be addressed for synchronization structures to parallel the current web experience: keeping the client-side database small, ensuring that the client data mirrors the most up-to-date data on the server, and enforcing endpoint constraints on templates. Because the client cannot support a database of the magnitude that the server can, we define an eviction policy for data in the client-side database. A simple LRU eviction policy works reasonably well, with some caveats. For queues, evicted entries can only be those outside a distance of limit from the maximum lastModified date of items in those queues. For complete sets, any eviction must mark the entire set stale, to ensure that future queries are directed to the server to re-synchronize the data structure.
Finally, for partial sets, no special precautions are required: evicted elements will simply be re-requested from the server. To ensure that the client remains up to date with the server, any modifiable entries in the synchronization structures must be marked as such, and a last-modified time or version must be attached to each entry in the underlying server-side table. Every request to an endpoint containing modifiable items must send a last-access time, denoting when the client last accessed that endpoint. Of the results returned by an endpoint during synchronization, only those with a modification time or version newer than the client's last-access time are sent to the client. Sync Kit currently provides no guarantee that the query executed in a template will be consistent with the endpoint definition. For example, in Figure 2, if the programmer modifies the query to "LIMIT 100," the client-side cache of the entry_data table is only defined (and guaranteed) to contain the 10 latest records. The programmer must take care to define the view using the same SQL expression on the server and client. As we discuss in Section 6, we are investigating techniques to automatically generate client-side code from a server-side SQL query, partly to eliminate the possibility of a mismatch in these view definitions.

4. PERFORMANCE EVALUATION

In this section we compare the performance of Sync Kit to our implementation of the Flying Templates [28] approach and to traditional server-side web hosting. In our consideration of the benefits of the various approaches, we look at connection throughput, data transferred per request, and the client-side latency of each approach. Our experiments consider two websites we built with Sync Kit: a blog site, in which users revisit the page throughout the day looking for news updates, and a wiki, where users browse a connected graph of web pages, potentially revisiting some pages over time.
We built these websites using content update and hyperlink models drawn from real websites.

4.1 Experimental Environment

The server-side programming environment is the Python-based Django [1] 1.1 web framework. We use nginx as the web server, serving static content directly and dynamic content over FastCGI to running Django instances. The web server runs Ubuntu 9.10 (kernel 2.6.31) and has an Intel Core 2 processor with four 2.4 GHz cores, 8 MB of L2 cache, and 4 GB RAM. The database, which is on the same machine, is Postgres 8.4.2. For the throughput and data transfer tests, our client machine has a 3.2 GHz Intel Pentium 4 and 2 GB RAM. The two machines are connected over a local network with a link bandwidth of 112 MB/s (reported by netperf) and a round-trip time of 1.516 ms to transfer 1 byte of data over HTTP. We also ran in-browser timing tests on a netbook running Microsoft Windows XP and Mozilla Firefox 3.5 over a local network with a round-trip time of 3 ms to transfer 1 byte of data. For throughput tests, the client gradually increased its request rate using httperf until it identified the point at which the server stopped responding to all requests with HTTP 200/OK.

4.2 Benchmarked Systems

In assessing Sync Kit, we compare three systems:

Traditional. All template and data processing is performed on the server. Controller logic on the server queries a server-side database, and the results are filled in on a server-side template, which delivers HTML to the client. The process is implemented with standard components in the Django web development framework.

Flying Templates. When a user first visits a site, they retrieve a template, which is subsequently cached. The template issues AJAX requests to the server, which queries the server-side database and returns results to the client as JSON. Client-side JavaScript then fills in the template with the returned data.
Django is used to generate the result set as JSON, and we wrote a custom JavaScript library for filling in the static template. This system is similar to the one described in the work of Tatsubori and Suzumura [28], although the implementation is our own.

Sync Kit. When a user first visits a site, they retrieve a template, which is subsequently cached. Like Flying Templates, HTML generation from the template is performed on the client side, and data is retrieved from the server. Unlike Flying Templates, the JavaScript library initializes a client-side database using Google Gears [2] in which all data is stored and which is synchronized with the server using the managed data structures described in Section 3. We selected Gears because the HTML5 standard is still in flux, and as of this writing no browser implements both the HTML5 data and caching proposals completely.

4.3 Benchmarks

We implemented our blog and wiki websites for the three systems listed above. For both sites, we built a benchmark based on a sample dataset and a sample workload drawn from real websites. We used httperf to determine the performance of each workload on each of the three systems. Overall, the total number of lines of code written to implement the blog and wiki sites was roughly the same across all three approaches (typically within a few lines of code), excluding the included Sync Kit libraries. This is significant because it suggests that the Sync Kit approach can be made practical from a programming standpoint.

4.3.1 Blog Benchmark

Blogs are representative of a queue-heavy workload: when a user visits a blog's front page, around ten of the most recent stories are displayed. A user who visits frequently will see some new stories and some older, repeated ones. Such experiences occur on sites beyond blogs; web search, web-based email, and social networking sites such as Facebook or Twitter are all similar.
In order to generate a representative workload, we modeled our benchmark on popular blogs in the wild. We requested the latest RSS feed from several popular blogs and report their time between posts and post sizes in Table 1. From these, we selected TechCrunch to parameterize a script that loaded a server-side database with three years of randomly generated content, based on a normal distribution of post length \((\mu = 5487, \sigma = 4349)\) and an exponential distribution of time between posts \((\lambda = 0.53\) posts/hour\()\). We reuse the same template of size 100 KB for all three serving strategies. This template consists of basic HTML, CSS, and standard JavaScript libraries, of which Sync Kit is a small fraction. All CSS and JavaScript is inlined. We constructed several client workloads for this site to examine its performance for clients who revisit the site at varying frequencies relative to the update frequency. For the \(i^{th}\) client workload, we modeled users visiting the site over seven days at a rate \(\lambda_i\) relative to the mean time between posts for the blog. We vary \(\lambda_i\) between .008 visits per new post (infrequent visits) and 3.8 visits per new post (frequent visits). For each visit-per-post frequency, we added users until we had generated 10,000 requests; in all cases this resulted in more than 100 users per workload. Testing with a variety of user visit frequencies is useful because it frees our analysis from dependence on the content update frequency that parameterized our blog test data. It is also useful because user visit patterns to a blog tend to be independent of the blog's popularity [20], so a variety of visit frequencies better reflects real-world workloads. The first time a user visits the site, both Sync Kit and Flying Templates request and cache the template for the site.
To model this, we made half of the users new to the site, causing their first request to include both data and template requests. Varying the fraction of new users did not significantly affect the performance differences between systems. On each visit, the client requests the latest 10 articles. To simulate time, each client sends a `currenttime` parameter to the server, indicating the time at which the page is requested. For the Traditional and Flying Templates approaches, a SQL query of this form is issued on the server side:

```
SELECT id, author, title, contents, lastModified
FROM articles
WHERE lastModified < CLIENT_PARAMS['currenttime']
ORDER BY lastModified DESC LIMIT 10;
```

The following Sync Kit queue manages the client cache:

```
QUEUE(on      = "lastModified",
      table   = "articles",
      order   = "DESC",
      include = "id, author, title, contents, lastModified",
      limit   = 10)
```

In addition to the `currenttime` argument, the Sync Kit client also sends a `maxclienttime` parameter to the server, indicating the point up to which it has synchronized the dataset. The SQL query issued on the server side is the same as the one above, with the following additional predicate to fetch only results newer than the currently cached ones:

```
AND lastModified > CLIENT_PARAMS['maxclienttime']
```

Table 1: A sample of several popular blogs. Article length is generally an order of magnitude smaller than template size. New articles come out every one to two hours. If a user visits the front page, which displays multiple articles, several times per day, they may see the same article more than once.

### 4.3.2 Wiki Benchmark

If blogs are prototypical representatives of the queue synchronization structure, wikis are good representatives of a set. A wiki (e.g., Wikipedia) can be thought of as a connected graph of primary-keyed data that is too large to send in its entirety to the client.
Because of its size, a wiki is synchronized lazily and thus represents a partial-set synchronization pattern. Note that we do not evaluate complete-set synchronization in this paper: these sets are usually small enough to be synchronized in their entirety, or at least as a nested subquery on queues, and we find their performance characteristics less interesting than those of larger partial sets. To generate the wiki data set, we combined previous studies of content length and link structure, and supplemented these numbers with a random sample of the pages accessed on Wikipedia from publicly available web proxy logs [10]. We then generated 10,000 articles of random content length ($\mu = 3276B, \sigma = 100B$) [9] and title length ($\mu = 22B, \sigma = 12B$) [10], with an average of 23 links per page [8]. To model article popularity, we assigned each article $i$ a probability proportional to $\frac{1}{i+10}$, normalized to form a proper distribution representing hotspots on Wikipedia. Here, lower article numbers are more popular than higher ones, and the +10 prevents the first few articles from dominating all others. This distribution is a good approximation of the actual usage patterns for web resources [15]. Article links were generated such that the source page was selected uniformly at random from the page set, and the target page was selected proportionally to its assigned popularity probability. Finally, to generate a workload over the wiki, we modeled 40 users visiting the site over the course of 15 days, once per day. Within a visit, each user picks an initial page $i$ according to the pages' access probabilities and navigates to linked pages by choosing randomly from the normalized probability distribution of the pages linked from $i$. We assigned each user an exit probability of $.5$ after each view, which ends that day's visit. Because users visit the site 15 times, we can see how repeated accesses to the same page affect the performance of Sync Kit.
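The popularity model above (weight inversely proportional to the article number, offset by the +10 mentioned in the text) can be sketched in a few lines. Function names here are illustrative, not from the paper's benchmark scripts:

```python
import random

def make_popularity(n_articles):
    """Normalized weights: article i gets weight 1/(i + 10), so low-numbered
    articles are hotspots but the first few do not dominate."""
    weights = [1.0 / (i + 10) for i in range(n_articles)]
    total = sum(weights)
    return [w / total for w in weights]

def pick_article(probs, rng=random):
    """Draw one article index according to the popularity distribution
    via inverse-CDF sampling over the cumulative weights."""
    r, acc = rng.random(), 0.0
    for i, p in enumerate(probs):
        acc += p
        if r <= acc:
            return i
    return len(probs) - 1
```

Link targets in the generated wiki are drawn with `pick_article`, while link sources are drawn uniformly.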
The resulting repeated access rate in the three generated workloads was 13.7%–14.9%.

### 4.4 Results

In this section, we describe the performance results on our two benchmarks. For both benchmarks, we generated three random workloads with the parameters in Section 4.3 and report the average performance over the three.

Figure 5: User visit frequency vs. throughput.

Figure 6: User visit frequency vs. KB per request.

In all experiments, the Flying Templates approach provides slightly less than twice the request throughput of the Traditional approach while transferring 100 KB less data per request, as it avoids transferring the template. This result is similar to that shown in [28] on a different query workload. Flying Templates sees a drop in throughput between $\lambda_i = 0$ and $\lambda_i = .5$. This is an artifact of our experiment design: less frequent visitors see a larger share of their traffic come from static templates, which are faster for the web server to serve. For infrequently visiting clients, Flying Templates and Sync Kit perform about the same; Sync Kit is able to cache the template from one visit to the next, but there is a very low probability of any article still being in Sync Kit's data cache on the next page load. For clients who revisit more frequently, however, Sync Kit's cache helps dramatically. At the extreme, Sync Kit is able to serve a factor of four more requests per second than the Traditional approach (464 vs. 116), and nearly a factor of three more than the Flying Templates approach. It also requires around 13.2% of the data transfer of Flying Templates, and around 5.4% of that of Traditional. We now look at latency from the standpoint of a client of the system. We ran client-side requests from a netbook on the same network as the server, with the connection properties described above and $\lambda_i = .31$.
The results are shown in Figure 7; on the X axis are the three systems, with the total height of each bar representing the average latency to load a page in each system. All three approaches have approximately the same client latency (around 400 ms/request). Note that Sync Kit improves server-side performance without hurting the client's experience.

Figure 7: Client latency for blog workload ($\lambda_i = .31$).

To understand how this latency breaks down, we now look at the components of the bars in more detail. Looking at the "server" component, it is clear that Sync Kit substantially reduces the total time spent waiting for data from the server: from 93 ms in the Traditional case to 45 ms in Sync Kit. However, Sync Kit spends an additional 38 ms loading data into the client database and 61 ms populating the template. Note that in all three scenarios, "DOM Load," which represents the time to load the DOM of a page into the browser, dominates the client's latency. To measure DOM Load time, we loaded the page into the browser cache and measured the time until the "onLoad" JavaScript event. All three systems also incur a negligible 3 ms network round-trip overhead. Flying Templates performs similarly; it spends more time waiting for data from the server than Sync Kit, but does not have to populate the client-side database.

### 4.4.2 Wiki Benchmark

We ran the same experiments from the blog benchmark on our set-based wiki benchmark. Figure 8 shows the throughput (top) and mean kilobytes per request (bottom) for the wiki experiment. Since there is no varying visitation rate, we did not vary the time parameter in this experiment, so these are bar charts rather than line charts. From these results, it is evident that Sync Kit retains a large benefit over the Traditional approach, both in a severe reduction in data transfer and in increased throughput.
Sync Kit offers slightly better throughput and slightly lower bandwidth than Flying Templates, due to the 14% cache hit rate for revisited wiki pages per user. This suggests that prefetching might yield further gains, though the ultimate benefit will depend on the predictability of client browsing.

Figure 8: Server throughput (top) and data transfer per request (bottom) for the wiki benchmark.

Figure 9 shows the latency results from a client perspective (again measured from a home computer) for the three systems. The results are similar to those shown in Figure 7: overall, the differences in latency between the three systems are small. Sync Kit spends a little more time than Flying Templates on the more complex queries it runs to send its state to the server, but the difference is negligible. Again, the total time is dominated by DOM Load.

### 5. RELATED WORK

In the Flying Templates [28] system, HTML templates are cached on the client and populated locally. Templates are sent to the client, where they are cached using the browser's native mechanisms. On page load, a JavaScript library attached to each web page queries the server for the appropriate data and combines the data with the page template. The authors show that this technique yields up to a 2x throughput improvement in applications where the HTML is substantially larger than the raw data used to populate a web page. Sync Kit offers the same benefits as Flying Templates, but is also able to cache the data behind a template, in addition to the template itself. We compared explicitly against this approach in Section 4. The Hilda [32, 31, 24] system executes both data operations and template operations on the client side within the context of a single web browsing session.
It samples server log files to determine which templates and data elements are most likely to be accessed within a user’s session, and then preemptively sends those portions of the web application to the client. When browsing the site, pre-fetched portions of the web application can be accessed without contacting the server. Unlike Sync Kit, Hilda requires developers to build their entire web application in an unfamiliar declarative language; this is what enables the system to move computation and data from the server to the client. Hilda does not consider data persistence on the client. Orchestra [17] performs similar partitioning of applications written in Java into a server-side and a client-side component. Ganesh [29] is a caching system for dynamic database data that, rather than exploiting specific properties of application data structures, uses cryptographic hashing to identify portions of query results similar to results that have already been returned, and reuses them. This approach has the advantage that it is transparent to the application developer, but does not exploit in-client caches as we do. There has been considerable work on client-side caching in database systems [30, 22, 21], but this work has typically assumed that there is a stand-alone database application running on the client, rather than pushing caching into the browser. In that line of work, it is assumed that these client-side applications can interface with the backend database below the SQL layer, specifying exactly the tuples or ranges of records that they have in their cache to ensure consistency. Other database caching approaches include the Ferdinand system [23], which uses a proxy to cache database results. A proxy-based approach has the advantage that it can cache data for many users, but introduces privacy concerns and still requires users to go to a server on the Internet to retrieve cached data.
Other database caching techniques—such as DBCache [12], DBProxy [13], and memcached [4]—typically also focus on server-side caches that reduce load on the database but do not have substantial effects on bandwidth and do not push rendering to the client. Conventional web caching systems (e.g., proxy caches [7]) are similar: they offer the advantage that they work for many clients, but they still require the server to expend bandwidth to transfer data to clients. They are also tricky to get to work for dynamic content. Similarly, browsers locally cache static content, but such caches are not effective for the highly dynamic web pages of the sort we consider. As discussed in Section 3, the technique we employ for determining if a result is in the client-side cache is loosely inspired by work in the database community on semantic caching (e.g., [27, 18, 26, 16, 19]). The primary difference is that we constrain the programmer to access data through a set of synchronization data structures that we have devised, allowing us to efficiently determine if a result from the database is available in the cache. Most relevantly, Chidlovskii and Borghoff [16] observe that web applications are characterized by simple queries and as such are amenable to semantic caching, similar to our observation that a few simple data structures are sufficient to allow caching for many web-backed data intensive applications.

### 6. CONCLUSIONS AND FUTURE WORK

In this paper, we introduced Sync Kit, a toolkit that makes it easy for developers to take advantage of the client-side relational databases that will be introduced with HTML5-compliant browsers. Sync Kit uses a simple programming model where users define data endpoints that cache database objects on the client and templates that describe how to render web pages in terms of these endpoints. This approach requires no browser modifications and is implemented as a simple Python- and JavaScript-based library.
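As a concrete illustration of the endpoint idea (this is not Sync Kit's actual API; the function name and schema below are our own hypothetical stand-ins), a queue-style endpoint can be kept consistent by having the client report the largest key it has cached and the server ship only the rows appended since then:

```python
import sqlite3

# Hypothetical sketch of a queue-endpoint sync (names are ours, not
# Sync Kit's API): the client reports the largest key it has cached,
# and the server returns only the rows appended after that key.
def sync_queue_endpoint(conn, table, key_col, client_max_key):
    cur = conn.execute(
        f"SELECT * FROM {table} WHERE {key_col} > ? ORDER BY {key_col}",
        (client_max_key,),
    )
    return cur.fetchall()

# Demo: a server-side table with 5 posts; the client already cached ids 1..3.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE posts (id INTEGER PRIMARY KEY, title TEXT)")
conn.executemany("INSERT INTO posts VALUES (?, ?)",
                 [(i, f"post {i}") for i in range(1, 6)])
delta = sync_queue_endpoint(conn, "posts", "id", 3)
print(delta)  # only the two uncached rows are transferred
```

Only the delta crosses the network; the client merges it into its local HTML5 database before rendering the template.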
When an endpoint is accessed, its contents are cached in the client-side database and can be reused the next time a template that accesses the endpoint is loaded. Templates are also cached on the client. To ensure that endpoints are kept consistent with the backend database, endpoints can be declared to be sets or queues, which enables Sync Kit to run efficient SQL queries that identify changes in endpoints since they were added to the cache. Our experiments show that when cache hit rates are high (as with our blog benchmark), the Sync Kit approach performs well—approximately a factor of four better than the traditional approach and a factor of three better than the Flying Templates [28] approach. We also showed that client-side rendering does not negatively impact client-side performance, despite extensive use of JavaScript and the overheads of client-side database access. In short, Sync Kit offers significant performance benefits for data intensive web sites. Looking forward, there are several ways to extend our work. One direction is to increase the breadth of the synchronization patterns Sync Kit supports. For example, aggregation is an expensive server-side operation that may in some cases be offloaded to the client—one can imagine delivering a compressed data cube [25] to the client in cases where aggregation is frequent. We are also exploring ways to improve the performance of our current synchronization structures. While partial sets cannot be completely replicated on the client side, some prefetching techniques can be employed to return frequently co-accessed results that may satisfy future queries and reduce client-side latency. Instead of forcing programmers to define their own synchronization structures, we are also working on a query workload analyzer to generate or recommend such structures.

### 7. ACKNOWLEDGMENTS

We thank the anonymous reviewers for their improvements.
We also thank David Huynh for his thoughts on improving the performance of client-side database workloads. Our work was supported by NSF and NDSEG fellowships, and by the NSF under grant number IIS-0448124.

### 8. REFERENCES
Faster range minimum queries

Tomasz Kowalski, Szymon Grabowski †
Lodz University of Technology, Institute of Applied Computer Science, Al. Politechniki 11, 90–924 Łódź, Poland, {tkowals|sgrabow}@kis.p.lodz.pl

Abstract. Range Minimum Query (RMQ) is an important building brick of many compressed data structures and string matching algorithms. Although this problem is essentially solved in theory, with sophisticated data structures allowing for constant time queries, practical performance and construction time also matter. Additionally, there are offline scenarios in which the number of queries, q, is rather small and given beforehand, which encourages to use a simpler approach. In this work, we present a simple data structure, with very fast construction, which allows to handle queries in constant time on average. This algorithm, however, requires access to the input data during queries (which is not the case of sophisticated RMQ solutions). We subsequently refine our technique, combining it with one of the existing succinct solutions with O(1) worst-case time queries and no access to the input array. The resulting hybrid is still a memory frugal data structure, spending usually up to about 3n bits, and providing competitive query times, especially for wide ranges. We also show how to make our baseline data structure more compact. Experimental results demonstrate that the proposed BbST (Block-based Sparse Table) variants are competitive to existing solutions, also in the offline scenario.

Key words: string algorithms, range minimum query, bulk queries

1 Introduction

The Range Minimum Query (RMQ) problem is to preprocess an array in a way allowing to return the position of the minimum element for an arbitrary input interval, specified by a pair of indices, in an efficient manner.
More formally, for an array \( A[1...n] \) of objects from a totally ordered universe and two indices \( i \) and \( j \) such that \( 1 \leq i \leq j \leq n \), the range minimum query \( \text{RMQ}_A(i,j) \) returns \( \arg\min_{i \leq k \leq j} A[k] \), which is the position of a minimum element in \( A[i...j] \). One may alternatively require the position of the leftmost minimum element, i.e., resolve ties in favour of the leftmost such element, but this version of the problem is not widely accepted. In the following considerations we will assume that \( A \) contains integers from the universe \( U = \{1, 2, \ldots, n\} \), of \( \log_2 n \) bits each. This innocent-looking little problem has quite a rich and vivid history and perhaps even more important applications, in compressed data structures in general, and in text processing in particular. Solutions for RMQ which are efficient in both query time and preprocessing space and time are building blocks in such succinct data structures as, e.g., suffix trees, two-dimensional grids or ordinal trees. They have applications in string mining, document retrieval, bioinformatics, Lempel-Ziv parsing, etc. For references to these applications, see [6,5]. The RMQ problem history is related to the LCA (lowest common ancestor) problem defined for ordinal trees: given nodes \( u \) and \( v \), return \( \text{LCA}(u, v) \), which is the lowest node being an ancestor of both \( u \) and \( v \). Actually, the RMQ problem is linearly equivalent to the LCA problem [84], by which we mean that both problems can be transformed into each other in time linearly proportional to the size of the input. It is relatively easy to notice that if the depths of all nodes of tree $T$ visited during an Euler tour over the tree are written to array $A$, then finding the LCA of nodes $u$ and $v$ is equivalent to finding the minimum in the range of $A$ spanned between the first visits to $u$ and $v$ during the Euler tour (cf. 
[3, Observation 4]). Harel and Tarjan [13] were the first to give $O(n)$-time tree preprocessing allowing to answer LCA queries in constant time. The preprocessing required $O(n)$ words of space. Bender and Farach [4] presented a significantly simpler algorithm with the same time and space complexity. Further efforts were focused on reducing the space of the LCA/RMQ solution, e.g., Sadakane [16] showed that LCAs on a tree of $n$ nodes can be handled in constant time using only $2n + o(n)$ bits. A crowning achievement in this line of research was the algorithm of Fischer and Heun [6], who showed that RMQs on $A$ can be transformed into LCA queries on the succinct tree, and this leads to an RMQ solution that also uses $2n + o(n)$ bits and (interestingly) does not access $A$ at query time. This result essentially matches the information-theoretic lower bound for an RMQ solution not accessing the input array, which is $2n - \Theta(\log n)$ bits. Any scheme for RMQs allows to reconstruct the Cartesian tree [6, Section 2.2] of the input array by iteratively querying the scheme for the minimum; the number of bits to describe any possible Cartesian tree here is $2n - \Theta(\log n)$ [14,6], hence the bound. The Fischer and Heun solution, although allowing for constant time RMQ queries, is not so efficient in practice: handling one query takes several microseconds (see [5]). Some ingenious algorithmic engineering techniques, by Grossi and Ottaviano [12], Ferrada and Navarro [5], and Baumstark et al. [3], were proposed to reduce this time, and the fastest implementation [3] achieves around 1\(\mu\)s per query (timings vary depending on query parameters) on a single core of the Intel Xeon E5-4640 CPU. Recently, Alzamel et al. 
[2] (implicitly) posed an interesting question: why should we use any of these sophisticated data structures for RMQ when the number of queries is relatively small and building the index (even in linear time, but with a large constant) and then answering the queries (even in constant time each, but again with a large constant) may not amortize? A separate, but also important point is that if we can replace a heavy tool with a simpler substitute (even if of limited applicability), new ideas may percolate from academia to software industry. Of course, if the queries $[\ell_i, r_i]$ are given one by one, we cannot answer them faster than in the trivial $O(r_i - \ell_i + 1) = O(n)$ time for each, but the problem becomes interesting if they are known beforehand. The scenario is thus offline (we can also speak about batched queries or bulk queries). Batched range minima (and batched LCA queries) have applications in string mining [7], text indexing and various non-standard pattern matching problems, for details see [2, Section 5]. In this paper we first present a heuristic idea for RMQ computation (without a constant-time guarantee). This idea is very simple, the corresponding data structure very fast to build (as opposed to any other RMQ algorithm we are aware of) and it answers range minimum queries faster on average than competitive algorithms, except perhaps on narrow intervals. Then, a hybrid of our solution with the most efficient constant-time RMQs is presented, with usually less than $3n$ bits of space and no need to access $A$. In this way we boost the average performance of constant-time solutions without sacrificing much in the space usage. Ideas for making our data structure compact are discussed in a separate section. We also discuss the scenario of running batched range minima (relevant when the number of queries is significantly smaller than the input array size), to which we also adapt our idea. The roadmap of the paper is as follows. 
In the next section we present our block-based approach to the standard (online) RMQ problem. By “online RMQ” we mean the scenario in which the data structure is built first, to handle any number of queries to follow. In several subsections, we present a plain block-based sparse table (BbST) algorithm, a hybrid with a theoretical solution, a two-level BbST representation, and a compact representation. Section 3 deals with the offline RMQ problem variant. Here the number of input queries is expected to be small compared to the input array size, and for this reason building a costly data structure may be an overkill. In this scenario we measure (both in complexity terms and in experiments) the time and space to handle q queries. The subsections of Section 3 present the first (and only so far) algorithm for offline RMQ, by Alzamel et al., and our adaptation of BbST to this setting. Section 4 contains experimental results. The last section concludes. We use a standard notation in the paper. All logarithms are of base 2. The space usage is sometimes expressed in words (of \(\log_2 n\) bits), sometimes in bits, whichever more convenient, and we are explicit about those units. A (very) preliminary version of our paper was presented in Proc. PSC 2017 [10].

2 Our algorithms

Before presenting our algorithms, let us recall the classic idea of the Sparse Table (ST) [4], as a point of departure. Given an array \(A\) of size \(n\), we compute and store the minima for all its subarrays of size being a power of two. Let \(M_{i,j}\) denote the position of the minimum of the subarray \(A[i \ldots i + 2^j - 1]\) (observe that for \(j = 0\) we have a subarray with one element). Figure 1 illustrates this. Any interval in \(A\), with its boundaries denoted by \([\ell, r]\), can be covered by a pair of such subarrays, for example \(A[2 \ldots 8]\) is covered with \(A[2 \ldots 5]\) and \(A[5 \ldots 8]\).
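The classic Sparse Table can be sketched as follows (a minimal Python sketch of our own; it uses 0-based indices, unlike the 1-based notation of the paper):

```python
def build_sparse_table(A):
    """M[j][i] = position of the minimum of A[i .. i + 2^j - 1] (0-based)."""
    n = len(A)
    M = [list(range(n))]  # layer j = 0: each element is its own minimum
    j = 1
    while (1 << j) <= n:
        prev, half = M[j - 1], 1 << (j - 1)
        M.append([prev[i] if A[prev[i]] <= A[prev[i + half]] else prev[i + half]
                  for i in range(n - (1 << j) + 1)])
        j += 1
    return M

def rmq(A, M, l, r):
    """Position of a minimum of A[l..r] (inclusive): the interval is covered
    by two (possibly overlapping) power-of-two spans, as described above."""
    j = (r - l + 1).bit_length() - 1
    a, b = M[j][l], M[j][r - (1 << j) + 1]
    return a if A[a] <= A[b] else b

A = [5, 3, 8, 6, 9, 2, 7, 4]
M = build_sparse_table(A)
print(rmq(A, M, 1, 4))  # minimum of A[1..4] is 3, at position 1
```

Construction takes O(n log n) time and words of space; each query reads two precomputed positions and compares the corresponding values.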
Finding the position of the minimum in this interval, that is, returning \(\text{RMQ}_A(\ell, r)\), boils down to reading two precomputed minima and returning the position of the minimum of these two values. In our example, \(\text{RMQ}_A(2, 8)\) is equal to \(M_{2,2}\) if \(A[M_{2,2}] \leq A[M_{5,2}]\), or equal to \(M_{5,2}\) otherwise. The space used by ST is \(O(n \log n)\) words. In the following subsections, first we present our block-based sparse table idea, which competes practically (although not in the worst case) with existing RMQ algorithms. This algorithm, however, requires access to \(A\). Then we propose a hybrid algorithm, improving the worst-case time and also allowing to get rid of \(A\).

2.1 Block-based Sparse Table

The first algorithm we present can be considered as a generalization of ST for blocks (with some twist). In the construction, array \(A\) is divided into equal blocks of size \(k\) and for each block \(B_i\) (where \(i = 1, 2, \ldots\)) we find and store the positions of \(O(\log(n/k))\) minima, where the \(j\)th value \((j = 1, 2, \ldots)\) is the minimum of \(A[(i - 1)k + 1 \ldots (i - 1)k + 2^{j-1}k]\), i.e., the minimum over a span of \(2^{j-1}\) blocks, where the leftmost block is \(B_i\). The required space is \(O((n/k) \log(n/k))\) words.

Figure 1. A prefix of the input array $A$, together with several precomputed Sparse Table values. Each $M_{i,j}$ stores the position of the minimum in $A[i \ldots i + 2^j - 1]$. For example, $M_{5,2} = 7$, since the minimum of $A[5 \ldots 8]$, which is 2, is located in $A[7]$ (ties are resolved arbitrarily).

To answer a query $[\ell, r]$, we find $m'$, the position of the minimum over the smallest span of blocks *fully including the query* using the technique of the sparse table. If $\ell \leq m' \leq r$, then $m'$ is the answer and it is returned.
In the opposite (rare) case, we continue with finding the position $m''$ of the minimum over the largest span of blocks *fully included in the query*, and then scan (at most) $O(k)$ cells of $A$ to find the true minimum and return its position. Figure 2 reveals more details concerning this operation. The query starts somewhere in block $B_3$ and ends in block $B_8$. The left of the two intervals covering the range of blocks $B_3 \ldots B_8$ spans over $B_3 \ldots B_6$. Each $M'_{i,j}$ ($j \geq 0$) stores the position of the minimum of the subarray of $A$ covering the blocks $B_i \ldots B_{i+2^j-1}$. In our example, the position of the minimum for the range of $B_3 \ldots B_6$ is stored in $M'_{3,2}$. The value of this minimum is 5 and it belongs to $B_4$ (which is denoted as $M'_{3,2} = M'_{4,0}$). As block $B_4$ is wholly contained in our query, there is no need to scan any part of a block. The situation is different for the right of the two intervals covering $B_3 \ldots B_8$, namely $B_5 \ldots B_8$. Here the minimum belongs to $B_8$ (i.e., $M'_{5,2} = M'_{8,0}$) and moreover, it happens that it is located beyond the prefix of this block covered by the query, which is shown in the figure with an arrow. In this case, the prefix of $B_8$ must be scanned. At the end, one extra comparison returns the RMQ value. If the average query interval width is $u$, the probability that $m'$ belongs to the union of the two blocks containing $\ell$ and $r$, respectively, is $O(k/u)$. The average query time complexity is thus $O((k/u) \times k + (1 - k/u) \times 1) = O(k^2/u)$, which is constant for $u = O(k^2)$. \footnote{In the earlier (conference) version of our work \cite{10} this algorithm was slightly different: we started from the largest span of blocks *fully included in the query*. 
Although the space usages and time complexities of both variants are the same, the new one is practically faster by about 20 percent for large intervals, while the speed for small intervals is more or less the same.}

Figure 2. A prefix of the input array \( A \) divided into blocks \( B_i \). The positions of the block minima are stored in \( M'_{i,0} \); the corresponding minima values are in \( V_i \).

Sparse Table with higher arity. Let us now consider a generalization of the doubling technique in Sparse Table (a variant that we have not implemented). Instead of using powers of 2 in the formula \( A[(i-1)k+1 \ldots (i-1)k+2^{j-1}k] \), we use powers of an arbitrary integer \( \ell \geq 2 \) (in a real implementation it is convenient to assume that \( \ell \) is a power of 2, e.g., \( \ell = 16 \)). Then, the minimum over a range will be calculated as a minimum over \( \ell \) precomputed values. The worst-case query time becomes \( O(\ell + k) \), but the space gets reduced by a factor of \( \log \ell \). We will come back to this variant in Section 3.2.

2.2 A hybrid algorithm

The algorithm presented in the previous subsection has two drawbacks. One is the worst-case query time of \( O(k) \), rather than \( O(1) \). The other is that it requires access to array \( A \), which typically occupies around \( n \log n \) bits. We present now a hybrid of our technique with any existing algorithm with no access to \( A \) at query time and constant time queries. Such solutions, e.g., Baumstark et al. [3], may use \( 2n + o(n) \) bits of space. The hybrid builds both a data structure from Baumstark et al. and a block-based sparse table, storing however both the minimum positions and their values in the latter component. Note that the plain BbST does not store the minimum values since \( A \) is available there.
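The plain BbST query procedure of Section 2.1, which the hybrid's fast path follows, can be sketched as follows (a minimal 0-based Python sketch of our own; parameter choices and names are not from the paper's implementation):

```python
def build_bbst(A, k):
    """Positions of minima for spans of 1, 2, 4, ... blocks of size k
    (a 0-based variant of the M' table from Section 2.1)."""
    nb = (len(A) + k - 1) // k  # number of blocks
    layer0 = [min(range(i * k, min((i + 1) * k, len(A))), key=A.__getitem__)
              for i in range(nb)]
    M = [layer0]
    j = 1
    while (1 << j) <= nb:
        prev, half = M[j - 1], 1 << (j - 1)
        M.append([prev[i] if A[prev[i]] <= A[prev[i + half]] else prev[i + half]
                  for i in range(nb - (1 << j) + 1)])
        j += 1
    return M

def bbst_query(A, M, k, l, r):
    bl, br = l // k, r // k               # blocks holding the endpoints
    if bl == br:                          # degenerate case: scan one block
        return min(range(l, r + 1), key=A.__getitem__)
    # m': minimum over the smallest span of blocks fully including [l, r]
    j = (br - bl + 1).bit_length() - 1
    a, b = M[j][bl], M[j][br - (1 << j) + 1]
    m1 = a if A[a] <= A[b] else b
    if l <= m1 <= r:                      # the frequent O(1) case
        return m1
    # Rare case: minimum over blocks fully inside, plus boundary scans.
    cands = list(range(l, (bl + 1) * k)) + list(range(br * k, r + 1))
    if bl + 1 <= br - 1:
        j = (br - bl - 1).bit_length() - 1
        cands += [M[j][bl + 1], M[j][br - 1 - (1 << j) + 1]]
    return min(cands, key=A.__getitem__)

A = [5, 3, 8, 6, 9, 2, 7, 4, 1, 10, 0, 12]
M = build_bbst(A, k=3)
print(bbst_query(A, M, 3, 1, 7))  # minimum of A[1..7] is 2, at position 5
```

In the hybrid, the rare fallback branch would instead delegate to the constant-time component, avoiding the block scans and any access to A.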
The queries in our hybrid are handled as follows. First, the BbST component tries to answer the query. Using the sparse table requires comparing values of two retrieved minima and this is when we would have to refer to \( A \), but in the modified BbST we access the stored values (there are only \( O((n/k) \log (n/k)) \) of them in total, not \( O(n \log n) \)). If, however, we are unlucky and in the plain BbST we would have to scan at most two blocks, we switch to the component with \( O(1) \) time. To sum up, our solution speeds up the queries performed by Baumstark et al. in many practical cases, preserves the constant worst case time and increases the space usage only moderately (to less than \( 3n \) bits, as we will see in the experimental section). The last property, compact space, requires however an appropriate representation of the BbST component (and well chosen parameters), which is described in Section 2.4.

2.3 Two-level block-based Sparse Table

We come back to our basic variant, from Section 2.1, and show how to generalize this procedure to two levels of blocks. The idea is to compute minima for \( n/k_2 \) non-overlapping blocks of size \( k_2 \) and then apply the doubling technique from Sparse Table on larger blocks, of size \( k_1 \). We assume that \( k_2 \) divides \( k_1 \). The first construction stage, finding the minima for blocks of size \( k_2 \), takes \( O(n) \) time. The second stage, working on blocks of size \( k_1 \), takes \( O(n/k_2 + (n/k_1) \log(n/k_1)) \) time. Then we answer the queries; if we are unlucky and one or two blocks of size \( k_1 \) have to be scanned, the procedure is sped up with the aid of the precomputed minima for the blocks of size \( k_2 \). Here we assume that the queries are sampled uniformly randomly over the whole input array, i.e., the average query width is \( O(n) \).
A query is thus answered in \( O(k_1/k_2 + k_2) \) time in the worst case and in \( O(1) \) time on average if \( (k_1/n) \times (k_1/k_2 + k_2) = O(1) \). The condition on the average case becomes clear when we notice that the probability of the unlucky case is, under the given assumption, \( \Theta(k_1/n) \) and checking (up to) two blocks takes \( O(k_1/k_2 + k_2) \) time. Fulfilling the given condition implies that \( k_1k_2 = O(n) \) and \( k_1/k_2 = O(n/k_1) \). Our goal is to find such \( k_1 \) and \( k_2 \) that the extra space is minimized but the average constant time preserved. To this end, we set \( k_1 = \sqrt{n \log n} \), \( k_2 = \sqrt{n/\log n} \), and for these values the average time becomes \( O(1) \). The space is \( O(n/k_2 + (n/k_1) \log(n/k_1)) = O(\sqrt{n \log n}) \) words. Note that we preserved the average time of the variant from Section 2.1 and reduced the extra space by a factor of \( \log^{1/2} n \). Note also that the space complexity cannot be reduced for any other pair of \( k_1 \) and \( k_2 \) such that \( k_1k_2 = O(n) \). It is quite easy to notice that generalizing the presented scheme to multiple levels does not help, i.e., it is impossible to obtain both \( O(1) \) average query time and \( o(\sqrt{n \log n}) \) words of space. Indeed, let us have \( h \geq 2 \) levels and choose the parameters \( k_1 > \ldots > k_h \), such that each \( k_{i+1} \) divides \( k_i \). The minima for non-overlapping blocks of size \( k_i, i = h, h - 1, \ldots, 2 \), are first computed, and then also the minima for blocks of size \( k_1 \), their doubles, quadruples, and so on. The constant average time for query answering now requires that \( (k_1/n) \times (k_1/k_2 + k_2/k_3 + \ldots + k_{h-1}/k_h + k_h) = O(1) \). The second factor on the left-hand side is \( \Omega(k_h) \), hence the condition implies that \( k_1k_h = O(n) \) (which is analogous to the condition required for the two-level variant). 
As the space is \( \Theta((n/k_1) \log(n/k_1) + n/k_2 + n/k_3 + \ldots + n/k_h) = \Omega((n/k_1) \log(n/k_1) + n/k_h) \), it is minimized for \( k_1 = \Theta(\sqrt{n \log n}) \) and \( k_2 = \Theta(\sqrt{n/\log n}) \), which gives \( \Omega(\sqrt{n \log n}) \) words of space, not better than for the case of \( h = 2 \). We implemented the two-level variant, as will be seen in the experimental section. In the standard (non-compact) version we have \( k_2 \leq 256 \) and thus the respective minimum positions are stored on one byte each.

2.4 Compacting BbST

In the (block-based) sparse table, in each block we store multiple minimum positions: for a span of one block, a span of two blocks, a span of four blocks, and so on. Let us denote the (conceptual) array containing the minimum positions for all spans over $2^j$ blocks with the $j$th layer, where $0 \leq j \leq \lceil \log n \rceil$. If we store the minimum positions naïvely in $\log n$ bits, the total size of our data structure is $O((n/k) \log(n/k) \log n)$ bits. Pointing to a minimum in the $j$th layer, however, requires fewer bits: $\log k$ bits in the 0th layer and $j$ (extra) bits in the $j$th layer for $j > 0$. We simply point to a block containing a minimum rather than to its exact position, except for the lowest layer. Figure 3 presents the relation between $M'_{i,j}$, the position of the minimum of the span of blocks $B_i \ldots B_{i+2^j-1}$, and $\Delta_{i,j}$, which stores the $i$th value of the $j$th layer. The relation is simple: $M'_{i,j} = M'_{i,0} + \Delta_{i,j}$. In this way, we can reduce the overall space to $O((n/k)(\log k + \log n) + (n/k) \log^2(n/k))$ bits. In the real implementation, however, we store the minimum positions every 9th layer directly (using $\log n$ bits) and in the remaining layers use 8 bits, i.e., 1 byte for a reference. This is a convenient tradeoff between memory use and byte-aligned access to data. We can do better though, for the price of more costly access to the minima. 
To this end, each $\Delta_{i,j}$, $j > 0$, can be encoded on one bit, with reference to $\Delta_{i,j-1}$, and the total space use is then $O((n/k)(\log k + \log(n/k)))$ bits. We admit that our data structure resembles the tournament tree and its compact version, the navigation pile [15]. The tournament tree is a form of a min-heap, in which every leaf represents a player and every internal node stores a copy of the winner. In the navigation pile there is no redundancy, only single bits telling the winner, with a primary motivation to reduce cache misses, e.g., in priority queue operations. There is one more aspect concerning the required space. Let us consider a hybrid of an $O(1)$-time RMQ algorithm with our two-level BbST variant (which, as experiments will show, is an attractive solution). Apart from the minimum positions for each block of size $k_2$ we also need to store their values. In order not to spend $\log n$ bits for such values, we apply a quantization heuristic. The smallest and the largest minimum among the minima for blocks of size $k_2$ are converted to 0 and $\max Q$, respectively. The other minimum values are quantized more ‘densely’ for smaller and less densely for larger values (as, assuming a uniformly random distribution of the input data in $A$, minima for blocks tend to be closer to the global minimum than to the maximum among those minima for blocks).

3 Offline Range Minimum Queries

If the queries to the array are known beforehand and their number $q$ is limited, resigning from heavy RMQ machinery in favour of much simpler solutions is not only more natural, but may also prove faster and memory frugal. In the first subsection below, we present the only solution known so far for this scenario, from Alzamel et al. [2], while in the next subsections we show how to adapt our block-based sparse table to offline queries.

3.1 The Alzamel et al. algorithm

Following [1] (see the proof of Lemma 2), the Alzamel et al. 
approach starts by contracting the array $A$ into $O(q)$ entries. The key observation is that if no query starts or ends at index $i$ or $i + 1$, then, if $A[i] \neq A[i + 1]$, $\max(A[i], A[i + 1])$ will not be the answer to any of the queries from the batch. This can be generalized to contiguous regions of $A$. Alzamel et al. mark the elements of $A$ which are either a left or a right endpoint of any query and create a new array $A_Q$: for each marked position in $A$ its original value is copied into $A_Q$, while each maximal block in $A$ that does not contain a marked position is replaced by a single entry, its minimum. The relative order of the elements copied from $A$ is preserved in $A_Q$, that is, in $A_Q$ the marked elements are interleaved with representatives of the non-marked regions between them. As each of the $q$ queries is a pair of endpoints, $A_Q$ contains up to $4q + 1$ elements (repeating endpoint positions imply a smaller size of $A_Q$, but for relatively small batches of random queries this effect is rather negligible). In an auxiliary array the function mapping from the indices of $A_Q$ to the original positions in $A$ is also kept. For the contracted data, three procedures are proposed. Two of them, one offline and one online, are based on existing RMQ/LCA algorithms with linear preprocessing costs and constant-time queries. Their practical performance is not competitive though. The more interesting variant, $\text{ST-RMQ}_{\text{CON}}$, achieves $O(n + q \log q)$ time\(^2\). The required space (for all variants), on top of the input array $A$ and the list of queries $Q$, is claimed to be $O(q)$, but a more careful look into the algorithm (and the published code) reveals that in the implementation of the contracting step the top bits of the entries of $A$ are used for marking.
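The contraction step can be sketched as follows; this is a simplified illustration (the function and helper names are ours), not the code of Alzamel et al.:

```cpp
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <utility>
#include <vector>

// Contract A with respect to a set of marked positions (the query endpoints):
// marked cells are copied to A_Q verbatim, and each maximal unmarked run
// between them is replaced by a single entry, its minimum. An auxiliary
// array maps A_Q indices back to positions in A.
std::pair<std::vector<int32_t>, std::vector<std::size_t>>
contract(const std::vector<int32_t>& A, std::vector<std::size_t> marks) {
    std::sort(marks.begin(), marks.end());
    marks.erase(std::unique(marks.begin(), marks.end()), marks.end());
    std::vector<int32_t> AQ;
    std::vector<std::size_t> toA;        // A_Q index -> original position in A
    std::size_t prev = 0;                // first unprocessed position of A
    auto flushRun = [&](std::size_t end) {  // emit min of A[prev .. end-1], if non-empty
        if (prev >= end) return;
        std::size_t best = prev;
        for (std::size_t p = prev; p < end; ++p)
            if (A[p] < A[best]) best = p;
        AQ.push_back(A[best]);
        toA.push_back(best);
    };
    for (std::size_t m : marks) {
        flushRun(m);                     // representative of the unmarked run before m
        AQ.push_back(A[m]);              // marked cell copied verbatim
        toA.push_back(m);
        prev = m + 1;
    }
    flushRun(A.size());                  // trailing unmarked run
    return {AQ, toA};
}
```

For example, with $A = \{9, 2, 7, 3, 8, 1, 5\}$ and marked positions $\{1, 5\}$, the contracted array is $\{9, 2, 3, 1, 5\}$, with the mapping $\{0, 1, 3, 5, 6\}$ back to $A$.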
There is nothing wrong in such a bit-stealing technique from a practical point of view\(^3\), but those top bits may not always be available and thus in theory the space should be expressed as $O(q)$ words plus $O(n)$ bits.

We come back to the $\text{ST-RMQ}_{\text{CON}}$ algorithm. As the name suggests, it builds the Sparse Table structure for the contracted array. All the queries can be answered in $O(q)$ time. Interestingly, the construction and the queries are performed together, with re-use of the array storing the minima. The ST construction time is $O(q \log q)$, but due to this clever trick the size of the helper array is not $O(q \log q)$, but only $O(q)$.

\(^2\) Written consistently as $n + O(q \log q)$ in the cited work, to stress that the constant associated with scanning the original array $A$ is low.

\(^3\) One of the authors of the current work also practiced it in a variant of the SamSAMi full-text index [11, Section 2.3].

3.2 Block-based Sparse Table for the offline RMQ

BbST with the input array contraction

On a high level, our first algorithm for the offline RMQ consists of the following four steps:

1. Sort the queries and remap them with respect to the contracted array's indices (to be obtained in step 2).
2. Contract $A$ to obtain $A_Q$ of size $O(q)$ (integers).
3. Build the block-based sparse table on $A_Q$ (see Section 2.1), with blocks of size $k$.
4. Answer the queries, again in the manner of the solution from Section 2.1.

In the following paragraphs we describe those steps in more detail, also pointing out the differences between our solution and Alzamel et al.'s.

1) Sorting/remapping queries. Each of the $2q$ query endpoints is represented as a pair of 32-bit integers: its value (position in $A$) and its index in the query list $Q$. The former 4-byte part is the key for the sort while the latter 4 bytes are satellite data.

2) Creating $A_Q$.
Our contracted array $A_Q$ contains the minima of all areas $A[E_i \ldots E_{i+1}]$, where $E_i$ denotes the $i$th query endpoint in sorted order, in order of growing $i$. $A_Q$ in our implementation thus contains (up to) $2q - 1$ entries, half as many as in Alzamel et al.'s solution. Like in the preceding solution, we also keep a helper array mapping from the indices of $A_Q$ to the original positions in $A$.

3) Sparse Table on blocks. Here we basically follow Alzamel et al. in their $\text{ST-RMQ}_{\text{CON}}$ variant, with the only difference that we work on blocks rather than on individual elements of $A_Q$. For this reason, this step takes $O(q + (q/k) \log(q/k)) = O(q(1 + \log(q/k)/k))$ time and $O((q/k) \log(q/k))$ space. The default value of $k$, used in the experiments, is 512.

4) Answering queries. The speculative reads in the block-based sparse table (cf. Section 2.1) allow us to answer a query often in constant time (yet, in rare cases, an $O(k)$-time scan is needed). This simple idea is crucial for the overall performance of our scheme. In the worst case, we spend $O(k)$ time per query here, but on average, assuming uniformly random queries over $A$, the time is $O((k/q) \times k + (1 - k/q) \times 1) = O(1 + k^2/q)$, which is $O(1)$ for $k = O(\sqrt{q})$.

Let us sum up the time (for a serial implementation) and space costs. A scan over the array $A$ is performed once, in $O(n)$ time. The radix sort\(^4\) applied to our data of $2q$ integers from \{1, \ldots, $n$\} takes (in theory) $O(q \max(\log n / \log q, 1))$ time. Alternatively, introsort from the C++ standard library (i.e., the std::sort function) would yield $O(q \log q)$ time. To simplify notation, the Sort($q$) term will further be used to denote the time to sort the queries, and we also introduce \( q' = q/k \). \( A_Q \) is created in \( O(q) \) time. Building the Sparse Table on blocks adds \( O(q + q' \log q') \) time. Finally, answering the queries requires \( O(qk) \) time in the worst case and \( O(q + k^2) \) time on average.

\(^4\) https://github.com/voutcn/kxsort
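Steps 3 and 4 can be illustrated with the following self-contained C++ sketch of a block-based sparse table with speculative reads. This is a simplified rendition (our own naming, operating directly on a plain array rather than on $A_Q$ with its position-mapping), not the actual BbST sources:

```cpp
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <vector>

// Sparse Table over minima of blocks, with "speculative reads" at query
// time: the precomputed entry covers whole blocks and may overshoot the
// query range; if the returned minimum position happens to fall inside
// [l, r] we are done in O(1), otherwise we scan the boundary blocks (O(k)).
struct BbST {
    const std::vector<int32_t>* a;
    std::size_t k, nb;
    std::vector<std::vector<std::size_t>> st;  // st[j][i]: min pos over blocks i..i+2^j-1

    BbST(const std::vector<int32_t>& arr, std::size_t k_) : a(&arr), k(k_) {
        nb = (arr.size() + k - 1) / k;
        st.emplace_back(nb);
        for (std::size_t i = 0; i < nb; ++i) {
            std::size_t best = i * k;
            for (std::size_t p = i * k; p < std::min((i + 1) * k, arr.size()); ++p)
                if (arr[p] < arr[best]) best = p;
            st[0][i] = best;
        }
        for (std::size_t j = 1; (std::size_t(1) << j) <= nb; ++j) {
            std::size_t span = std::size_t(1) << j;
            st.emplace_back(nb - span + 1);
            for (std::size_t i = 0; i + span <= nb; ++i) {
                std::size_t l = st[j - 1][i], r = st[j - 1][i + span / 2];
                st[j][i] = (arr[r] < arr[l]) ? r : l;
            }
        }
    }
    // Min position over the whole blocks bl..br (two overlapping 2^j-spans).
    std::size_t blockSpanMin(std::size_t bl, std::size_t br) const {
        std::size_t j = 0;
        while ((std::size_t(2) << j) <= br - bl + 1) ++j;  // largest 2^j <= #blocks
        std::size_t l = st[j][bl], r = st[j][br + 1 - (std::size_t(1) << j)];
        return ((*a)[r] < (*a)[l]) ? r : l;
    }
    // Position of the minimum in a[l..r] (inclusive).
    std::size_t query(std::size_t l, std::size_t r) const {
        std::size_t bl = l / k, br = r / k;
        std::size_t spec = blockSpanMin(bl, br);   // speculative: covers full blocks
        if (spec >= l && spec <= r) return spec;   // usually an O(1) answer
        std::size_t best = l;                      // fallback: scan boundary blocks
        for (std::size_t p = l; p <= std::min(r, (bl + 1) * k - 1); ++p)
            if ((*a)[p] < (*a)[best]) best = p;
        for (std::size_t p = std::max(l, br * k); p <= r; ++p)
            if ((*a)[p] < (*a)[best]) best = p;
        if (br > bl + 1) {                         // interior full blocks, if any
            std::size_t m = blockSpanMin(bl + 1, br - 1);
            if ((*a)[m] < (*a)[best]) best = m;
        }
        return best;
    }
};
```

The speculative answer is correct whenever its position lands inside the query range, because it is the minimum of a superset of the range; otherwise the fallback combines the two boundary scans with the interior full-block span.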
In total, we have \( O(n + \text{Sort}(q) + q' \log q' + qk) \) time in the worst case. The extra space is \( O(q' \log q') \).

In Section 2.1 we presented a variant of the Sparse Table with arity \( \ell \) higher than two for the online RMQ problem. Now we discuss it in the context of the offline RMQ. The worst-case time of handling \( q \) queries becomes \( O(n + \text{Sort}(q) + q' \log q' / \log \ell + q\ell + qk) \), which is minimized for \( \ell = \max(\log q' / (k \log\log q'), 2) \). With \( k \) small enough to have \( \ell = \log q' / (k \log\log q') \), we obtain \( O(n + \text{Sort}(q) + q' \log q' / \log\log q' + qk) \) overall time and the required extra space is \( O(q' \log q' / \log\log q') \) words. If we focus on the average case, where the last additive term of the worst-case time turns into \( k^2/q \), it is best to take \( k = \Theta(\sqrt{q}) \), which implies \( \ell = 2 \). In other words, this idea has its niche only when considering the worst-case time, where for a small enough \( k \) both the time and the space of the standard block-based Sparse Table solution are improved.

**BbST with no input array contraction**

The simple solution presented in Section 2.1, due to its very fast construction, seems to be suitable also for the offline RMQ problem. This variant greatly simplifies the procedure described in Section 3.2, as now there is no need to sort the queries. Basically, we reduce the previous variant to its last two stages. Naturally, this comes at a price: the extra space usage becomes \( O((n/k) \log(n/k)) \) words (yet the optimal choice of \( k \) may be different, closer to \( \sqrt{n} \)), but the query times often remain very competitive.

Let us focus on the space and time complexities of this variant, for both the worst and the average case. The analysis resembles the one for the variant with contraction of \( A \). We have two parameters, \( n \) and \( k \), and two stages of the algorithm.
The former stage takes \( O(n + (n/k) \log(n/k)) \) time, the latter takes \( O(qk) \) time in the worst case and \( O(q(1 + k^2/n)) \) on average (which is \( O(q) \) if \( k = O(\sqrt{n}) \)). In total we have \( O(n + (n/k) \log(n/k) + qk) \) time in the worst case and \( O(n + (n/k) \log(n/k) + q) \) time on average, provided in the latter case that \( k = O(\sqrt{n}) \). The space, expressed in words, is \( O((n/k) \log(n/k)) \). To minimize both the time and the space for the average case we set \( k = \Theta(\sqrt{n}) \). Then the average time becomes \( O(n + \sqrt{n} \log \sqrt{n} + q) = O(n + q) \) and the space is \( O(\sqrt{n} \log n) \).

### 4 Experimental results

For the experiments, we use an array \( A \) storing random 32-bit unsigned integers. The queries are pairs of the form \((\ell_i, r_i)\), where \( \ell_i \) is drawn uniformly at random from the whole sequence and \( r_i - \ell_i \) is between 0 and a specified range width limit. Our algorithms were implemented in C++ and compiled with 32-bit gcc 7.2.0 with `-O3 -mavx2` switches. The source codes can be downloaded from [https://github.com/kowallus/BbST](https://github.com/kowallus/BbST). The experiments were conducted on a desktop PC equipped with a 4-core Intel i7 4790 3.6 GHz CPU and 32 GB of 1600 MHz DDR3 RAM (9-9-9-24), running Windows 10 Professional. All presented timings are medians of 7 runs, with cache flushes in between.

We start with the experiments in the traditional, online, RMQ scenario. Fig. 4 presents average query times as a function of the growing maximum query range size. We use the following algorithms in the comparison:

- SDSL-SCT and SDSL-SADA, two RMQ implementations from the well-known SDSL library \[9\] (https://github.com/simongog/sdsl-lite),
- BP (Balanced Parentheses) algorithm by Ferrada and Navarro \[5\] (https://github.com/hferrada/rmq.git),
- SDSL-BP and SDSL-REC, two algorithms by Baumstark et al.
\[3\] (https://github.com/kittobi1992/rmq-experiments),
- BbST, our baseline solution, with block size \(k = 512\),
- BbST2, our two-level solution, with block sizes \((k_1, k_2)\) set to \((512, 64)\) or \((4096, 256)\), respectively.

The numbers in parentheses give the space usage in bits. Note that our algorithms require access to the array \(A\), which results in an overhead of \(32n\) bits. The input size \(n\) is 100 million in the left figures and 1 billion in the right ones. The top figures present all lines, while in the bottom ones we focus on the faster ones (note the time scale). The line with vertical markers stands for the classical Sparse Table solution; it shows the performance of this very simple data structure, which is unbeatable for relatively narrow queries (of width up to a few thousand), yet the required space is around 320–400$n$ bits.

We can see that our idea of using blocks not only reduces the ST space by an order of magnitude, but also speeds up queries on wide intervals. SDSL-REC is the most succinct solution; BP is a close second in space, yet not as fast as SDSL-BP and SDSL-REC. Of these two, SDSL-REC seems to be the method of choice, even if not always faster than SDSL-BP. The performance of our variants usually improves with growing range widths, which is not the case for the competitors. The two-level variant, BbST2, is more succinct and also usually faster than BbST. We note that BbST2 is, roughly speaking, about twice as fast as SDSL-REC for range widths up to a few thousand (the gap is greater than 2-fold for $n$ = 100M and smaller for $n$ = 1G), and the gap grows to an order of magnitude for wide queries.

In Fig. 5 we estimate the worst-case, rather than average, query time of our algorithms. In this experiment, for each query we scan the two blocks to which its boundaries belong (whether or not this scan was really needed) and the averages over such times are presented.
Note that a 'direct' measurement of the worst case, that is, taking the maximum time over many queries, is hard to perform reliably, as the times are below 1 µs. As expected, in this comparison our algorithms are not really competitive, except for narrow ranges (maximum width of 30 in the test). Yet, for much wider ranges the BbST variants are inferior in speed only to SDSL-REC and SDSL-BP. Interestingly, the query times of BP grow roughly linearly in the logarithm of the range width; for the other tested algorithms the timings stabilize. For the experiments to follow, in most cases we present the results only for $n$ = 1G, to save space (in the case of $n$ = 100M the trends are similar).

Our next attempt was to combine the block-based sparse table with SDSL-REC, in order to get rid of the input array during the query handling. The variants with the letter 'x' in their names, shown in Fig. 6, are not yet hybrids; they do not answer RMQ in all cases. They simply get rid of the array $A$ and are thus unable to scan over an interval. If the precomputed minima are not enough to answer a given query, the algorithm ((c)BbSTx, resp. (c)BbST2x) is unable to give an answer. The query success rate tells how often the query can be handled. Note that now the space is much reduced. The left figure presents variants based on the standard $\text{BbST}(2)$, while the right one shows their compact versions, with the prefix 'c' in their names. As expected, the compact variants require less space, but their query success rates overlap with the values for the corresponding non-compact variants.

Figure 6. The query success rate, i.e., how often a random query can be handled by our data structure without accessing the input array $A$. For each maximum width 1M queries were used. The space usage, in bits, for particular solutions is given in the legends, in parentheses. Note that these solutions are not full RMQ-answering algorithms.
For the hybrids involving $\text{cBbST2}$, we used the following formula for quantizing the block minimum values for the blocks of size $k_2$:
$$\left\lfloor \mathit{maxQ} \times \left(1 - \frac{(\mathit{maxMin} - v)^8}{(\mathit{maxMin} - \mathit{minMin})^8}\right) \right\rfloor,$$
where $v$ is a block minimum value, and $\mathit{maxMin}$ (resp. $\mathit{minMin}$) is the largest (resp. smallest) minimum among the minima for blocks of size $k_2$. The formula was found experimentally.

As the compact variants are recommended both for speed and space frugality, we combined them with SDSL-REC variants into hybrids (Fig. 7). We can see that for wide intervals our hybrids are faster than SDSL-REC by more than an order of magnitude, while for narrow ones (up to a few hundred in width) the gap is quite narrow. Yet, the more successful of our variants, the hybrid with block sizes of 16384 and 256, respectively, is defeated in speed (by about 10%) only for the narrowest interval. Fortunately, the same variant is the more compact of our two, with 2.24$n$ bits of space, which is not much more than the 2.16$n$ bits of SDSL-REC.

An important facet of every data structure is its construction time. Table 1 presents the construction times (and space usage) for several RMQ algorithms or their configurations, for an input array of size $n = 1G$. We can see that the plain $\text{BbST}$ is clearly the fastest, about 40 times faster than the fastest solution with constant worst-case time queries, SDSL-SCT. Note also that in the construction time of SDSL-SCT over 1G elements we can build $\text{BbST}$ and answer from about 100M to 400M queries. Our two-level variant, $\text{BbST2}$, is still very fast in construction. The hybrids, however, must require more time to build than SDSL-REC, which is their backend.

Figure 7. Average query times for ranges of varying maximum width (uniformly random from 1 to the given value) and two sizes of the input sequence (100M and 1G).
Our hybrid, cBbST2-SDSL-REC, in two parameter configurations, is compared against the fastest non-hybrid solution from the literature, SDSL-REC. For each maximum width 1M queries were used. The space usage, in bits, for particular solutions is given in the legends, in parentheses.

Table 1. Construction times and space usage for several RMQ algorithms, for \( n = 1G \). All implementations are single-threaded.

<table>
<thead>
<tr>
<th>variant</th>
<th>build time [s]</th>
<th>size / \( n \)</th>
</tr>
</thead>
<tbody>
<tr><td>SDSL-SADA</td><td>212.5</td><td>5.85</td></tr>
<tr><td>SDSL-SCT</td><td>23.9</td><td>2.54</td></tr>
<tr><td>BP</td><td>66.6</td><td>2.21</td></tr>
<tr><td>SDSL-BP</td><td>26.0</td><td>2.55</td></tr>
<tr><td>SDSL-REC</td><td>62.6</td><td>2.16</td></tr>
<tr><td>ST</td><td>436.6</td><td>404.94</td></tr>
<tr><td>BbST, \( k = 512 \)</td><td>0.6</td><td>34.63</td></tr>
<tr><td>BbST2, \( k_1 = 512, k_2 = 64 \)</td><td>2.7</td><td>34.75</td></tr>
<tr><td>BbST2, \( k_1 = 4096, k_2 = 256 \)</td><td>2.8</td><td>32.31</td></tr>
<tr><td>cBbST-SDSL-REC, \( k = 512 \)</td><td>66.0</td><td>2.82</td></tr>
<tr><td>cBbST2-SDSL-REC (512, 64)</td><td>67.6</td><td>3.07</td></tr>
<tr><td>cBbST2-SDSL-REC (16384, 256)</td><td>67.6</td><td>2.24</td></tr>
</tbody>
</table>

The last experiments concerned the offline RMQ scenario. Here, a batch of \( q \) queries is handled, where \( q \ll n \). The comparison comprises the following algorithms:

Table 2. Space usage for individual data structure components. All numbers are in bits per element.
<table>
<thead>
<tr>
<th>variant</th>
<th>backend</th>
<th>sparse table</th>
<th>second level</th>
</tr>
</thead>
<tbody>
<tr><td>BbST, \( k = 512 \)</td><td>32</td><td>2.63</td><td></td></tr>
<tr><td>BbST2, \( k_1 = 512, k_2 = 64 \)</td><td>32</td><td>2.63</td><td>0.13</td></tr>
<tr><td>BbST2, \( k_1 = 4096, k_2 = 256 \)</td><td>32</td><td>0.28</td><td>0.03</td></tr>
<tr><td>BbST-SDSL-REC, \( k = 512 \)</td><td>2.16</td><td>2.63</td><td></td></tr>
<tr><td>BbST2-SDSL-REC (512, 64)</td><td>2.16</td><td>0.28</td><td>0.16</td></tr>
<tr><td>BbST2-SDSL-REC (4096, 256)</td><td>2.16</td><td>0.66</td><td>0.25</td></tr>
<tr><td>cBbST-SDSL-REC (512, 64)</td><td>2.16</td><td>0.66</td><td>0.25</td></tr>
<tr><td>cBbST2-SDSL-REC (16384, 256)</td><td>2.16</td><td>0.01</td><td>0.06</td></tr>
</tbody>
</table>

Table 3. Cumulative percentages of the execution times for the successive stages of $\text{BbST}_{\text{CON}}$ with the fastest serial sort (kxsort). The default value of \(k\) (512) was used. Each row stands for a different number of queries (given in thousands).
<table>
<thead>
<tr>
<th>\(q\) (in 1000s)</th>
<th>stage 1</th>
<th>stages 1–2</th>
<th>stages 1–3</th>
<th>stages 1–4</th>
</tr>
</thead>
<tbody>
<tr><td>\(n = 100M\)</td><td></td><td></td><td></td><td></td></tr>
<tr><td>10</td><td>1.4</td><td>95.9</td><td>95.9</td><td>100.0</td></tr>
<tr><td>320</td><td>23.5</td><td>92.5</td><td>93.0</td><td>100.0</td></tr>
<tr><td>10240</td><td>65.8</td><td>88.3</td><td>89.1</td><td>100.0</td></tr>
<tr><td>\(n = 1G\)</td><td></td><td></td><td></td><td></td></tr>
<tr><td>32</td><td>0.4</td><td>99.6</td><td>99.6</td><td>100.0</td></tr>
<tr><td>1024</td><td>13.8</td><td>96.5</td><td>96.8</td><td>100.0</td></tr>
<tr><td>32768</td><td>59.0</td><td>87.9</td><td>88.6</td><td>100.0</td></tr>
</tbody>
</table>

− $\text{BbST}_{\text{CON}}$, a version of our block-based sparse table with the contracted input array, with block size \(k = 512\),
− BbST and BbST2, our algorithms used in the previous experiments, with block sizes set to \(k = 512\) (BbST) and \((k_1, k_2) = (4096, 256)\) (BbST2).

We can see (Fig. 8) that the relative advantage of our variants over $\text{ST-RMQ}_{\text{CON}}$ grows with the number of queries. In any case, our algorithm is several times faster than its predecessor. For small enough \(q\) (the left figures), $\text{BbST}_{\text{CON}}$ dominates over BbST, while for larger numbers of queries BbST takes the lead. In almost all cases, our two most successful variants are several times faster than $\text{ST-RMQ}_{\text{CON}}$, sometimes (BbST, relatively large \(q\)) reaching an order-of-magnitude gap in performance.

Table 3 contains some profiling data. Namely, cumulative percentages of the execution times for the four successive stages (cf. Section 3.2) of $\text{BbST}_{\text{CON}}$ with default settings are shown. Unsurprisingly, for a growing number of queries the relative impact of the sorting stage (labeled as stage 1) grows; otherwise the array contraction (stage 2) dominates.
The last two stages are always of minor importance in these tests.

Different sorts for $\text{BbST}_{\text{CON}}$, in a serial regime, were applied in the experiment shown in Fig. 9. Namely, we tried out the C standard library's qsort, C++'s std::sort, kxsort, __gnu_parallel::sort and Intel parallel stable sort (pss). The function qsort, as it is easy to guess, is based on quick sort. The other sort from the C++ standard library, std::sort, implements introsort, which is a hybrid of quick sort and heap sort. Its idea is to run quick sort and, only if it gets into trouble on some pathological data (which is detected when the recursion depth exceeds some threshold), switch to heap sort. In this way, std::sort works in \( O(n \log n) \) time in the worst case. The next contender, kxsort, is an efficient MSD radix sort. The last two sorters are parallel algorithms, but for this test they were run with a single thread. The gnu sort is a multiway mergesort (exact variant) from the GNU libstdc++ parallel mode library. Finally, Intel's pss is a parallel merge sort. We use it in the OpenMP 3.0 version.

Figure 8. Running times with a varying number of queries \( q \), from \( \sqrt{n} \) to \( 32\sqrt{n} \) (left figures) and from \( 64\sqrt{n} \) to \( 1024\sqrt{n} \) (right figures), where \( n = 1G \). The symbol \( m \) denotes a query width. In the top figures the maximum query width is 32K, while in the bottom ones it is 1G.

For the last experiment with $\text{BbST}_{\text{CON}}$, we ran our algorithm in a parallel mode, varying the number of threads in \( \{1, 2, \ldots, 8, 12, 16\} \) (Fig. 10). For sorting the queries we took the faster parallel sort, __gnu_parallel::sort. The remaining stages also benefit from parallelism. The second stage computes in parallel the minima in contiguous areas of \( A \), and the third stage correspondingly handles blocks of \( A_Q \). Finally, answering the queries is handled in an embarrassingly parallel manner.
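The embarrassingly parallel query-answering stage can be sketched as follows. This is a simplified illustration (our own naming, using std::thread rather than the OpenMP/parallel-mode machinery of the actual implementation, and with a plain scan standing in for the real per-query BbST routine): the query list is split into equal chunks, one per thread, and each thread writes its answers into a disjoint slice of the output array, so no synchronization is needed.

```cpp
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <thread>
#include <utility>
#include <vector>

// Answer a batch of (l, r) range-minimum queries over A using `threads`
// worker threads; each worker handles a disjoint contiguous chunk of Q.
std::vector<int32_t> answerQueries(
        const std::vector<int32_t>& A,
        const std::vector<std::pair<std::size_t, std::size_t>>& Q,
        unsigned threads) {
    std::vector<int32_t> out(Q.size());
    auto worker = [&](std::size_t from, std::size_t to) {
        for (std::size_t i = from; i < to; ++i) {
            int32_t best = A[Q[i].first];          // stand-in for the real RMQ query
            for (std::size_t p = Q[i].first + 1; p <= Q[i].second; ++p)
                best = std::min(best, A[p]);
            out[i] = best;                         // disjoint slice: no data race
        }
    };
    std::vector<std::thread> pool;
    std::size_t chunk = (Q.size() + threads - 1) / threads;
    for (unsigned t = 0; t < threads; ++t) {
        std::size_t from = std::min<std::size_t>(std::size_t(t) * chunk, Q.size());
        std::size_t to = std::min<std::size_t>(from + chunk, Q.size());
        pool.emplace_back(worker, from, to);
    }
    for (auto& th : pool) th.join();
    return out;
}
```

Static chunking works well here because the per-query cost is roughly uniform for random queries; a work-stealing scheme would only pay off for highly skewed batches.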
As expected, the performance improves up to 8 threads (as the test machine has 4 cores and 8 hardware threads), but the overall speedups compared to the serial variant are rather disappointing, around a factor of 2 or slightly more.

Figure 9. Impact of the sort algorithm on the running times of $\text{BbST}_\text{CON}$. The number of queries $q$ varies from $\sqrt{n}$ to $32\sqrt{n}$ (left figures) and from $64\sqrt{n}$ to $1024\sqrt{n}$ (right figures), where $n$ is $1\text{G}$.

Figure 10. Impact of the number of threads in __gnu_parallel::sort and in creating $A_Q$ (by independent scanning for minima in contiguous areas of $A$) on the overall performance of $\text{BbST}_\text{CON}$, for different numbers of queries $q$, where $n$ is $100\text{M}$ (left figure) or $1\text{G}$ (right figure). Note the logarithmic scale on the Y-axis.

Table 4 presents the memory use (apart from the input array $A$ and the set of queries $Q$) for our variants. $\text{BbST}$ is insensitive here to $q$. The parameter $k$ was set to 512 in the case of $\text{BbST}_\text{CON}$. As expected, the space for $\text{BbST}_\text{CON}$ grows linearly with $q$. For small enough $q$, $\text{BbST}_\text{CON}$ is more succinct than $\text{BbST}$ (unless we run the latter with a large $k$, which hampers the speed), but for the maximum tested number of queries, $q \approx 1024\sqrt{n}$, $\text{BbST}$ easily wins in this respect. Finally, $\text{BbST}_2$ may offer a better time-space tradeoff than $\text{BbST}$.

5 Final remarks

Computing range minimum queries over a sequence of length $n$ is a fundamental primitive in many compressed data structures (indexes) and in string mining. We proposed a very simple, yet efficient approach to this problem, called $\text{BbST}$, adapting the well-known Sparse Table technique to work on blocks, with speculative reads of block minima.

Table 4. Memory use for the three variants, as the percentage of the space occupied by the input array $A$ (which is $4n$ bytes).
The parameter $k$ was set to 512 for $\text{BbST}_{\text{CON}}$.

<table>
<thead>
<tr>
<th>variant with parameter</th>
<th>extra space (% of input), $n = 100{,}000{,}000$</th>
<th>extra space (% of input), $n = 1{,}000{,}000{,}000$</th>
</tr>
</thead>
<tbody>
<tr><td>BbST$_{\text{CON}}$, $q \approx \sqrt{n}$</td><td>0.10</td><td>0.03</td></tr>
<tr><td>BbST$_{\text{CON}}$, $q \approx 32\sqrt{n}$</td><td>3.23</td><td>1.03</td></tr>
<tr><td>BbST$_{\text{CON}}$, $q \approx 1024\sqrt{n}$</td><td>103.68</td><td>33.20</td></tr>
<tr><td>BbST, $k = 512$</td><td>7.03</td><td>8.20</td></tr>
<tr><td>BbST, $k = 1024$</td><td>3.32</td><td>3.91</td></tr>
<tr><td>BbST, $k = 2048$</td><td>1.56</td><td>1.86</td></tr>
<tr><td>BbST, $k = 4096$</td><td>0.73</td><td>0.88</td></tr>
<tr><td>BbST, $k = 8192$</td><td>0.34</td><td>0.42</td></tr>
<tr><td>BbST, $k = 16{,}384$</td><td>0.16</td><td>0.20</td></tr>
<tr><td>BbST, $k = 32{,}768$</td><td>0.07</td><td>0.09</td></tr>
<tr><td>BbST$_2$ (512, 64)</td><td>7.42</td><td>8.59</td></tr>
<tr><td>BbST$_2$ (4096, 256)</td><td>0.83</td><td>0.98</td></tr>
<tr><td>BbST$_2$ (16384, 256)</td><td>0.26</td><td>0.29</td></tr>
</tbody>
</table>

This technique alone allows us to be competitive in speed with existing RMQ solutions, but it has two drawbacks: it obtains constant query time only in the average case (assuming that the range width is large enough relative to the block size, specified at construction time) and it requires access to the input array during the query handling. As the next solution we thus proposed to combine our technique with one of the existing succinct solutions with $O(1)$ worst-case time queries and no access to the input array. The resulting hybrid is still a memory-frugal data structure, usually spending up to about $3n$ bits, and providing competitive query times, especially for wide ranges. We also showed how to make our baseline data structure more compact.
Additionally, we showed how to use the BbST approach (in a standard or modified way) in the recently proposed scenario of offline RMQ. In this problem, the set of $q$ queries is given beforehand, and if $q$ is small enough, it is not recommended to build a (heavy) data structure over the input array, as this overhead may not be compensated. Experimental results show that the block-based Sparse Table approach can also be recommended for the offline RMQ problem. Not surprisingly, parallelization provided extra speedups.

Acknowledgement

The work was supported by the Polish National Science Centre under the project DEC-2013/09/B/ST6/03117 (both authors).

References
Seminar Report 9412 on Active Database Systems

Alex Buchmann, Technische Hochschule Darmstadt
Sharma Chakravarthy, University of Florida
Klaus Dittrich, Universität Zürich

March 21–25, 1994, Schloß Dagstuhl

List of participants:

Elena Baralis, Politecnico di Torino, Italy
Daniel Barbará, Matsushita Information Technology Lab., Princeton, USA
Renato Barrera, Intergraph Corporation, Huntsville, USA
Rudolf Bayer, TU München, Germany
Mikael Berndtsson, U of Skövde, Sweden
Mokrane Bouzeghoub, U de Versailles, France
Holger Branding, TH Darmstadt, Germany
Alex Buchmann, TH Darmstadt, Germany
Stefano Ceri, Politecnico di Milano, Italy
Sharma Chakravarthy, U of Florida, Gainesville, USA
Umeshwar Dayal, HP Labs, Palo Alto, USA
Oscar Diaz, U of the Basque Country, San Sebastian, Spain
Klaus R. Dittrich, U Zürich, Switzerland
Suzanne Embury, U of Aberdeen – King's College, Great Britain
Opher Etzion, Technion – Israel Institute of Technology, Haifa, Israel
Johann Christoph Freytag, Humboldt U zu Berlin, Germany
Stella Gatziu, U Zürich, Switzerland
Michael Gertz, U Hannover, Germany
Peter Gray, U of Aberdeen – King's College, Great Britain
Ulrike Griefahn, U Bonn, Germany
Thomas Grotehen, U Zürich, Switzerland
Ulrike Jaeger, HU Berlin, Germany
Heinrich Jasper, U Oldenburg, Germany
Dirk Jonscher, U Zürich, Switzerland
Martin Kersten, CWI, Amsterdam, The Netherlands
Angelika Kotz-Dittrich, Schweizerische Bankgesellschaft Zürich, Switzerland
Thomas Kudraß, TH Darmstadt, Germany
Brian Lings, U of Exeter, Great Britain
Rainer Manthey, U Bonn, Germany
Hans Nissen, RWTH Aachen, Germany
M. Tamer Özsu, U of Alberta, Canada
Norman Paton, Heriot–Watt U, Edinburgh, Great Britain
Tore Risch, Linköping U, Sweden
Claudia Roncancio, BULL–IMAG / Grenoble U, Gieres, France
Gunter Saake, TU Braunschweig, Germany
Scarlet Schwiderski, Cambridge U, Great Britain
Timos Sellis, Nat. Techn. U of Athens, Greece
Eric Simon, INRIA, Le Chesnay Cedex, France
Martin Sköld, Linköping U, Sweden
Salvatore Stolfo, Columbia U, New York, USA
Jean–Raymond Velten, Centre d'Etudes de la Navigation Aerienne, Toulouse, France
Jennifer Widom, Stanford U, USA
Jürgen Zimmermann, TH Darmstadt, Germany
Günter von Bultzingsloewen, FZI Karlsruhe, Germany

Contents

Elena Baralis: An Algebraic Approach to Rule Analysis in Active Database Systems
Daniel Barbará: Dynamic Workflows in Distributed, Autonomous Systems
Renato Barrera: Event Signaling in an OO System
Mikael Berndtsson: ER1C: A study in Events and Rules as 1st Class
Holger Branding: Active DBS and RTDBS: bringing both camps together
Alex Buchmann: Some requirements to make active databases a viable technology
Stefano Ceri: On active databases: issues and evolution
Sharma Chakravarthy: SENTINEL: An object-oriented active DBMS
Umeshwar Dayal: Active Databases: A Perspective
Oscar Diaz: Diversity of rule management: The issue and an approach for object-oriented systems
Klaus R. Dittrich: Active Rules in the Context of Access Control; Object Model Extensions for Active Design
Suzanne Embury: A Constraint-Based Approach to the Repair of Long-Duration Design Transactions
Opher Etzion The Pardes Project Johann Christoph Freytag Optimization Issues in ADBs - The first steps Stella Gatziu The Architecture of the SAMOS Prototype Michael Gertz Reactive Integrity Enforcement in Active Databases Peter Gray A Constraint-Based Approach to the Repair of Long-Duration Design Transactions Ulrike Griefahn Phoenix: a database programming language with active rules Thomas Grötchen Object Model Extensions for Active Design Ulrike Jaeger The AIMS Approach to Active Information Management Systems Heinrich Jasper Modelling Events and Reactions for External Active Database Applications Dirk Jonscher Active Rules in the Context of Access Control Angelika Kotz-Dittrich Active Database Functionality - Experiences and Requirements in a real-world Environment Thomas Kudraß Active mediator systems in a heterogenous environment Brian Lings ER1C: A study in Events and Rules as 1st Class Rainer Manthey Active DBs and reactive programming: How much of active DB research is really DB-specific? Hans Nissen Active Rules in a Deductive and Object-Oriented Setting M. Tamer Özsu Active Capabilities in the TIGUKAT objectbase Management System Norman Paton On Applications and Active Database Research Tore Risch Compiling Active Object-Relational Rule Conditions into Partially Differentiated Relations Claudia Roncancio An active object oriented DBPL Gunter Saake Abstract Description of Object Interaction and Object Activity Scarlet Schwiderski Composite Events in Distributed Systems Timos Sellis Concurrency Control Issues in Active Database Systems Eric Simon Optimizing Incremental Evaluation of Rules in an Active DBMS Martin Sköld Compiling Active Object-Relational Rule Conditions into Partially Differentiated Relations Salvatore Stolfo Scaling Database Rule Processing Jean–Raymond Velten Active DBMS for Air Traffic Control Systems? 
Günter von Bültzingsloewen Active DBMS Support for Workflows and Derivation Processes Jennifer Widom Research in Active Database Systems Jürgen Zimmermann Designing an Active Database System "On active databases: issues and evolution" Stefano Ceri, Politecnico di Milano Active databases are vehicles for expressing data-driven reactive computations which are much needed by real-life applications; yet, they are still not adequately supported by commercial systems, while research prototypes are not sufficiently uniform and/or consolidated. The above considerations, more or less agreed in the opening session of the workshop, ”trigger” my current perception of hot issues in the field and of how it could evolve: 1. I believe we need to provide a better abstraction mechanism for reactive computation, and to integrate it into the application development life cycle. This requires understanding critical design issues and steps, providing methodological guidance, helping critical design steps by means of tools. This is useful, in my opinion, even if the target implementation does not use active dbms technology. 2. We need to facilitate the process of rising the functionalities being supported by the active components of commercial systems, which are currently very limited, by having them incorporate some of the innovations currently being experienced in research prototypes. As researchers in the field, it is our responsibility to identify such critical functionalities, provide them with a clean and uniform semantics, and experience them through concrete implementation and use. "Diversity of rule management: The issue and an approach for object-oriented systems" Oscar Diaz, University of the Basque Country Active DBMSs are a fast-growing area of research mainly due to the large number of applications which can benefit from this active dimension. These applications are far from being homogeneous, requiring different kinds of functionalities. 
However, most active DBMSs provide a fixed, hard-wired execution model to support the active dimension. My contentions are therefore: 1) different applications require distinct execution models for rule management; 2) it is the user who, besides defining the rules, should specify how these rules are to be managed; and 3) control information rarely refers to individual rules; rather, it is shared by a set of rules supporting the same functionality. Hence we aim to provide an execution model mechanism which exhibits heterogeneity, extensibility and declarativeness. In an object-oriented context, the above features can partially be achieved through metaclasses. Metaclasses allow the advantages of the object-oriented paradigm to be applied to the DBMS itself. The execution model is then defined using classes and methods. Thus, not only are rules (i.e. the knowledge model) described using an OO approach, but so is the rule management strategy (i.e. the execution model). Heterogeneity and extensibility can be seen as the main advantages obtained from this approach. To enhance declarativeness, the execution model is described through a set of parameters. In this way, the user just provides the parameters for each dimension that best fit the application, and the system generates the appropriate methods to support this execution model. These are the problems we face in our active system EXACT.

"Active DBs and reactive programming: How much of active DB research is really DB-specific?"
Rainer Manthey, University of Bonn, Germany

When looking at topics and issues currently under investigation in the active DB community, it appears to me that many (if not most) of them do not depend that strongly on "typical" DB characteristics, such as the presence of a large amount of persistent data, managed in a centralized manner and shared by many users.
Features like event specification and detection, coupling modes or execution models, not to mention semantics, are more or less orthogonal to the above-mentioned characteristics. They are much more related to the problem of organizing a general-purpose programming paradigm, which I would like to call "reactive programming". It is characterized by the existence of two computational processes linked by means of a set of event-reaction specifications. A "foreground" process, specified in terms of a "normal" imperative program, is automatically monitored by an extended run-time system for the occurrence of situations corresponding to a relevant event. Reactions are automatically activated once such an event has been observed, resulting in a "background" process that can be viewed as a merge of the foreground process and the reactions triggered by its individual steps. Thus reactive programming adds a layer of implicit activation of imperative procedures (controlled by an "intelligent" system) to the explicit activation ("call") of procedures under the control of a programmer. I believe that understanding interaction with an active DB in terms of such a very general model of computation will help us avoid developing methods and techniques that depend too closely on a (narrow) DB-focussed point of view. This does not mean that active DB research should move to programming language research, though. Once the general-purpose aspect of reactive processing has been understood, it will be much easier to properly identify and to successfully highlight those problems and solutions that are really specific to and unique within the DB context we are interested in.

"Scaling Database Rule Processing"
Salvatore J. Stolfo, Columbia University

As large organizations inevitably embrace distributed database technologies, and the promised age of data superhighways becomes a reality, databases will grow to enormous sizes (and breadth of topics) and will be available anywhere and anytime to a very large user community. Judging from the interest in active / expert / deductive databases, it is natural to pose the question of whether or not the current approaches to database rule processing scale to databases of sizes that are orders of magnitude larger than is common today. We call this the scaling problem. We posit that solving the scaling problem requires new approaches to parallel and distributed processing of database rule programs. PARADISER (PARallel And DIStributed Environment for Rules) is an operational programming environment that compiles and implements a rule program over a database substrate in a distributed computing system. PARULEL (PArallel RULE Language), the target rule language, is a set-oriented rule language with parallel execution semantics. An initial implementation of PARADISER, including a PARULEL compiler, has been demonstrated and its performance evaluated over a test suite of application programs. PARADISER introduces a new form of optimization called Predictive Dynamic Load Balancing (PDLB) and a new parallel join algorithm that balances the workload of rule evaluation over a number of processing sites, thereby increasing utilization and speedup of parallel resources.

"Research in Active Database Systems"
Jennifer Widom, Stanford University

Earlier work in active database systems focused on:

- Design and implementation of a new active rule language in the context of the Starburst extensible relational DBMS
- Techniques for specifying so-called "internal" active database applications in high-level languages and compiling them to rules (integrity constraints, materialized views, deductive rules, etc.)
- Techniques for statically analyzing the behavior of active rules
- Mechanisms for active rule processing in tightly-coupled distributed database systems

Current work in active database systems is in the direction of:

- Exploiting active rules for constraint management in loosely-coupled heterogeneous database systems
- Better techniques for statically analyzing the behavior of active rules, and techniques for statically and/or dynamically optimizing active rule execution
- Semantic foundations for active rules

"Active DBMS for Air Traffic Control Systems?"
Jean-Raymond Velten, CENA Toulouse, France

The main responsibility of the Air Traffic Controller is to prevent collisions between the aircraft he is in charge of while ensuring an orderly, but as efficient as possible, traffic flow. To provide such a service, he performs:

* control tasks, issuing clearances and instructions to pilots;
* monitoring tasks, surveying some critical traffic situations;
* information tasks, delivering information about other traffic to pilots.

Within the DAARWIN project, CENA is studying a distributed architecture and new functions for the ATC system. In order to master the complexity of such a system and to ensure its easy evolution while hiding architecture choices, the Client-Server model has been chosen for the software architecture of this system, providing the basis for system decomposition and incremental design. Due to the very dynamic nature of the Air Traffic Control field, where most application programs need to receive data in due time or to be notified of pertinent events, the active server approach has been chosen. Servers thus have to filter events and notify clients, in accordance with event subscriptions. The condition mechanism is used to refine the set of related data. The improvement of these systems will be based on the automation of high-level functions while keeping the human operator in the ATC loop.
The functions which are currently covered by research programmes are the monitoring functions as well as the cooperative tools, the objective of which is to assist controllers in their decision-making process. All these advanced functions put a stronger emphasis on the active capabilities to be provided by the system kernel (mainly primitive events, multi-user notification of events, object status changes as events, an appropriate transaction model, ...).

"Active DBMS Support for Workflows and Derivation Processes"
Günter von Bültzingsloewen, FZI Karlsruhe

A typical scenario for many applications consists of a net of tools which are integrated into an environment and which cooperate to serve the application. Examples are CAD design environments, manufacturing environments or scientific information systems. The whole environment is supported by a component (potentially based on an active DBMS) managing meta-data (like design object hierarchies, versioning, etc.) and controlling the execution of activities (i.e. eventually the invocation of tools). This control component has to capture and ensure dynamic constraints on activity execution, i.e. which activities are enforced / allowed / forbidden to occur. Further requirements are:

- Visualization of the flow of / constraints on activities
- Abstraction by nesting activities
- Flexible online changes of flows/constraints
- Automatic rederivation of a design or data product upon changes
- Querying the derivation history, and
- Failure handling.

While transactional workflow management systems based on nested (trans-)actions and triggers support dynamic constraints, abstraction, and failure handling, they do not enable visualization of system status, etc. Therefore, we propose to combine them with high-level Petri nets (e.g. predicate / transition nets): the flow of (sub-)activities within each activity is captured by the net, as well as the constraints on activity execution.
The individual nets are related to each other in such a way that a transition corresponds to the invocation of a subactivity, which in turn corresponds to invoking the subactivity net by placing call parameters onto input places. The history can be captured by means of a trace net which records transition and token instances; this history is scanned for rederivation of data products and supports querying. Our next step will be to work out complex modelling examples in order to check which kinds of extensions to the basic Petri net are required and to validate that the approach is appropriate from a user perspective.

"A Constraint-Based Approach to the Repair of Long-Duration Design Transactions"
Suzanne M. Embury & Peter M. D. Gray, University of Aberdeen, Scotland

The transaction repair process involves the active use of semantic domain information (usually in the form of integrity constraints) to generate a sequence of updates that will restore validity to an invalid database state. In the context of traditional, high-volume transaction-processing applications, the aim of the repair process is to recover from errors automatically so that transaction processing can continue with the minimum of intervention from the user. In the context of design applications, on the other hand, and in particular in applications where transactions are "user-controlled", the aim is to assist the designer/user in building a consistent design by suggesting a set of possible updates for him/her to select from. In this way, the transaction repair mechanism becomes another tool in the design process, which can lead but does not constrain the designer. Our approach is to view the transaction repair process as a constrained search problem in which the result of the search is a (set of) sequences of updates that will restore the consistency of an invalid database state.
Based on this view, we are currently implementing an extension to our transaction mechanism which responds to constraint violation (detected at commit time) by generating a piece of code describing the repair problem and passing this to a special-purpose constraint solver (CHIP) for manipulation. The constraint solver is able to retrieve data selectively from the DBMS as it is required, to evaluate database methods, or to retrieve extra metadata in order to guide the search process.

"Active Rules in a Deductive and Object-Oriented Setting"
Hans Nissen, RWTH Aachen, Germany

Event-Condition-Action (ECA) rules are a well-known mechanism for defining the active (reactive) behavior of a system. Active databases are able to handle and evaluate such rules in an efficient way. For our purposes we look at a system as a provider of several kinds of services. Such a system can be an application tool like a text editor, or a database system. We use active rules to describe the behavior of a system, i.e. a rule specifies how the system should react when a specified event is detected. We propose a service-oriented model of such rules, where the action is the execution of some services while an event can be seen as an already executed service. In this context we propose to view the evaluation/interpretation of an active rule again as a service offered by the knowledge base. We have integrated this model into the knowledge-representation language Telos. Currently we have two applications in mind. In the first place we want to use active rules to specify the internal behavior of our KBMS itself, e.g. how it should compile and evaluate integrity constraints. Our long-term goal is to use active rules to implement extended functionality of our system, including such things as view maintenance, view update, constraint repair and partial evaluation of meta formulas.
In the second place we want to describe the way of working of application programs, such as the tools of our usage environment. They then become servers, where the KB controls the execution of the services. To overcome the problem that this becomes a bottleneck, we want to compile the behavior specification into code (e.g. C++), which can be linked to the application tool and then controls its behavior. The way of working can easily be changed by changing the rules and recompiling the control code. No modification of the application code is needed.

"Dynamic Workflows in Distributed, Autonomous Systems"
Daniel Barbará, M.I.T.L., Princeton, USA

Workflows are long-duration, multi-step activities that involve a coordinated execution of steps at multiple stations inside an organization. We are interested in workflows that can run in environments with the following requirements:

- Autonomous stations: each processing station is autonomous in executing a task on behalf of the workflow; that is, a processing station must be treated as a "black box" whose mode of operation cannot be changed or fully known to the workflow designer.
- Dynamic behavior: processing stations evolve in time, resulting in changes in the control flow. New rules can be added, changing the course of the workflow. This means that no workflow can be fully specified a priori.
- Partial automation: the environment in which the workflow executes may only be partially automated.

We are currently designing a model and system to manage workflows in such environments. The model is heavily based on rules that drive the flow of the activities. The environment poses very interesting problems for the design. For instance, event detection becomes complex because the events can be composite over primitive events on processing stations.

"Concurrency Control Issues in Active Database Systems"
Timos Sellis, Nat. Technical University of Athens, Athens, Greece

Relational systems are extended with active capabilities. Languages have been developed and execution models have been investigated. Our presentation describes our research on the concurrent execution of rules in a database environment. Traditionally, the serializability criterion of correctness is defined on the basis of read/write conflicts. With rules, however, the conditions must be true for the actions to execute, and rules must fail when their conditions are no longer true. A different correctness criterion is therefore needed, based on the conflicts among conditions/actions. We have developed a locking protocol and the necessary extensions to the transaction manager for the above reasons. One extension is a new lock compatibility matrix which provides greater concurrent access. The second extension is to allow concurrent execution within a transaction. We also present a simulation-based performance study in which we identify characteristic features of the rules and study their effect on performance.

"On Applications And Active Database Research"
Norman Paton, Heriot-Watt University, Edinburgh

As illustrated by the wide range of topics addressed at this workshop, the extension of database systems with active capabilities is a fertile area for theoretical and practical research. It is also clear, however, that much of this research is proceeding on the basis of "technology push" rather than "application pull". In the early phase of research on a topic, this may well be healthy, as many ideas can be generated and paths explored in order to identify possibilities. However, many significant issues which are presently topics of debate can only be effectively addressed in the context of comprehensive example applications. For example, the following questions have been considered at this workshop:

- Which parts of an application are most effectively expressed as active rules, and which using some other paradigm?
- What tools or methodologies are most effective for designing applications involving active behaviour?
- What forms of rule analysis are necessary/helpful for developers of active systems?
- Which of the wide range of proposed execution model dimensions are useful in different contexts?
- Are different kinds of database system most effectively served by different flavours of active rule system?

It is difficult to see how such questions, which have a significant bearing on much of the ongoing research into active databases, can be effectively addressed without meaningful experience with significant applications. A preliminary framework for the comparison of rule systems... and applications is presented in [1]. This, however, is not sufficiently detailed or associated with a wide enough range of applications to enable most of the above questions to be answered. It is thus proposed that a portfolio of detailed descriptions of implemented applications would be a significant contribution to the active databases literature. It is planned, therefore, to put together such a portfolio under the auspices of the ACT-NET Network of the European Community. The plan is to proceed as follows:

1. ACT-NET members will draw up a framework within which active applications can be described.
2. A "call for descriptions" will be made.
3. The resulting portfolio will be made available through the ACT-NET ftp server.

It is intended that the portfolio will be extended as new contributions are made. Please support this venture if you can!

"Event Signaling in an OO system"
Renato Barrera, Intergraph Corporation

We are interested in providing active capabilities to an OO system that manages caches of C++ objects and uses a passive DBMS for persistency. Our system considers flat, short transactions only and is tailored for an environment in which events can be defined on the fly, and in which events can be associated with or dissociated from rules dynamically.
Our system lumps conditions and actions into a single procedure, considers both primitive and composite events, and can execute rules in an immediate or in a deferred mode. In our treatment of composite events, we have traded expressive power for speed of execution, thus achieving a design in which both events and their ancillary constructors are created "just in time" and cease to exist when no longer needed.

"Active Database Functionality - Experiences and Requirements in a real-world Environment"
Angelika Kotz-Dittrich, Union Bank of Switzerland

In contrast to the increasingly sophisticated rule models the active database community is now coming up with, the state of the art in the industrial environment is fairly modest. We currently find a huge amount of rule checking embedded in application programs or hidden "in the heads of people". In our environment, which is a large bank, we have made the observation that as soon as a DBMS with some active functionality is available to them, developers and users are very keen on applying these new features (we are talking here about the relational products with basic trigger mechanisms). The main problems and requirements we encountered in the corresponding projects are as follows: There is an urgent need for design methodologies and practical guidelines. Users demand tools that support maintenance, reuse and extension of rule sets (we find sets of more than 100 rules that are very hard to maintain). Performance is a big issue, and there are applications exhibiting real-time requirements, e.g. in the area of securities trading. Security problems (i.e. all aspects of validating and controlling rules) are a main hindrance to applying active DB functionality in mission-critical applications. Furthermore, we are living in a distributed heterogeneous environment with huge legacy systems, and active functionality will have to cope with that fact.
We are currently advocating the use of active DB functionality in non-mission-critical applications with moderate data volumes and workloads. Appropriate fields for which we are building databases are financial analysis and economic research, as well as infrastructural components for database interoperability or workflow control.

"Optimization Issues in ADBs - The first steps"
Johann Christoph Freytag, Humboldt Universität zu Berlin

Based on already existing languages and concepts for ADBs, we focus on the following three issues in our group.

1. Fundamental issues in ADBs. Since ideas for ADBs use concepts coming from databases, the transaction area and the "cooperative systems" area, it is important for us to understand the concepts in each area separately before "mixing" them together. The goal of this work is to develop a uniform framework that allows us to describe all aspects of ADBs coherently.

2. Optimization issues. Based on 1., we shall investigate the different optimization possibilities in ADBs. Optimization should cover both inter- and intra-rule optimization. We hope to take advantage of already existing formalisms and models (such as Petri nets) for dealing with this issue.

3. Applying ADB technology to applications. Based on our interest in CIM, we would like to model (part of) an application using the ADB paradigm. This task will help us to learn about the suitability of the ADB formalism, at the same time verifying the approach taken in 1. and 2.

A long way to go . . .

"Active Capabilities in the TIGUKAT objectbase Management System"
M. Tamer Özsu, University of Alberta

Many of the applications that require the functionality of an objectbase management system seem to require active capabilities as well. We are therefore looking at the development of a distributed active objectbase management system. Our work is conducted within the context of the TIGUKAT system.
TIGUKAT (which means "objects" in the Canadian Inuit (Eskimo) language) is a distributed objectbase management system under development in the Laboratory for Database Systems Research of the University of Alberta. It has a novel object model whose identifying characteristics include a purely behavioral semantics and a uniform approach to objects. Everything in the system (including the schema) is a first-class object with well-defined semantics; thus the system is reflective. The computational model supported is one of applying behaviors to objects. We are introducing rules as objects. Consistent with the TIGUKAT philosophy, rule components (events, conditions and actions) are also modeled as objects. At the moment we have considered simple events based on behavior application. The conditions and actions are modeled as functions, which are first-class objects. At the moment we consider conditions expressed as queries in the TIGUKAT query language (TQL). The query type is a subtype of the function type in TIGUKAT. This work is in its initial stages and many of the system problems have yet to be solved.

"The AIMS Approach to Active Information Management Systems"
Ulrike Jaeger, Humboldt Universität zu Berlin, Germany

AIMS approaches the problems of cooperative applications from different angles. An example of a cooperative application is a hospital information system. A set of self-centered units perform local tasks and cooperate for the sake of the global task to heal the patient. A unit is aware of its own task and context and the information it requires from other tasks in order to proceed. In exchange it will externalize part of its local information to other units. The AIMS model explores three dimensions of the problem:

- a cooperation model,
- an activity model,
- a db model.

The cooperation model follows a proposition of J. Klein (COMPCON 1991) and R. Obermarck, who presented an event synchronization model for the specification and implementation of (advanced) transaction models. The model provides a slim bidirectional event interface to the units. The execution is event-driven and handles complex events. It has temporal notions of eventually, as well as delay and deadline. The activity model in general follows the E-C-A semantics of rules. Here the notions of event, complex event and situation come up again. The db model is characterized by transactions as the execution model. AIMS investigates how far these concepts are identical, related or contradictory. We would like to know which power we gain if we decide in favor of one or the other contradictory semantics.

"Abstract Description of Object Interaction and Object Activity"
Gunter Saake, TU Braunschweig

Conceptual modelling of information systems requires, besides the design of the database structure, also the modelling of dynamic application behaviour. Part of this behaviour description may later on be implemented using active database technology. Such a modelling approach has to abstract from implementation details. It should offer a logic-based, declarative framework for describing all relevant aspects of system behaviour to enable consistency checks and application verification. As part of the conceptual modelling language TROLL, event calling is proposed to capture the interaction aspect of dynamic system behaviour. Event calling is based on a set-based execution model (a set of events occurs simultaneously) which is semantically grounded in logical implications between event occurrences. A set-at-a-time execution model avoids problems like non-confluence or non-deterministic semantics, leaves freedom for optimization, and the set property "no duplicates" makes it possible to check consistency and termination conditions for calling chains. Based on parameter flow, a partial order of called events may be determined, defining correct execution orders.
The presented work is joint work with Thorsten Hartmann. For receiving a detailed report on it, please contact the author. "Phoenix: a database programming language with active rules" Ulrike Griefahn, Universität Bonn, Germany Though most active database languages are based on the ECA-paradigm, there is still no agreed understanding of the role of the individual rule components and their relationships. When developing an active rule concept for our DBPL Phoenix, we therefore aimed at providing syntactical features for expressing various such roles and relationships within a common framework. Active rules in Phoenix consist of two parts, a trigger and a reaction. The trigger is composed of an event pattern and an event condition. The reaction consists of an arbitrary imperative statement of the DBPL and of an execution mode that specifies the time at which the reaction is to be executed. Conditions appear in two places: The event condition is only used to specify the triggering event in more detail. Conditions that generate bindings for the rule's action have to be specified within an imperative statement in the reaction part, e.g. for each C do A. This is possible, because control structures in Phoenix may include arbitrary queries in their condition part. Events that are observed in Phoenix are instance-oriented. An event may be an arbitrary procedure call, the exit of a procedure execution, or some other event (e.g. a clock event). Each event uniquely defines a specific point in time. In this sense the trigger part of a rule specifies a point in time at which the respective rule is to be activated. In a similar way the execution mode denotes the point in time at which the reaction execution is to be started. This may either be immediately after event detection, or instead or after the execution of the invoked procedure. In addition, the execution mode may specify a later event (e.g. 
11:00am or the end of the current transaction), immediately after which the reaction is to be executed.

"The Pardes Project" Opher Etzion, Technion, Israel

The Pardes Project was originally aimed at supporting spreadsheet-type programming and constraint enforcement in an active fashion. These types of applications have inherent semantic properties that enable their specification in a higher-level language, and execution optimization techniques that are not applicable in the general case of active databases, such as: avoiding redundant updates, detecting cycles, and incremental recomputation. Current activities in the Pardes project tackle the following issues:
1. The "TAPUZ" extension adds the capabilities of general active database rules and supports composite events.
2. A temporal component of Pardes has been designed to integrate the capabilities of active and temporal databases and to allow multiple views of the past, present and future, and retroactive (and proactive) updates and queries.
3. A mutual consistency theory of derived data-elements that combines the active database coupling modes with materialization modes.
4. An extension to the distributed case, and the use of Pardes as a coordinator for interdependencies in a federated database.
5. A high-level language and model for active exception-handling in databases.

"Composite Events in Distributed Systems" Scarlet Schwiderski, University of Cambridge

In many application areas it will be unavoidable to use the active database paradigm in a distributed environment (e.g. banking systems). My particular interests include cooperative working and multimedia applications (e.g. multimedia conferencing, hypermedia). Distributing ECA rules to different sites of a distributed system brings considerable difficulties in comparison to "centralized" ECA rules. At present I am looking at composite events in a distributed system.
I assume that the constituent basic events of a composite event occur at different sites in the system. A certain site is responsible for monitoring a certain rule and therefore for detecting the corresponding composite event. This site is called the observer site. On the one hand the observer site detects relevant events locally and processes them and/or sends them to remote "interested" sites; on the other hand it receives relevant events from remote sites. Events are first put into an event queue at the local rule monitor and then processed. One problem is that the arrival order of events at the local rule monitor does not, in general, coincide with their occurrence order. Another issue to consider is that events on different sites can occur "in parallel" (there is no global time in distributed systems, only approximately synchronized clocks) and therefore cannot be totally ordered. A local event monitor must therefore consist of two components:
- a stabilizer, which sorts incoming events (topological sort)
- an event detector, which detects composite events from the sorted stream.
With respect to these problems I found an interesting analogy in the field of distributed debugging. I argue that some of those results apply to our case. On the whole, I want to tackle the following topics:
- naming of basic events
- construction of composite events
- detection of composite events

"ER1C: A study in Events and Rules as 1st Class." Brian Lings, University of Exeter; Mikael Berndtsson, University of Skövde

Presents the design of an active database system built in a layered architecture on Ontos. Major features include logical events with local condition checks, together with rule conditions which are SQL statements. ER1C is a '123' system according to the seminar definitions. Other important features include event and rule inheritance, which is integrated with the inheritance concepts of C++ methods, and composite event detection.
The implementation design relies heavily on the active data dictionary facilities of Ontos; this provides good support for primitive event detection. However, the transaction level gives poor support.

"Optimizing Incremental Evaluation of Rules in an Active DBMS" Eric Simon, INRIA, 78153, Le Chesnay, France

"Compiling Active Object-Relational Rule Conditions into Partially Differentiated Relations" Martin Sköld, Tore Risch, Dept. Comp. and Inf. Science, Linköping University, Sweden

Presents ongoing work on tightly integrating active rules with a next-generation object-oriented (OO) database system having transactions and a relationally complete OO query language. The rules are defined as Condition-Action (CA), where Events (E) can be specified as an option. The condition part of a rule is defined as a declarative OO query and the action as procedural statements. For efficient rule condition monitoring, a technique called Partial Differentiation of relations is used, which is based on incremental evaluation techniques. The rule compiler generates partial ∆-relations that detect changes to a derived relation given changes to one of the relations it is derived from. The technique is based on the assumption that the number of updates in a transaction is usually small and therefore only small effects on rule conditions will occur. Thus, the changes will only affect some of the partially differentiated relations. The partial ∆-relations are optimized using cost-based query optimization techniques. Changes are propagated in a network at a check phase, usually at commit time (deferred rules). The propagation algorithm uses bottom-up, breadth-first propagation to correctly and efficiently propagate both positive changes (insertions) and negative changes (deletions). The technique does not assume permanent materializations, but these can be added as an optimization option.
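The partial-differentiation idea described above can be illustrated with a small example. The following is a hedged sketch in Python, not the authors' actual rule-compiler implementation: for a derived relation R = A ⋈ B, a partial ∆-relation such as ∆R(A) = ∆A ⋈ B lets a small set of updates to A be propagated without recomputing R from scratch.

```python
# Illustrative sketch (not the authors' implementation) of partial
# differentiation for a derived relation R = A |><| B on a common key.
# Relations are modelled as sets of (key, payload) pairs.

def natural_join(a, b):
    """Join two relations on their shared key attribute."""
    return {(k, va, vb) for (k, va) in a for (k2, vb) in b if k == k2}

# Base relations and the (materialized) derived relation.
A = {(1, "a1"), (2, "a2")}
B = {(1, "b1"), (3, "b3")}
R_old = natural_join(A, B)

# A transaction inserts one tuple into A. Instead of recomputing R,
# evaluate only the partial delta-relation for A: dR/dA = dA |><| B.
delta_A = {(3, "a3")}
delta_R = natural_join(delta_A, B)

# Incremental maintenance gives the same result as full recomputation.
R_new = R_old | delta_R
assert R_new == natural_join(A | delta_A, B)
```

The sketch covers only insertions; handling deletions (negative changes) and propagating through networks of derived relations, as the abstract describes, requires the corresponding negative ∆-relations.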
"Reactive Integrity Enforcement in Active Databases" Michael Gertz, University of Hannover, Germany

It is well accepted that active databases provide a suitable framework to implement efficient integrity-maintaining subsystems by the use of triggers. Recent approaches suggest also utilizing triggers to perform repair actions in case of constraint violations, instead of rolling back a whole transaction. Since these approaches mainly follow fixed and automated strategies, the designer often has to refine or even revise already derived repairing triggers to incorporate more semantic knowledge about the application. As a result, the efficiency of the derived triggers often decreases. The main problem in repairing constraint violations is that even simple integrity constraints can be violated through several database operations, and often a multitude of possible repair actions exists. We argue that modelling the reactive behavior as a design task should be clearly separated from the automated derivation of the respective triggers. For this we provide an expressive declarative specification language for repair actions which includes the specification of rollbacks of single violating operations, attribute replacements on violating tuples, additional modifications (propagations) and exceptional cases for constraint violations. The advantage of reaction specifications is that violations can be analyzed and repair actions may then depend on the respective results. Notwithstanding, it is undisputed that modelling the reactive behavior on constraint violations constitutes a complex design task which should be supported by design tools tailored to analyze constraints, reactions and their dependencies (e.g., a repair action for one constraint can violate another constraint). Although we provide a clear concept of modelling the behavior, we focus our further work on more sophisticated analysis techniques for constraints and corresponding reactions.
Finally, for triggers automatically derived from constraint and reaction specifications we want to develop tools to simulate and debug repair processes, in order to provide the designer with a complete framework for enriching his/her application with efficient repair actions (triggers) on violations of the specified constraints.

"Active Rules in the Context of Access Control" Dirk Jonscher, Klaus R. Dittrich, Universität Zürich, Switzerland

Concerning authorization, active rules are a special case of protection objects, and it has to be specified who is allowed to create, alter, drop and/or enable/disable an active rule. The chosen scheme must be compatible with the security policy of the system. In particular, it has to be taken into account that rules are usually associated with other protection objects (relations or object types) which may have been created by a different user, possibly one holding a control right for this object. Furthermore, active rules (condition and action part) are units of activity. Thus, it has to be verified whether the operations executed by a rule are permitted or not. The following approaches are possible: the rule inherits the access rights of its creator/owner (the standard for most existing systems); the rule inherits the access rights of its "invoker" (probably not very useful); or a rule is an authorization unit of its own and can directly get the required access rights. The latter is the most preferable approach, because it allows for a seamless integration of authorization for active rules into existing access control policies. This way it is possible that different users design an active rule and authorize it to be executed as production data. Besides access control for active rules, there is another interesting aspect, namely access control with active rules. Active rules are powerful mechanisms to realize elaborate security policies. Two examples are Chinese Wall policies (Brewer/Nash model) and the realization of duties (or obligations).
Both policies depend on the reaction to events. Chinese Wall policies are history-dependent access rights. A user is free to choose which data he/she wants to access. However, once an access has been done, accesses to other data which belong to a conflict-of-interest domain (e.g. data of a competing company) are forbidden. An active rule can be used to monitor the access behaviour of users and react with a proper authorization (revocation of permissions or granting of prohibitions). Duties are special access relationships where a user is obliged to execute a transaction under certain circumstances. These transactions require human interaction, such that they cannot be automatically scheduled. However, the system can monitor the fulfilment of a duty and can schedule a contingency action in case a duty is not fulfilled.

"SENTINEL: An object-oriented active DBMS" Sharma Chakravarthy, University of Florida, Gainesville

Making a database system active entails not only developing an expressive event specification language (with well-defined semantics) and algorithms for event detection, but also a viable architecture that is extensible. The Sentinel active DBMS is an attempt at this. Sentinel uses Snoop as the event specification language, consisting of not, or, and, sequence, any, A, A*, P, and P* event operators. We support various event consumption modes (referred to as contexts in Snoop), such as unrestricted, recent, chronicle, continuous, and cumulative. These event-consumption modes have different storage and computational requirements. We use the execution semantics of HiPAC, slightly tailored to the object-oriented environment (e.g. class-level rules vs. instance-level rules, subscription and notification mechanism). A nested transaction model is supported and is used to implement serializable execution of rules. Sentinel is built using Open OODB (from TI, Dallas) and Exodus (from Univ. of Wisconsin, Madison).
Currently, a planning and a financial application have been implemented to demonstrate the functionality supported by Sentinel. They will be expanded to reasonably sized applications in the immediate future to perform benchmarking and performance analysis of the implementation.

"Modelling Events and Reactions for External Active Database Applications" Heinrich Jasper, Universität Oldenburg, Germany

Whereas triggers and ECA rules have been studied in detail for internal database applications, e.g. integrity checking and auditing, there is a lack of experience with external, real-world applications. Our research at Universität Oldenburg suggests that an explicit notion of EVENT (and reactions thereto) should be used within the huge amount of modelling activities necessary for real-world applications. Although events are already incorporated in several modelling techniques (OMT, Martin/Odell, ...), they play a minor role in these. In contrast, we want to "lift" the EVENT to a first-class entity w.r.t. modelling, just as objects are in the aforementioned techniques. From object and event structures the behaviour (methods for objects, activities for events) is derived and subsequently validated. Gradual refinements lead to an object-event-behaviour model of the application domain that can be transformed into ECA rules of an underlying active (database) system.

"Active Databases: A Perspective" Umeshwar Dayal, Hewlett Packard Laboratories, Palo Alto, California, USA

Over the past ten years or so, active databases have matured to the point where there is now an active research community. The first research prototypes have been built. Some rudimentary functionality has made its way into products, and is reflected in emerging standards such as SQL3 and OMG. This is a good time to reflect on where the field should be headed. In my opinion, there has been altogether too much emphasis on rule models and languages, with a broad spectrum of execution semantics.
Instead of inventing yet others with more and more esoteric features, this community should turn its energies towards engineering concerns. There is a dearth of experience in building real applications using active databases. Much work needs to be done in developing methodologies and tools for the design and analysis of active database applications. As we push more semantics and functionality into the database system, where is the boundary between the application program and the database? How are the active capabilities made available through APIs and user interfaces? Active database systems are different from large-scale production rule systems or other kinds of expert systems (which might profit from active database support). What are the canonical active database applications? More work also needs to be done on architecture, implementation, and performance issues, and on defining benchmarks. The technology must enter the mainstream of DBMS practice, so that "active database systems" are no longer a curiosity; rather, all DBMSs must exhibit "active" capabilities. Finally, we must face the challenge that applications in the real world will be constructed and deployed in open, distributed computing environments that include a variety of components and services such as database managers, transaction managers, naming services, security services, workflow services, event monitoring and notification services, and so on. Some of the functionality currently bundled in DBMSs will be unbundled and provided as components and services, together with other functions not typically associated with DBMSs. Active DBMSs will have to become citizens of such environments if our technology is to become widely used in practice.

"An active object-oriented DBPL" Claudia Roncancio, Bull & University of Grenoble, France

We are concerned with the problem of writing a complete database application.
We consider it essential to provide the programmer with a powerful declarative query language as well as with a Turing-complete language for updates and complex calculations. We propose Paplom, which is an OO DBPL extended with active and passive rules. The language offers a smooth combination of an imperative language and a (declarative) deductive one. The model proposed includes classes and modules. Classes include, as usual, the structure and behaviour of objects as well as the "active behaviour" of the objects. The active behaviour is expressed as ECA rules which can react to any event concerning the objects (even their private part). Modules offer a means for structuring applications. A module includes global variables, operations (which are not targeted at a specific class) and active rules. The definition of active rules in classes as well as in modules makes it possible to organize rules according to the role they play. For instance, particular users can customize the "rule set" used by their application by defining those rules in the modules used by their applications. With our proposal we are not pushing the triggers out of the DB schema, but we are offering a means for defining database triggers and application triggers in a uniform way. We consider problems of supporting active rules in an OO context, such as those related to complex structures and inheritance. We also propose a new "activation mode" for triggers which supports a set-oriented execution of rules in the object context. The implementation of (Active) Paplom is in progress and our main goal is to provide good performance. Note: this work is done together with P. Dechamboux.

"Object Model Extensions for Active Design" Thomas Grotehen, Klaus R. Dittrich, University of Zürich, Swiss Bank Corporation

During OO software development, application specifications are expressed in terms of an object model. The "active" part of the specification may be expressed in a rule language like SAMOS.
It is not a trivial task to define such a specification, so conceptual object models are sometimes used as a tool to build a higher-level specification that is mapped to a concrete (logical) object model. Most of these conceptual models do not provide concepts that can express situations the system has to recognise and react to. We propose an extension for conceptual object models that can be used to express such situations and their connections to actions. The basic modelling concepts in our approach are event and condition nodes that can be connected to methods via "event vertices" (conjunction, disjunction, negation, sequence, ...). We prefer the "unary" approach: one tool - the object model; one product - the OO schema. Up to now we have not seen any reason why our concepts should be restricted to database modelling.

"The Architecture of the SAMOS Prototype" Stella Gatziu, Klaus R. Dittrich, Universität Zürich, Switzerland

We investigate the architecture of the active ooDBMS SAMOS. SAMOS provides a rule definition language as a means to specify ECA rules. Events may be specified in a primitive way (method, time, transaction and abstract events) or in a composite way using a set of event constructors such as sequence or negation. The SAMOS prototype consists of two blocks:
- an underlying object-oriented DBS (ObjectStore)
- an add-on layer on top of ObjectStore implementing the active functionality
The add-on layer consists of:
- an analyzer for compiling event and rule definitions
- a rule manager for the retrieval of information about event and rule definitions. In an object-oriented environment, rule and event definitions are represented as objects.
- a detector for primitive events. For example, time events are detected using the cron mechanism of UNIX, and method events are detected by recompilation of the implementation of the methods of interest.
- a detector for composite events.
In SAMOS we use the model of Coloured Petri Nets for the modelling and the detection of composite events.
- a rule execution component for condition evaluation and action execution. Using the transaction model offered by ObjectStore, we identify the problems and restrictions concerning, e.g., the support of the decoupled coupling mode.

“Some requirements to make active databases a viable technology” Alex Buchmann, Technische Hochschule Darmstadt, Germany

Basic active database capabilities are appearing in relational products. At the same time, the first active object-oriented prototypes are emerging. A variety of issues must be resolved to make active database technology useful to a wide community of users and to accelerate the development of new and more robust object-oriented active DBMS prototypes. The entry price for doing meaningful work in active object-oriented DB systems is rather high, since a stable OO platform is required. Therefore, an extensible platform into which individual groups can insert partial developments is required. We further require a reference architecture that can serve both as a way to describe active database systems in terms of the level of functionality offered, and to identify necessary interfaces to be provided by the DBMS vendors to layer active capabilities on top of existing DBMSs. As prototypes are emerging it becomes increasingly necessary to define yardsticks and benchmarks for testing. Given the difficulty in defining provable correctness criteria for full-fledged active DBMSs, we will depend on well-defined test suites. In order to make this technology widely acceptable to users, we require active DB design methodologies and tools. Unless we support the users early on, we run the risk of causing disillusionment with this potentially powerful technology.
“Active DBS and RTDBS: bringing both camps together” Holger Branding, Technische Hochschule Darmstadt, Germany

Many applications that are proposed as benefiting from active DBMS functionality need support, first, in modeling the intended temporal behavior and, second, in enforcing the modeled temporal behavior. I try to bring together the two camps of real-time DBS and active DBS. Both fields are rather new research areas; they lack even a common understanding of basic notions. Apart from these difficulties, there are some basic functionalities that turned out to be essential for building efficient, real working systems that are capable of supporting real applications. Combining the two fields involves, first, developing an RTDBMS with restricted functionality, i.e. a restricted data model, or reducing unpredictable behavior by restricting to a main-memory DBS, etc. In a second step, the restricted DBMS is enhanced with active capabilities. As in RTDBS, the functionality must be restricted to control complexity and predictability. It is essential to derive cost formulas in order to provide more predictable temporal behavior than can be found in such systems nowadays. In the active part, the event set to which rules react must be restricted. Both fields need the support of the underlying operating system, because consumption of time is due to the use of resources, namely CPU, main memory, and I/O.

“Designing an Active Database System” Jürgen Zimmermann, Technische Hochschule Darmstadt, Germany

In order to build an active DBMS we first tried to use an underlying object-oriented database system and had several negative experiences. One problem is the detection of method events. A typical solution is developing a preprocessor which wraps all methods. This is both inefficient and sometimes incorrect: when overridden methods call the original method in their body, two wrappers have to be passed, causing the creation of two events.
To avoid the problems of a layered architecture we use Texas Instruments’ Open OODB and have access to its sources. Now we are able to adapt both the transaction management and the method invocation mechanism to the requirements of an active DBMS. Another subject of the REACH project is achieving high parallelism inside the DBMS, namely in composing events and in firing rules. Therefore, we use the new operating system Solaris, offering threads which yield better performance results than conventional child processes. Parallel rule firing is realized by firing each rule in two subtransactions running in one Solaris thread. Since rule firing starts with a subtransaction for condition evaluation, which typically involves read-only methods, we are working on optimistic synchronization between sibling subtransactions. A rule’s second subtransaction executes the action associated with the rule. As a next step, to enable/offer different synchronizations, we will extend TI’s Open OODB with several transaction managers which provide the functionality required by an active DBMS.

“Active mediator systems in a heterogeneous environment” Thomas Kudraß, Technische Hochschule Darmstadt, Germany

A heterogeneous database system can be considered a well-suited application of active database capabilities. A federation of multiple database systems can be viewed as a space of distributed active objects. The task of a mediator within a federation is to enforce global integrity constraints as well as to control long-running activities spanning database boundaries. The necessity arises to cope with the problem of the autonomy of the component systems (e.g. design autonomy, execution autonomy). A promising attempt is to define autonomy in terms of the functions the systems provide at their interface.
It cannot be assumed that all participants of the federation offer all the hooks needed to allow the mediator to control every relevant state of them in order to guarantee global consistency; for example, the occurrence of local updates constitutes events that have to be monitored. A solution must be found for handling the conflict between consistency conditions that have to be enforced globally and the autonomy the local systems still preserve. So there is a motivation for the notion of weaker consistency requirements, which have to be expressed in multidatabase rules. The testbed to be implemented uses the active rule system of REACH based on the Open OODB database system with C++ as the canonical data model. The component systems comprise relational as well as object-oriented systems (Sybase, ObjectStore).

"An Algebraic Approach to Rule Analysis in Active Database Systems" Elena Baralis, Politecnico di Torino

While active database systems are very powerful, developing even small applications can be a difficult task, due to the unstructured behaviour and unpredictable nature of rule processing. During rule processing, rules can activate and deactivate each other, and the intermediate and final states of the database can depend on which rules are activated and executed in which order. It is highly beneficial if the rule programmer can predict in advance some aspects of rule behaviour. This can be achieved by providing a facility that statically analyzes a set of rules, before installing the rules in the database. Two important properties of rule behaviour are termination and confluence. A rule set is guaranteed to terminate if, for any database state and set of modifications, rule processing cannot continue forever. A rule set is confluent if, for any database state and set of modifications, the final database state after rule processing is independent of the order in which activated rules are executed.
We propose a generally applicable algorithm for determining when the action of one rule can affect the condition of another rule. This algorithm is useful for analyzing termination, since it can determine when one rule may activate another rule, and for analyzing confluence, since it can determine when the execution order of two rules is significant. Since we take an approach based on relational algebra, our method is applicable to most active database systems that use the relational model.
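A typical use of such a "can affect" analysis can be sketched as follows. This is a generic, hedged illustration (not the authors' algebraic algorithm): the pairwise results form a triggering graph with an edge from r1 to r2 whenever r1's action can affect r2's condition, and an acyclic triggering graph is a sufficient condition for termination. The rule names and the `can_affect` relation below are invented for the example.

```python
# Illustrative termination check over a triggering graph.
# Edge (r1, r2): the action of r1 can affect the condition of r2.
# If the graph is acyclic, rule processing is guaranteed to terminate.

def may_not_terminate(rules, can_affect):
    """Return True if the triggering graph contains a cycle,
    i.e. this (conservative) analysis cannot guarantee termination."""
    graph = {r: [s for s in rules if can_affect(r, s)] for r in rules}
    WHITE, GRAY, BLACK = 0, 1, 2          # DFS colouring for cycle detection
    color = {r: WHITE for r in rules}

    def dfs(r):
        color[r] = GRAY
        for s in graph[r]:
            if color[s] == GRAY:          # back edge: cycle found
                return True
            if color[s] == WHITE and dfs(s):
                return True
        color[r] = BLACK
        return False

    return any(color[r] == WHITE and dfs(r) for r in rules)

# Hypothetical rule set: r1's action touches data read by r2's condition
# and vice versa, so the two rules may keep activating each other.
edges = {("r1", "r2"), ("r2", "r1"), ("r2", "r3")}
cyclic = may_not_terminate(["r1", "r2", "r3"], lambda a, b: (a, b) in edges)
```

Note that the check is conservative, as the abstract implies: a cycle in the graph only means termination cannot be guaranteed by this analysis, not that rule processing actually diverges.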
Monitoring-Aware IDEs

Jos Winter, jos.winter@adyen.com, Adyen N.V., Amsterdam, The Netherlands
Mauricio Aniche, M.FinavaroAniche@tudelft.nl, Delft University of Technology, Delft, The Netherlands
Jürgen Cito, jcito@mit.edu, Massachusetts Institute of Technology, Cambridge, MA, USA
Arie van Deursen, Arie.VanDeursen@tudelft.nl, Delft University of Technology, Delft, The Netherlands

ABSTRACT

Engineering modern large-scale software requires software developers to not solely focus on writing code, but also to continuously examine monitoring data to reason about the dynamic behavior of their systems. These additional monitoring responsibilities for developers have only emerged recently, in the light of DevOps culture. Interestingly, software development activities happen mainly in the IDE, while reasoning about production monitoring happens in separate monitoring tools. We propose an approach that integrates monitoring signals into the development environment and workflow. We conjecture that an IDE with such a capability improves the performance of developers, as time spent continuously context switching from development to monitoring would be eliminated. This paper takes a first step towards understanding the benefits of a possible Monitoring-Aware IDE. We implemented a prototype of a Monitoring-Aware IDE, connected to the monitoring systems of Adyen, a large-scale payment company that performs intense monitoring in their software systems. Given our results, we firmly believe that Monitoring-Aware IDEs can play an essential role in improving how developers perform monitoring.

CCS CONCEPTS

• Software and its engineering → Integrated and visual development environments;

KEYWORDS

software engineering, devops, systems monitoring, runtime monitoring, Integrated Development Environment, IDE.
1 INTRODUCTION

Monitoring provides information about the runtime behavior of software in the form of logs and has been used to understand large-scale systems in production. The analysis of logs is a widespread practice that has been studied in many different contexts. By leveraging log data, researchers were able to help development teams with process mining [15, 29, 53], anomaly detection [9, 24, 28, 60, 61], passive learning [57], fault localization [38, 65], invariant inference [10], performance diagnosis [33, 49, 50, 62], online trace checking [3], and behavioral analysis [4, 43, 62]. However, understanding the runtime behavior of deployed software is an activity that has classically been associated with operations engineers. In recent years, practices and culture of development and operations have evolved to unify their responsibilities (often referred to as DevOps). Teams no longer solely focus on either development or operations; rather, these responsibilities are more and more intertwined and unified [6, 21, 46]. Monitoring is one fundamental activity in this congregation that enables a real unification of both sides. Developers see the analysis of monitoring information as part of their primary responsibilities, and perform it seamlessly with their development tasks.
Interestingly, monitoring mainly happens in monitoring tools (e.g., Kibana), whereas software development mainly happens in an Integrated Development Environment (IDE). The current situation leads to increased context switching [18] and split-attention effects that increase cognitive load [13]. If developers have to leave the IDE to do some other development-related task, then one might say that the integrated development environment has failed. Monitoring-Aware IDEs. We propose to integrate operational aspects into the workflow and context of software development tasks by developing the concept of Monitoring-Aware IDEs. If IDEs were to provide seamless support for monitoring activities, we hypothesize that developers would better perform development tasks, such as understanding the cause of a bug, or how a newly deployed version behaves in production. To validate the proposal, we implemented a prototype of a Monitoring-Aware IDE, integrated it into the workflow of 12 developers from 7 different teams, and evaluated it in a one-month field experiment at Adyen, a large-scale payment company which produces around 40 billion lines of log data per month. Adyen follows DevOps practices and performs intense monitoring in its software systems. Our results indicate that Monitoring-Aware IDEs can provide essential benefits in modern large-scale software development. Developers made repeated use of the monitoring features to perform various development activities they would not have performed without our approach. Moreover, the provided information supports their development tasks in different ways, such as helping them better understand how their software works, how stable and performant their implementation is, and even identify and fix bugs.
Finally, their overall perception is that, while a Monitoring-Aware IDE does not replace their existing monitoring systems entirely, it helps them in reducing cognitive load and saving time by avoiding constant context switches between monitoring tools and their IDE. The main contributions of this paper are: - A proposal outlining how Monitoring-Aware IDEs can support developers in better performing monitoring and DevOps by incorporating monitoring data into the workflow of working with source code (Section 3); - A 4-week field experiment that brings evidence on the usefulness of Monitoring-Aware IDEs to monitoring and DevOps teams (Sections 5 and 6). 2 BACKGROUND In this section, we describe existing related work in the field. More specifically, we dive into the DevOps movement, log analysis and monitoring techniques, as well as enhancements researchers have been proposing to IDEs. Next, we present Adyen, our industry partner (and our case study), and how they have been applying monitoring and DevOps within their development teams. We also explain why Adyen serves as a perfect case for this study. 2.1 Related Work The DevOps movement. Different people define DevOps in different yet similar ways. Hüttermann [32] defines DevOps as “practices that streamline the software delivery process, emphasizing the learning by streaming feedback from production to development and improving the cycle time.” DeGrandis [20] affirms that “The [DevOps] revolution in the making is a shift from a focus on separate departments working independently to an organization-wide collaboration – a systems thinking approach.” Walls [55] says that DevOps is a “cultural movement combined with a number of software related practices that enable rapid development.” Bass et al.
[6] define DevOps as “a set of practices intended to reduce the time between committing a change to a system and the change being placed into normal production, while ensuring high quality.” To Loukides [38], DevOps is about integrating the infrastructure and the development teams: “Rather than being isolated, they [infrastructure team] need to cooperate and collaborate with the developers who create the applications.” Indeed, the movement is becoming more and more popular among practitioners. A 2016 survey with 1,060 IT professionals [45] indicates that its adoption increased from 66% to 74%, especially in the enterprise world (in comparison with 2015). However, its adoption is still not as smooth as expected. Smeds et al. [47], after a literature review and interviews with experts, affirm that an important difficulty for its adoption in industry is related to its unclear definition and the company’s expected goals with the adoption. Monitoring tools in industry. A vast number of monitoring tools has originated in industry. Most tools display metrics (often extracted from information in logs) in dashboards that are customizable in different dimensions (e.g., visualization, groupings, alerts) and are searchable. Probably the most prominent open-source toolchain in the context of monitoring is the ELK stack1 (ElasticSearch, Logstash, Kibana), where logs from distributed services are collected by Logstash, stored in ElasticSearch, and visualized in Kibana. Another well-known open-source dashboard is Grafana2, which is mostly used to display time series for infrastructure and application metrics with an extensible plugin system. Commercial counterparts to these services include, for instance, Splunk3, Loggly4, DataDog5, and many more. The critique of the common dashboard solutions in current practice is that the amount of different, seemingly unrelated graphs is overwhelming, and it is hard to come to actionable insights [16].
Logging analysis and visualization. Log data is vastly rich, and thus, several analysis techniques have been proposed. Aiming at failure detection, Reidemeister et al. [44], based on previous logs, train a decision tree to detect recurrent failures. Similarly, Fronza et al. [23] use SVMs, Lin et al. [36] use clustering algorithms, and Bose and van der Aalst [12] exploit associative rule mining to discover failure patterns in event logs. While the above techniques are good at detecting previously known failures, others focus on detecting anomalies (i.e., failures not seen before). Clustering algorithms are commonly used for this purpose [34, 36]. Logs are also used to build models of the software system. Tools such as Synoptic [10] and DFASAT [30, 57] devise finite state machines that represent a software system, based on its logs. And given that logs are often ordered in a timely manner, related work has also explored temporal invariant inference [10, 41]. Finally, given that logs are often not easy to understand as they are, visualizations that aim to support reasoning about runtime behavior have also been proposed. Examples of such work are visual depictions to better understand performance issues [3, 11], to understand how the different components of a distributed system behave and/or relate to each other [1, 42, 63], and to visualize the different nodes of a cluster by means of a city landscape metaphor [22]. Augmenting existing IDEs. Work that is conceptually closest to our approach are development environments that augment source code with runtime information. Lieber et al. [35] augment JavaScript code in the debug view in the browser with runtime information on the call count of functions and asynchronous call trees to display how functions interact.
Other work focuses on augmenting method definitions in the IDE with in-situ visualizations of performance profiling information [7, 16, 17]. Hoffswell et al. [31] introduce different kinds of visualizations related to runtime information in the source code to improve program understanding. Lopez and van der Hoek [37] augmented IDEs to warn developers, on a line-by-line basis, about the volatility of the code they are working on. Our approach is the first to integrate information and traceability links from production logs into the source code view. This enables a more general-purpose approach to reasoning about production behavior that is guided by signals put in place by developers themselves (log statements). 2.2 Monitoring and DevOps All observations in this research are based on the teams that follow the DevOps model at Adyen, a large-scale payment company that provides services for more than 4,500 companies all around the world. Adyen had a transaction volume of $120 billion in 2017. The distributed software systems that run their entire business produced around 40 billion log lines solely in July 2018. Due to their scale and sensitive business market, monitoring is a vital activity at Adyen. Adyen follows DevOps practices as part of their culture, and the barriers between development and production have been getting smaller and smaller over the years. Developers of all teams are responsible for the monitoring of their systems and are supported by a dedicated monitoring application, whose focus is to build any customization a team might need to conduct better monitoring. Thus, at Adyen, monitoring is a vital task for all developers.

Footnotes: 1 https://www.elastic.co/webinars/introduction-elk-stack, 2 http://grafana.org/, 3 https://www.splunk.com/, 4 https://www.loggly.com/, 5 https://www.datadoghq.com
Due to their efforts on monitoring over the last years, we firmly believe that Adyen offers an exemplary place for software engineering researchers to study (and evolve) monitoring and DevOps practices and, for this research more specifically, to study the benefits of Monitoring-Aware IDEs. Adyen’s monitoring and DevOps practices. In Figure 1, we summarize Adyen’s monitoring and DevOps practices. The model contains ten practices (P1..P10) grouped in six broad themes. Throughout the following text, we use circles to connect the model in the Figure to the explaining text, e.g., (P1) refers to practice number 1. At Adyen, developers are not only responsible for testing their features before release, but also for following up on and monitoring how their systems behave when released to production (P1). Does it work as expected? Does it meet the performance requirements? Given how hard it is to predict how a large-scale software system will behave in production, monitoring takes a major role during release deployments. Even with short development cycles, large portions of new source code are released continuously to production. During a release, developers intensively focus their monitoring efforts on how their newly implemented features behave in production (P2). Log data from the previous versions is often used as a baseline. Exceptions that never happened before, particularly in new source code, or exceptions that start to happen more often than in previous versions, often trigger alarms to developers, who then focus on understanding why that is happening. Interestingly, developers not only care about exceptions in their software systems, but also about how their systems impact the overall business, e.g., is my system bringing the anticipated return on investment (ROI) to my company? Developers often work closely with data science teams, which also leverage the richness of the log data to extract insightful business knowledge.
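The release-monitoring practice described above, in which exceptions that are new in a release, or markedly more frequent than in the baseline version, trigger alarms, can be sketched as follows. This is a minimal illustration under assumed inputs (plain lists of exception names), not Adyen's actual alerting logic; the growth threshold is a hypothetical parameter.

```python
from collections import Counter

def release_alerts(baseline_logs, release_logs, growth_factor=3.0):
    """Flag exceptions that are new in a release or occur markedly more
    often than in the baseline version (threshold is hypothetical)."""
    baseline = Counter(baseline_logs)
    release = Counter(release_logs)
    alerts = []
    for exc, count in release.items():
        if exc not in baseline:
            alerts.append((exc, "new in this release"))
        elif count > growth_factor * baseline[exc]:
            alerts.append((exc, f"rose from {baseline[exc]} to {count}"))
    return alerts
```

A developer would run such a comparison per deployment window, using the log window of the previously released version as the baseline.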
It is not uncommon for developers to have tasks in their backlog that aim at better supporting data science teams (P3), e.g., by adding more information to existing log statements. In fact, given that developers try as much as possible to log any useful information, the amount of log statement lines in the source code is significant. Adyen, more specifically, has around 30k log statements throughout its source code base. In other words, with log statements playing an essential role in software systems, maintaining logging code (e.g., improving or removing log statements) is a recurrent activity (P4). Developers make use of several tools to support their constant monitoring activities. These tools are vital to helping them deal with the large-scale nature of their systems. Besides the fact that these systems produce large amounts of log data, they are also often distributed, which requires teams to make use of existing log storage, aggregation, and visualization tools (P10), such as the ELK stack (see Section 2.1), or even to build their own tools and automated alarms (P9). Moreover, developers also monitor their entire environments (P3), such as the health of their Linux servers, databases, and servers. Due to the complexity of their software systems, monitoring data is also fundamental for developers to identify functional, stability, and performance (P5) issues. Again, monitoring data provides developers not only with unexpected and new exceptions, but also with information that helps them debug and track the problem. When it comes to performance issues, developers often measure the time it takes between log messages as an indication of a possible problem. Moreover, developers also use monitoring data as a way to trace and comprehend complex business processes (P5). In practice, no developer is able to understand every single detail of the entire business completely.
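Measuring the time between two log messages that bracket an operation, as the performance practice above describes, can be sketched like this. The correlation-id field and the message texts are illustrative assumptions; the paper does not specify Adyen's log format.

```python
from datetime import datetime

def inter_message_delays(entries, start_msg, end_msg):
    """Seconds elapsed between a start and an end log message,
    paired per correlation id (field names are hypothetical)."""
    starts, delays = {}, []
    for ts, corr_id, msg in entries:
        if msg == start_msg:
            starts[corr_id] = ts          # remember when the operation began
        elif msg == end_msg and corr_id in starts:
            delays.append((ts - starts.pop(corr_id)).total_seconds())
    return delays
```

An unusually large delay between, say, a "payment started" and a "payment settled" message would be exactly the kind of performance signal the developers describe watching for.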
A developer might learn that payment transactions always go first to the Risk Management system, and then later to the Reporting system, by reading log data.

3 MONITORING-AWARE IDES

In modern teams following a DevOps model, developers go back and forth between monitoring data and the source code to reason about their software systems. Even with the current state-of-the-art monitoring and IDE/development tools, developers still struggle with connecting the two worlds. The current situation leads to increased context switching [18] and split-attention effects [13] that increase cognitive load. We theorize that, for developers to be better equipped to deal with monitoring and DevOps practices, IDEs and monitoring systems should be connected (giving rise to what we will call Monitoring-Aware IDEs). A Monitoring-Aware IDE provides developers with an integrated view of both the implementation of their software systems and monitoring information. Developers need not leave their IDEs to know whether an exception that they just decided to throw happened ten times in the last week, or that the time between two log statements has been increasing continually. Based on what we observe at Adyen, we conjecture that such an IDE should offer: 1. **Timely Integrated Feedback**: Monitoring data, e.g., how often a log statement or an exception happens in production, should be timely available in the Monitoring-Aware IDE, so that developers can make data-driven decisions based on the most recent data (and without the need of opening the monitoring system for that). 2. **Traceability**: There should be a direct connection/link between the monitoring information and the source code, in case one tool does not contain the required information at that moment. The source of monitoring information (e.g., a log statement or an exception) can be found based on monitoring information, and monitoring information can be found based on its source. 3.
**Search Capability**: Monitoring information should be searchable in the IDE, e.g., the classes with the highest number of exceptions. Figure 2: Interaction design of a Monitoring-Aware IDE. Numbers on the left bar indicate how often a log statement or exception happens in production. Developers can ask for more detailed monitoring information (box on the right) or, as a last resort, go to the real monitoring system and observe the full data there. Finally, search options appear at the bottom of the IDE (e.g., filter by class name, order by exception frequency).

4 MONITORING-AWARE IDE PROTOTYPE

To empirically study our proposal, we built a prototype of a Monitoring-Aware IDE. We set the following goals for the prototype: 1. it should deliver enough value to Adyen developers, so that they would benefit from this study; 2. it should be as non-obtrusive as possible to Adyen developers, so that they would not feel the burden of using an “unknown” tool; 3. it should deliver enough features so that we, as researchers, could empirically validate our Monitoring-Aware IDEs proposal. We highlight the fact that this tool was developed in partnership with Adyen, incorporating iterative feedback (from February to June 2018). Throughout its five months of development, our prototype received feedback from several Adyen developers after beta versions. The first three authors of the paper discussed all their suggestions and whether they were useful or essential for the prototype. In this paper, we report the final version of the prototype. Our tool collects monitoring data that is currently available in Adyen’s monitoring systems (ELK stack). Adyen allowed us to collect new data from their monitoring systems every 15 seconds, which gives near real-time information. We trace back the origin of a log message to its original log statement in the source code using heuristics (Adyen does not log the class name and line number that originate a message, due to performance reasons).
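The heuristic tracing just mentioned can be sketched as below: each log-statement template found in the source is compiled into a regular expression and matched against incoming log messages. The `%s`/`%d` placeholder syntax and the file:line locations are illustrative assumptions; this is a simplified rendering of the regex-matching technique of Xu et al. [61], not Adyen's implementation.

```python
import re

def statement_to_regex(template):
    """Turn a log-statement template such as 'payment %s failed with code %d'
    into a regex matching the messages it can produce (simplified)."""
    out = []
    for part in re.split(r"(%s|%d)", template):
        if part == "%s":
            out.append(r"(.+?)")         # any string argument
        elif part == "%d":
            out.append(r"(\d+)")         # any integer argument
        else:
            out.append(re.escape(part))  # literal template text
    return re.compile("^" + "".join(out) + "$")

def trace_message(message, statements):
    """Return source locations of statements whose regex matches the message."""
    return [loc for loc, tmpl in statements
            if statement_to_regex(tmpl).match(message)]
```

In practice the regexes would be precompiled once per source scan, and ambiguous matches (one message matched by several templates) would need a tie-breaking heuristic.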
More specifically, we generate regular expressions based on every log statement of the source code and match them against the log messages that come from the monitoring system, following the work of Xu et al. [61]. We observed that the link, as also reported by Xu et al., indeed happens with high accuracy (97% in an evaluation on 100k log messages), which implies that our tool is able to accurately show monitoring information in the source code. Finally, we show the monitoring information inside IntelliJ, the Java IDE used at Adyen, by means of a plugin that we developed. We discuss the details of the prototype’s architecture in Section 7.1. In Figure 2, we present an interaction design of how the tool presents information to developers. The tool supports all the requirements we set out in Section 3. Whenever developers open any class in their source code, our tool shows monitoring information near all the log statements and thrown exceptions. The information is continuously extracted from Elasticsearch, the underlying document database Adyen uses to store the monitoring data of their production systems. The numbers near every log statement in the left box show how often they have been triggered in the last month. When developers hover with their mouse, our tool shows a summary of monitoring information about that statement (currently, how often that statement was executed in the last hour, 24 hours, and month). To facilitate the switch to the monitoring tooling with more detailed information, we also provide a direct traceability link to the dashboard of that specific class and log statement. Finally, the tool also provides developers with search options, such as filtering by class name and ordering by exception frequency.

5 FIELD EXPERIMENT

In the remainder of this paper, we take the first step towards empirically understanding the value of the Monitoring-Aware IDEs we posed in the previous section.
To that aim, we propose three research questions:

RQ1. How do developers interact with a Monitoring-Aware IDE?
RQ2. What impact does a Monitoring-Aware IDE bring to software development teams?
RQ3. What are the developers’ perceptions about the usefulness of a Monitoring-Aware IDE to support their monitoring practices?

Given the complexity of simulating an environment that requires constant monitoring, such as the likes of Adyen, we opted for a field experiment. According to Stol and Fitzgerald [48], a field experiment refers to an experimental study conducted in a natural setting with a high degree of realism. In this strategy, the researcher manipulates an effect of some kind. Also according to Stol and Fitzgerald, the natural study setting is realistic, but subject to confounding factors that can limit the precision of measurement. To that aim, we make use of quantitative and qualitative data that we collected after providing 12 developers from Adyen with a Monitoring-Aware IDE prototype for four weeks. In summary, our field experiment happened as follows: 1. We recruited 12 developers from Adyen (the selection criteria are explained in Section 5.2), installed the prototype in their IDEs, and gave them a short tutorial on what the prototype does and how it works; 2. The 12 participants used our Monitoring-Aware IDE prototype for four weeks to perform their daily tasks; 3. We collected information about the usage of the prototype, automatically via telemetry; 4. We collected information about the impact of the tool through a weekly survey; 5. At the end of the four weeks, we performed a final survey with the 12 participants to understand their overall perception of the benefits of a Monitoring-Aware IDE.

5.1 Methodology

Data collection and analysis.
We added instrumentation to our prototype that collects the following interactions between the developer and the tool: (1) when the developer opens a file containing source code for which monitoring data exists, (2) when the developer asks for detailed monitoring information on a specific line of code, as well as how much time they spend on it, and (3) when the developer opts to navigate to the real monitoring system. To understand whether and how the IDE impacted developers in their development tasks (RQ2), we surveyed the participants weekly, asking about their specific interactions with the tool and what actions they took. We created surveys tailored for each developer. Based on all the usage data collected from our prototype during that week, we showed a list of all classes in which participants observed any monitoring information during that week. For each of these classes, participants had to answer questions about in what way the tool impacted (or did not impact) their work. We provided participants with a list of possible follow-up actions that one could have taken after having analyzed the monitoring information. We devised this list of consequences in collaboration with Adyen developers (using their monitoring and DevOps practices as a basis, see Section 2.2 and Figure 1). We also gave developers a free-text box where they could provide any other action. We iteratively monitored their open answers to improve our list. We also provided a “did not perform any action” option, so that participants would not feel obliged to choose any consequence. The final list can be divided into three categories: observations, code changes, and logging code improvements. • Observations: Insights into the behavior of their systems, based on monitoring data: (O1) Identified a bug, (O2) Identified a performance issue, (O3) Identified a security issue, (O4) Identified an issue in the log code, (O5) Understood the business process, and (O6) Understood the stability of the implementation.
• Code changes: Production-code improvements, based on monitoring data: (I1) Fixed a bug, (I2) Improved code quality (refactoring), (I3) Improved code performance, (I4) Improved code security, and (I5) Implemented new functionality. • Logging code improvements: Improvements to the log code based on monitoring data: (L1) Improved log message, (L2) Changed log severity, (L3) Removed log line, and (L4) Added log line. Although participants may have worked in the same class and asked for its detailed monitoring information (maybe for different purposes) multiple times during the week, that class appeared only once in that week’s survey. We made this decision for two reasons: 1) we do not believe participants would have an accurate perception and memory for such a fine-grained survey; 2) the survey would be too extensive, as we conjectured that participants would interact with a large number of classes a week. Nevertheless, we allowed participants to choose multiple actions for the same class, which would enable them to express multiple actions they might have taken in that class during that entire week. (We cannot show an actual screenshot of the tool being used, as it would reveal proprietary information.) In addition, some of the possible interactions with our tool cannot be automatically collected by our prototype (e.g., we have no data to infer whether participants looked at the number we show in front of any log statement). Thus, at the end of the survey, we asked them whether the tool helped (or did not help) in any way that we did not ask before. **Post-questionnaire.** Finally, with the goal of augmenting and explaining the data we obtained by means of the weekly surveys and the prototype, we asked participants to answer an open questionnaire at the end of the four weeks (P1, P6, and P7 were unavailable for the questionnaire). Questions were based on the results we had obtained until that moment.
The questionnaire contained open questions about both their usage of the tool as well as the impact the tool had on their daily jobs. More specifically, about the tool usage, we asked: 1. Did you look at the monitoring data we provide at the left bar of your IDE? In your opinion, how important and/or useful are they? 2. We noticed that you went to the external monitoring while using our tool. Why did you go there? Concerning the impact of the tool, we asked the following two questions for each of the five most perceived benefits (represented by <X> in the following questions): 1. How does the tool help you in doing <X>? 2. How did you perform <X> before having a Monitoring-Aware IDE? What are the differences? Note that we use this post-questionnaire also as a way to collect perceptions on the comparison between using and not using a Monitoring-Aware IDE, given that establishing a control group is not possible in the context of our study. We use the questionnaire as a way to mitigate this possible threat, which we discuss in detail in Section 7.2. **Data analysis.** We applied descriptive statistics to all quantitative data we collected (i.e., usage data coming from the prototype and survey answers). We analyzed the post-questionnaire data using the following procedure: 1. For each of the questions in the questionnaire, we grouped similar answers into high-level themes; 2. Whenever a new theme was created, we revisited all the previous answers to that question and evaluated whether they would better fit the new theme; 3. We stopped the process when there were no more themes to create. The first two authors were involved in the coding of the data and in deriving higher-level themes. We use the high-level themes as main topics of discussion in our Results section. **Ethical concerns.** We do not collect sensitive or private information from the developers or from Adyen in any of the steps of our field experiment.
All the participants were aware of all the data being collected before joining the study. Besides, this field experiment was also approved by the Ethics Committee of Delft University of Technology. ## 5.2 Participants We invited 12 developers (from 7 different teams) to use our prototype for four weeks. We applied convenience sampling to find the 12 participants of our study. We made a general announcement at Adyen’s internal chat application explaining our study and prototype and asked for participants. All participants had to pass the following criteria: (1) more than one year of experience as a software developer, (2) more than six months of experience at Adyen, and (3) a frequent user of Adyen’s monitoring systems. We show participants’ profiles in Table 1. We asked participants to perform their regular development tasks using our prototype. Before the field experiment, we gave participants some time to try out the tool and learn how to use it. We highlight that, during these four weeks, we did not force or require developers to use our tool in any situation, as we wanted to observe their real-world behavior. ## 6 RESULTS ### 6.1 RQ1: How do developers interact with a Monitoring-Aware IDE? In Figures 3a and 3b, we show how much each participant interacted with the monitoring features of our Monitoring-Aware IDE. In the four weeks, developers opened 1,249 files that contained monitoring information, which represents 14% of all the 8,958 files opened throughout the four weeks. Inside these files, the IDE displayed data about 4,465 log statements. 
<table> <thead> <tr> <th>Team</th> <th>Participant</th> <th>Development Experience (in years)</th> <th>Experience at Adyen (in years)</th> </tr> </thead> <tbody> <tr> <td>A</td> <td>P1</td> <td>1.5</td> <td>0.5</td> </tr> <tr> <td>B</td> <td>P2</td> <td>4.5</td> <td>3</td> </tr> <tr> <td>C</td> <td>P4</td> <td>3</td> <td>2</td> </tr> <tr> <td>D</td> <td>P6</td> <td>5</td> <td>0.5</td> </tr> <tr> <td>E</td> <td>P7</td> <td>8</td> <td>4</td> </tr> <tr> <td>F</td> <td>P12</td> <td>7</td> <td>0.5</td> </tr> <tr> <td>G</td> <td>P5</td> <td>7</td> <td>1</td> </tr> <tr> <td></td> <td></td> <td>8</td> <td>2</td> </tr> </tbody> </table>

Table 1: Profile of the participants in our study. Participants are ordered according to the number of interactions with the tool (P1 interacted the most, P12 interacted the least).

According to our post-questionnaire, the quick summary we provide near every log statement (i.e., the number of occurrences of that log statement in the last month, left bar in Figure 2) was perceived as useful by developers: such information enabled them to quickly observe whether there was any unexpected activity in that part of the system (P2, P12) and whether these problems were urgent (P3, P4, P5, P8, P9, P11). We observed that developers mostly focused on whether the numbers displayed were “out of expected ranges”, e.g., near zero or very high numbers. P2: “What matters to me is mostly if the number is zero or not. If it’s not zero and very high (e.g., 30K), I can tend to ignore it as it sounds like an ‘acceptable’ warning. If it’s a low number higher than 0 (e.g., 40) I would immediately like to check what’s going on. In this case, the actual number was not really important, I was just checking whether the count was higher than 0”.
On 109 occasions, developers asked for more detailed monitoring information (i.e., the periodic distribution of the times that log statement appeared in the log data), either directly in the Monitoring-Aware IDE itself (67 times) or by visiting the monitoring tool using the link we provide (42 times). According to the post-questionnaire, developers also visited the actual monitoring tool to retrieve additional, more detailed information about the problem they were investigating, e.g., the stack trace of the problem (P4, P11), the values of certain variables (P3, P5, P12), and the log messages that happened before the error under investigation (P9). Interestingly, we observed that, at Adyen, developers have ownership of the features they build. Specific teams are responsible for their features, including their monitoring. This behavior can also be observed in our data. P12: “I myself go back to things I worked on from time to time as well.” We observed that monitoring the same class over time is a recurrent task: 50.46% of all interactions are part of a series of interactions with the same class in different weeks. In the post-questionnaire, when presented with these numbers, developers affirmed that recurrent monitoring is common due to the size of their systems and of the features they commonly build (P3, P5, P12), and that, due to weekly deployments, they often go back to see whether their features are still working.

6.2 RQ2: What impact does a Monitoring-Aware IDE bring to software development teams?

Together, participants completed 29 weekly surveys (out of 48 possible). Developers informed us that, on 45 occasions, the usage of our Monitoring-Aware IDE had a positive impact on their software systems, which we show in Figure 4. We observe that developers took meaningful actions after observing monitoring data. 9 out of the 12 participants (P1-P9) had a positive consequence of using a Monitoring-Aware IDE.
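As an aside, the 50.46% recurrence figure reported for RQ1 can be computed along these lines, with interaction records simplified to hypothetical (class, week) pairs: an interaction counts as recurrent when its class was also visited in at least one other week.

```python
from collections import defaultdict

def recurrent_fraction(interactions):
    """Fraction of (class, week) interactions whose class was also
    visited in some other week (a simplified reading of a 'series')."""
    weeks_per_class = defaultdict(set)
    for cls, week in interactions:
        weeks_per_class[cls].add(week)
    recurrent = sum(1 for cls, _ in interactions
                    if len(weeks_per_class[cls]) > 1)
    return recurrent / len(interactions)
```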
We notice that the three participants who did not observe any positive effects (P10-P12) were the ones with the lowest number of interactions with our tool (Figure 3b). There is a strong correlation between asking for detailed information and being positively impacted by our tool (Pearson correlation = 0.85, p-value = 0.001). Understanding the business process through monitoring was the most common consequence of using our Monitoring-Aware IDE (15 times out of 45, or 33%). Moreover, understanding performance issues (5 times, 11%), as well as the stability of an implementation (9 times, 20%), were also common consequences of using our Monitoring-Aware IDE. Developers credited a few bug identification and bug fixing activities in their software systems (3 and 2 times, respectively) to our Monitoring-Aware IDE. Although identifying and fixing bugs did not happen as often as the understanding activities, we note that Adyen already has mature software and, thus, we would not expect developers to identify several bugs that often; moreover, any bug found has a significant positive impact on their software. Finally, monitoring information also helps developers in maintaining... On the other hand, none of the reported positive effects concerned security. P11, specifically, said that they would need to write log statements whose sole purpose is to monitor security, which our tool would then help monitor. Finally, P2, P3, P8, and P11 pointed out that Adyen already has secure software and security issues do not happen often (and thus the likelihood of such an issue happening during our field experiment was too small).
We indeed conjecture that providing developers with traditional monitoring data only is not enough for them to observe security issues. A follow-up step for this work would be to study how security-related aspects would fit in a Monitoring-Aware IDE.

### 6.3 RQ3: What are the developers’ perceptions about the usefulness of a Monitoring-Aware IDE to support their monitoring practices?

We observed that developers spent a significant amount of time going back and forth between their monitoring tools and their IDEs. Our overall perception was that this context switching was not productive. These observations were corroborated in our post-questionnaire. Developers affirmed that our Monitoring-Aware IDE did not replace their monitoring systems, but that it helped them save time and reduce cognitive load compared to the way they used to perform the same monitoring tasks before our tool. Several of our participants affirmed to spending less time querying their monitoring systems (P2, P3, P4, P8). In the post-questionnaire, developers also perceived other benefits in Monitoring-Aware IDEs that go beyond saving time (corroborating the results of RQ2). The instant (near) real-time feedback and the timely observations that our IDE offers enable developers to quickly identify possible bugs or bottlenecks (P3, P4, P5, P9, P11, P12). As we stated before, developers pay a lot of attention to the frequency of a log statement. Developers seem to implicitly formulate hypotheses about behavior in production. The frequency allows them to immediately make judgments about their hypotheses, i.e., whether this number seems to be “out of place” (e.g., near 0, or very large). P5: “An error or warning on its own doesn’t indicate a bug, but the number of time it gets triggered might. That’s why the tool is useful, to identify them.” P5 also provided us with a concrete example of how he was able to track a performance bug.
P5: “It helped me find a situation where data had to be loaded explicitly, while it should have been preloaded.” Developers also see a positive impact in having monitoring data and logging code together (P2, P3, P5, P8). P2: “I still use Kibana as much as I used it before. I do like however the easy navigation from a log statement in IntelliJ to Kibana.” P3: “[Kibana] Requires a lot of manual work (writing query) for the other tools to actually notice errors that happen in a class that you work in.” Automatically establishing traceability by linking the log message to its actual log statement, as well as not having to query the monitoring tool, also helps developers follow the flow of the source code more productively. P8: “Instead of having to follow the flow of the code by changing parameters on a Kibana search, the faster interaction with the plugin makes navigation smoother.” P5: “Now I don’t have to select a constant string from the log statement and hope to find it in the logs. Also I know earlier whether it is worth investigating further or not.” Finally, P8 also adds that the tool reduces his amount of context switching and saves time when communicating about an error. P8: “If someone tells me about an error, I can find it in code easily [and] then find all related log instances”

Figure 4: How our Monitoring-Aware IDE impacted our developers (N=45, 12 participants).

### 7 DISCUSSION

In the following, we discuss several challenges of building Monitoring-Aware IDEs, and how we mitigate possible threats to the validity of this study.

### 7.1 Building Monitoring-Aware IDEs

The Architecture of a Monitoring-Aware IDE. Designing such an IDE is, from an architectural point of view, worth discussing. Monitoring data can be extremely large (as with our industry partner) and any (local) data analysis might take too long, or even crash the IDE. Thus, Monitoring-Aware IDEs should be designed with scalability in mind.
Our prototype has been shown to be scalable, and thus, we dedicate the next paragraphs to describing our architectural decisions. As we show in Figure 5, the monitoring data aggregator is a large process that runs on a separate server. It is where most of the computationally expensive work (e.g., parsing log data and generating templates, matching the log data with its original log statement, updating counters, pulling up-to-date source code and refreshing templates) happens. The Monitoring-Aware IDE is implemented as a plugin on top of an existing IDE, such as IntelliJ. The plugin mostly queries data from the aggregator and shows it to the developer. No heavy calculations happen in the IDE, which means developers do not suffer from possible slowness. On the other hand, we still see performance improvements to be made. Our current prototype queries Adyen’s monitoring systems every 15 seconds for new log data. Due to Adyen’s weekly release cycles, we also re-generate the regular expressions from their source code every week (and not at every new commit, as the generation process currently takes 35 minutes). We refresh the monitoring information in the developers’ IDEs whenever they open a class. While this currently gives near real-time, up-to-date information to developers, we see the following steps as required to build a state-of-the-art real-time Monitoring-Aware IDE:

1. A streaming system that would stream log data as they come would be needed. Current industry solutions, like the ELK stack, offer such streaming.

2. The monitoring data aggregator would have to be able to handle the vast amount of regular expression matching that would happen for each log message. Matching regular expressions is neither a cheap nor a fast operation, particularly in languages like Java, which implements a Nondeterministic Finite Automaton (NFA) backtracking algorithm [19]. We see parallelization as a future requirement.

3.
The monitoring data aggregator would have to generate new regular expressions from the source code every time a new deployment happens. Our current regular expression generator takes around 35 minutes to run on a codebase with a few million lines of code (we are not allowed to disclose the total LOC of their systems), and it can take even longer on larger codebases.

4. The IDE and the monitoring aggregator server would have to periodically communicate with each other, so that the IDE always has up-to-date data. The communication should happen in a way that developers do not notice any delays in their IDEs.

The Importance of Logging Code. It is interesting to notice how important the quality of the logging code is, and how much developers monitored and improved its quality. Throughout our study, developers fixed issues in, added, and removed logging code. The quality of log lines is indeed important, and researchers have been working on logging best practices. Fu et al. [25], for example, studied common logging practices, especially focusing on where in the source code developers log. They conclude that common logging practices can be used to partially automate the logging process. Zhu et al. [64] implemented a tool which learns common logging practices and uses them to indicate positions that can be improved by adding a log statement. Chen and Jiang [14] studied anti-patterns, i.e., recurring mistakes in logging code that may hinder the understanding and maintainability of log statements. Therefore, given that developers are now quite used to using static analysis tools (or linters) to spot bugs and maintenance issues [8, 51, 52], we suggest that tool makers start incorporating such log code quality measures in their linters. As an orthogonal aspect, Adyen uses Log4j, the most popular Java logging framework.
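The regular-expression generation and matching described in this section can be sketched minimally as follows. The `{}` placeholder syntax mirrors Log4j-style parameterized messages; the statement IDs and message templates are invented for illustration, and this is not Adyen's actual generator.

```python
import re

def template_to_regex(template: str) -> "re.Pattern":
    # Log4j-style messages use "{}" placeholders. Escape the literal
    # parts and turn each placeholder into a non-greedy wildcard so the
    # resulting pattern matches any concrete log line that the log
    # statement could have produced.
    parts = template.split("{}")
    pattern = "(.*?)".join(re.escape(p) for p in parts)
    return re.compile("^" + pattern + "$")

# Hypothetical log statements extracted from source code.
STATEMENTS = {
    "auth-fail": template_to_regex(
        "Authorisation failed for merchant {} after {} ms"),
    "payment-settled": template_to_regex("Payment {} settled"),
}

def match_statement(log_line: str):
    # Linear scan over all known templates. A production aggregator
    # would need indexing and parallel matching to keep up with the
    # log volume (and to sidestep Java's NFA backtracking cost).
    for stmt_id, pattern in STATEMENTS.items():
        if pattern.match(log_line):
            return stmt_id
    return None
```

Incrementing a per-statement counter on each successful match is then enough to produce the occurrence summaries the IDE plugin displays.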
Given their scale and the number of requests per second their servers receive, Adyen developers cannot store the line number of the log statement that originates a log line that one sees in the monitoring tool. This is why we use Xu et al.’s heuristic [61] to link the log line back to its originating log statement. However, although the heuristic has worked well in our setting, our developers had to do a fair amount of implementation work to adapt it to Adyen’s code style. From a practical point of view, we see, as future work, logging frameworks being able to log meta-information (e.g., class name, line number) at reduced cost.

Custom-made Monitoring-Aware IDEs. Adyen uses Elasticsearch and Kibana dashboards to monitor their systems. We observed that developers pay great attention to the number and type of exceptions occurring in production, as well as to how the (new) code they wrote is behaving. The features of the Monitoring-Aware IDE prototype we study in this paper were based on these observations. However, developers at a different company may use monitoring systems in a different way, e.g., with customized metrics or analyses. Monitoring-Aware IDEs should also provide the extensive flexibility that current monitoring tools offer to developers. This means that the perfect IDE for one team might be different from the one for another team. This raises interesting points for IDE makers: how to make a monitoring feature that is generic enough for most developers to use, but customizable enough so that developers can obtain all the benefits that their current monitoring systems offer?

Connected IDEs. We bring to attention the fact that we are used to seeing IDEs as standalone tools. After installation, they tend not to require any connections with the external world, and developers can use them even without a network connection. In a world where IDEs are strongly connected with monitoring, both worlds should talk to each other.
IDEs should not be standalone tools anymore. Researchers have indeed been studying cloud-based IDEs [2, 27, 54, 56, 59], and companies have been developing them (e.g., Amazon’s Cloud9). Cloud-based IDEs eliminate any need for specific hardware or operating systems, and try to increase collaboration and coding among developers. We argue that the ideas of cloud-based IDEs are in line with Monitoring-Aware IDEs. We conjecture that the fact that cloud-based IDEs naturally exist in a cloud environment would facilitate the development of the monitoring features we suggest in this paper. Fylaktopoulos et al. [26] noticed that runtime monitoring (or auditing, as the authors call it in their paper) is still an area not yet explored in such IDEs. The authors discuss how developers are currently required to build their own debugging and auditing tools outside of IDEs. We suggest that researchers explore the connection between cloud-based and Monitoring-Aware IDEs.

### 7.2 Threats to Validity

Internal Validity. (1) We use our prototype as a proxy to understand the impact of a Monitoring-Aware IDE on software development teams. As we present in Section 5, our Monitoring-Aware IDE prototype contains features that we derived from Adyen’s monitoring and DevOps practices (Section 2.2). We do not claim that our prototype fully represents and/or contains all possible features of an ideal Monitoring-Aware IDE. We consider, nevertheless, our prototype sufficient to provide initial evidence that such an IDE can benefit developers; (2) Participants P1, P6, and P7 were not available during the post-questionnaire. Nevertheless, we do not believe this affects our conclusions in any way, given that the answers of all other participants clearly converged; (3) We did not have an explicitly controlled baseline in our field experiment, as that would be impractical in Adyen’s real-world setting.
Instead, we explicitly collected data about the developers’ perceptions of using and not using a Monitoring-Aware IDE in the final questionnaire, which enriched our analysis. We deem this setting appropriate given our goal to collect qualitative insights into how developers interact with our approach in their natural workflow. As future work, we plan to replicate our study in a more controlled setting, now that we have a better insight into what can/should be used as independent and dependent variables.

External Validity. This entire research was conducted at Adyen, a large-scale payment company that deals with large amounts of sensitive data, produces large amounts of log data, and sees monitoring as a fundamental activity. Although we diversified our field experiment with developers from seven different teams that represent various kinds of development contexts, we cannot claim any generalization. However, given the size, scale, and importance of the software built by Adyen, we believe this idea is worthy of further investigation.

### 8 CONCLUSIONS

Software developers reason about the behavior of large-scale software systems in production by examining log data in external monitoring tools. However, most of their software development activity happens in the source code view in the IDE. Leaving their development workflow in the IDE to understand production software behavior leads to increased context switching and split-attention effects that increase cognitive load. We propose to unify the development and monitoring contexts through a new concept of Monitoring-Aware IDEs. We integrate monitoring aspects into the workflow and context of software development tasks by incorporating frequency information on log statements into the source code view of an IDE. We implement this concept as an IntelliJ plugin and conduct a one-month field experiment with 12 developers in a large company, Adyen.
Developers using our approach in the field experiment reported that they were able to better understand business processes, identify performance issues and functional bugs, improve code quality, and better maintain their logging code. We firmly believe that Monitoring-Aware IDEs play an essential role in improving how developers interact with monitoring data about production behavior and act on it during development.
Real Challenges in Mobile App Development

Mona Erfani Joorabchi, Ali Mesbah, Philippe Kruchten
University of British Columbia, Vancouver, BC, Canada
{merfani, amesbah, pbk}@ece.ubc.ca

Abstract—Context: Mobile app development is a relatively new phenomenon that is increasing rapidly due to the ubiquity and popularity of smartphones among end-users. Objective: The goal of our study is to gain an understanding of the main challenges developers face in practice when they build apps for different mobile devices. Method: We conducted a qualitative study, following a Grounded Theory approach, in which we interviewed 12 senior mobile developers from 9 different companies, followed by a semi-structured survey with 188 respondents from the mobile development community. Results: The outcome is an overview of the current challenges faced by mobile developers in practice, such as developing apps across multiple platforms, a lack of robust monitoring, analysis, and testing tools, and emulators that are slow or miss many features of mobile devices. Conclusion: Based on our findings on current practices and challenges, we highlight areas that require more attention from the research and development community.

Index Terms—mobile app development; mobile platforms; qualitative study

I. INTRODUCTION

The ubiquity and popularity of smartphones among end-users has increasingly drawn software developers’ attention over the last few years. There are currently around 800,000 mobile apps on Apple’s AppStore [1] (38% of market share [2]), 650,000 on Android Market [3] (52%), 120,000 on Windows Marketplace [4] (3%), and 100,000 on Blackberry AppWorld [5] (6%). Recent estimates indicate that by 2015 over 70% of all handset devices will be smartphones, capable of running mobile apps [6]. As with any new domain, mobile application development has its own set of new challenges, which researchers have recently started discussing [7], [8]. However, most of these discussions are anecdotal in nature.
While there are substantial qualitative studies on different areas of software engineering, to the best of our knowledge, no study has been conducted to investigate the challenges that mobile app developers face in practice. Mobile apps fall broadly into three categories: native, web-based, and hybrid [9], [10]. Native applications run on a device’s operating system and must be adapted for different devices. Web-based apps require a web browser on a mobile device. Hybrid apps are ‘native-wrapped’ web apps. A recent survey [11] revealed that developers are mainly interested in building native apps, because they can utilize the device’s native features (e.g., camera, sensors, accelerometer, geolocation). Therefore, in this paper we mainly focus on native apps. Henceforth, we use the term ‘mobile app’ to denote ‘native mobile application’. The goal of our study is to gain an understanding of the current practices and challenges in native mobile app development. To this end, we conducted an exploratory study following a Grounded Theory approach, a research methodology stemming from the social sciences [12] that is gaining increasing popularity in software engineering research [13]. Thus, instead of starting with predetermined hypotheses, we set out to discover the process and challenges of mobile app development across multiple platforms. We started by conducting and analyzing interviews with 12 senior mobile app developers, from 9 different industrial companies, who are experts in platforms such as iOS, Android, Windows Mobile/Phone, and Blackberry. Based on the outcome of these interviews, we designed and distributed an online survey, which was completed by 188 mobile app developers worldwide. Our results reveal the challenges of dealing with multiple mobile platforms during mobile development.
While mobile devices and platforms are increasingly fragmented, the contemporary development process lacks mechanisms to leverage knowledge from one platform to another. Developers currently treat the mobile app for each platform separately and manually check that functionality is preserved across platforms. Furthermore, mobile developers need better analysis tools in order to track metrics for their apps during the development phase. Additionally, testing is a significant challenge: current testing frameworks do not provide the same level of support for different platforms, and current testing tools do not support important features for mobile testing such as mobility, location services, sensors, or different gestures and inputs.

II. STUDY DESIGN

The objective of our study is to gain an understanding of the challenges mobile app developers face in practice.

A. Methodology

Considering the nature of our research goal, we decided to conduct a qualitative study following a Grounded Theory approach [12], [14]. Grounded Theory is best suited when the intent is to learn how people manage problematic situations and how they understand and deal with what is happening to them [15]. It is also useful when the research area has not been covered in previous studies [16] and the emphasis is on new theory generation [17], i.e., understanding a phenomenon. Grounded Theory is gaining increasing popularity in software engineering research [13], [15], [16], [18]–[22].

B. Data Collection and Analysis

Our approach for conducting Grounded Theory research combines interviews and a semi-structured survey. The interviews targeted experts in mobile app development, and the survey was open to the general mobile development community. Our interviews were conducted in an iterative style, and they are at the core of the data collection and analysis process.
At the end of each interview, we asked the interviewees for feedback on our set of questions: what is missing and what is redundant. The analytical process involves collecting, coding, and analyzing data after each interview, while developing theory simultaneously. From the interview transcripts, we analyze the data line by line, break interviews down into distinct units of meaning (sentences or paragraphs), assign codes to the text, and label them to generate concepts for these units. Our codes, where appropriate, are taken from the text itself. Otherwise, they are created by the authors to capture the emerging concepts. These concepts are then clustered into descriptive categories, which are re-evaluated and subsumed into higher-order categories in order to generate an emerging theory. Theoretical sampling is an ever-evolving process, as codes are analyzed and categories and concepts continue to develop [19]. We perform constant comparison [12] between the analyzed data and the emergent theory until additional data collected from the interviews adds no new knowledge about the categories. Thus, once the interviewees’ answers begin to resemble the previous answers, a state of saturation [23] is reached, and that is when we stop the interviewing process. Based on the theory emerging from the interview phase, we designed a semi-structured survey, as another source of data, to challenge this theory. Before publishing the survey and making it publicly available, we asked four external people – one senior PhD student and three mobile app developers – to review the survey in order to make sure all the questions were appropriate and easily comprehensible. Most of our survey questions are closed-ended, but there are also a few optional open-ended questions for collecting participants’ insights and experiences. The responses to these open-ended questions are fed into our coding and analysis step to refine the results, where applicable.
This survey, as distributed to participants, is available online.1

C. Participant Demographics

Interviews. We interviewed 12 experts from 9 different companies. Each interview session took on average around 30 minutes. We recorded audio during the interview sessions and then transcribed it for later analysis. Table I presents each participant’s role in their company, the mobile platforms they have expertise in, the number of years of work experience they have in software development and in mobile app development, the size of the mobile development team, and finally all the mobile platforms that each company supports. Regarding the participants’ experience in developing mobile apps, five have around 6 years, four have 3-4 years, and three have 2-3 years of experience. Five participants are mainly iOS experts, five are Android experts, one is a Windows expert, and one is a Blackberry expert.

Survey. Our survey was fully completed by 188 respondents. We released the survey on Dec 13, 2012 to a wide variety of mobile development groups. We targeted the popular Mobile Development Meetup groups and LinkedIn groups related to native mobile development, and shared the survey through our Twitter accounts. We kept the survey live for two and a half months. In our attempt to distribute our online survey, it was interesting to see people’s reactions; they liked our post on LinkedIn groups and gave encouraging comments such as “I hope it will help to make mobile app developers’ lives easier”. The demographics of the survey participants are as follows: 92% male, 5% female. They come from the USA (48%), India (11%), Canada (10%), Israel (5%), The Netherlands (3%), UK (3%), New Zealand (2%), Mexico (2%), and 15 other countries. Regarding their work experience in software development, 52% have more than 10 years, 15% between 6-10 years, 20% between 2-5 years, and 13% less than 2 years.
Their experience in native mobile development ranges from more than 6 years (6%), through 4-6 years (19%) and 1-3 years (59%), to less than 1 year (16%). The platforms they have expertise in include iOS (72%), Android (65%), Windows (26%), Blackberry (13%), and others (6%; e.g., Symbian, J2ME).

---

1http://www.ece.ubc.ca/~merfani/survey.pdf

TABLE I: Interview participants and their companies.

<table>
<thead>
<tr>
<th>ID</th>
<th>Role</th>
<th>Platform Expertise</th>
<th>Years in Software Dev.</th>
<th>Years in Mobile Dev.</th>
<th>Company (Team Size)</th>
<th>Platforms Supported by Company</th>
</tr>
</thead>
<tbody>
<tr>
<td>P1</td> <td>iOS Lead</td> <td>iOS, Android</td> <td>6-10</td> <td>6</td> <td>A (20)</td> <td>iOS, Android, Windows, Blackberry</td>
</tr>
<tr>
<td>P2</td> <td>Android Lead</td> <td>Android, iOS</td> <td>6-10</td> <td>6</td> <td>A (20)</td> <td>iOS, Android, Windows, Blackberry</td>
</tr>
<tr>
<td>P3</td> <td>BlackBerry Lead</td> <td>BlackBerry, iOS, Android</td> <td>6-10</td> <td>6</td> <td>A (20)</td> <td>iOS, Android, Windows, Blackberry</td>
</tr>
<tr>
<td>P4</td> <td>iOS Lead</td> <td>iOS</td> <td>6-10</td> <td>3-4</td> <td>B (2-5)</td> <td>iOS, Android</td>
</tr>
<tr>
<td>P5</td> <td>Android Lead</td> <td>Android</td> <td>6-10</td> <td>3</td> <td>B (2-5)</td> <td>iOS, Android</td>
</tr>
<tr>
<td>P6</td> <td>iOS Dev</td> <td>iOS</td> <td>4-5</td> <td>3-4</td> <td>C (20+)</td> <td>iOS, Android</td>
</tr>
<tr>
<td>P7</td> <td>Windows Mobile Dev</td> <td>Windows, Android</td> <td>10+</td> <td>2</td> <td>D (1)</td> <td>Windows</td>
</tr>
<tr>
<td>P8</td> <td>Android Dev</td> <td>Android</td> <td>4-5</td> <td>2-3</td> <td>E (2-5)</td> <td>iOS, Android</td>
</tr>
<tr>
<td>P9</td> <td>Android Lead</td> <td>Android, iOS, Windows</td> <td>10+</td> <td>5-6</td> <td>F (6-10)</td> <td>iOS, Android, Windows</td>
</tr>
<tr>
<td>P10</td> <td>iOS Dev</td> <td>iOS, Android</td> <td>10+</td> <td>3</td> <td>G (1)</td> <td>iOS, Android</td>
</tr>
<tr>
<td>P11</td> <td>Android Lead</td> <td>Android, BlackBerry</td> <td>10+</td> <td>6+</td> <td>H (1)</td> <td>Android, BlackBerry</td>
</tr>
<tr>
<td>P12</td> <td>iOS Dev</td> <td>iOS, Windows</td> <td>10+</td> <td>2-3</td> <td>I (2-5)</td> <td>iOS, Windows</td>
</tr>
</tbody>
</table>

III. FINDINGS

The findings from our study consist of 4 main categories and 25 subordinate concepts. For each concept, appropriate codes and quotes are presented in this section. In addition to the general challenges faced by mobile developers (Section III-A), two major themes emerged from the study, namely (1) challenges of developing mobile apps across multiple platforms (Section III-B), and (2) current practices (Section III-C) and challenges (Section III-D) of mobile app analysis and testing.

A. General Challenges for Mobile Developers

In this subsection, we present the most prominent general challenges faced by mobile app developers, as emerging from our study results.

**Moving toward Fragmentation rather than Unification.** 76% of our survey participants see the existence of multiple mobile platforms as a challenge for developing mobile apps, while 23% believe it is an opportunity for technology advances that drive innovation. More than half of the participants mentioned that mobile platforms are moving toward fragmentation rather than unification:

- **Fragmentation across platforms**: Each mobile platform is different with regard to the user interface, user experience, Human-Computer Interaction (HCI) standards, user expectations, user interaction metaphors, programming languages, API/SDK, and supported tools.
- **Fragmentation within the same platform**: On the same platform, various devices exist with different properties such as memory, CPU speed, and graphical resolution. Fragmentation is also possible at the operating-system level. A famous example is the fragmentation of Android devices with different screen sizes and resolutions; almost every Android developer in both our interviews and survey mentioned this as a huge challenge they have to deal with on a regular basis.
Furthermore, device fragmentation is a challenge not only for development but also for testing. All of our participants believe that platform versioning and upgrading is a major concern; for example, a respondent said: "at the OS level, some methods are deprecated or even removed". Developers therefore need to test their apps against different OS versions and screen sizes to ensure that their apps work. Subject P5 said they mostly maintain "a candidate list of different devices and sizes". P11 explained, "because we monitor our application from the feedback of the users, we tend to focus on testing on the devices that are most popular." Thus, the current state of mobile platforms adds another dimension to the cost, with a wide variety of devices and OS versions to test against. P11 continued, "right now we support 5 or 6 different (app) versions only because there are different OS versions, and on each of those OS versions we also have 3-4 different screen sizes to make sure the application works across each of the Android versions." A respondent stated, "we did a code split around version 2.3 (Android). So we have two different versions of the applications: pre 2.3 version and post 2.3 version. And in terms of our policy, we made that decision since it is too difficult to port some features".

**Monitoring, Analysis, and Testing Support.** "Historically, there has almost been no one doing very much in mobile app testing", stated P10, explaining that until fairly recently there had been very little testing and very few dedicated testing teams. However, that is changing now, and teams have started to reach out for quality and testing. Automated testing support is currently very limited for native mobile apps; this is seen as one of the main challenges by many of the participants. Current tools and emulators do not support important features for mobile testing such as mobility, location services, sensors, or different gestures and inputs.
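The multiplicative testing cost quantified by participants like P11 (OS versions times screen sizes, per supported app version) can be sketched as a simple enumeration of the device matrix. The class name and the version and size lists below are illustrative assumptions, not data from the study.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch: enumerating the device/OS matrix that, per the
// participants, multiplies testing cost. Version and screen-size lists
// are hypothetical examples, not study data.
public class DeviceMatrix {

    // Cross product of OS versions and screen sizes: one entry per
    // configuration that must be tested.
    public static List<String> combinations(String[] osVersions, String[] screenSizes) {
        List<String> configs = new ArrayList<>();
        for (String os : osVersions) {
            for (String size : screenSizes) {
                configs.add(os + "/" + size);
            }
        }
        return configs;
    }

    public static void main(String[] args) {
        // e.g., P11's situation: 5 OS versions, each with up to 4 screen sizes
        String[] os = {"2.3", "4.0", "4.1", "4.2", "4.3"};
        String[] sizes = {"small", "normal", "large", "xlarge"};
        System.out.println(combinations(os, sizes).size() + " configurations to test"); // 20
    }
}
```

Supporting multiple app versions (e.g., the pre/post 2.3 code split quoted above) multiplies the matrix again, which is exactly the cost dimension the participants describe.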
Our results indicate a strong need among mobile app developers for better analysis and testing support. Many mentioned the need to monitor, measure, and visualize various metrics of their apps through better analysis tools.

**Open/Closed Development Platforms.** Android is open source, whereas iOS and Windows are closed source. Some participants argued that Apple and Microsoft need to open up their platforms. P5 explained: "We have real challenges with iOS, not with Android. Because you don't have API to control, so you have to jump into loops and find a back door because the front door is locked. Whatever Apple allows is not enough sometimes." An example of this lack of control: "to find out whether we are connected to the Bluetooth." On the other hand, P9 explained that because Android is open source and each manufacturer modifies the source code to their own desires and releases it, sometimes they do not stick to the standards. A simple example: "the standard Android uses commas to separate items in a list, but Samsung phones use a semicolon." A respondent stated, "Many Android devices have been badly customized by carriers and original equipment manufacturers."

**Data Intensive Apps.** Dealing with data is tricky for apps that are data intensive. As a respondent explained: "So much data cannot be stored on the device, and using a network connection to sync up with another data source in the backend is challenging." Regarding offline caching in hybrid solutions, P1 said: "Our apps have a lot of data and offline caching doesn't seem to really work well."

**Keeping Up with Frequent Changes.** One type of challenge mentioned by many developers is learning more languages and APIs for the various platforms and remaining up to date with the highly frequent changes within each software development kit (SDK). "Most mobile developers will need to support more than one platform at some point", a respondent stated.
“Each platform is totally different (marketplaces, languages, tools, design guidelines), so you need experts for every one of them. Basically, it is like trying to write simultaneously a book in Japanese and Russian; you need a native Japanese and a... native Russian, or quality will be ugly”, explained another respondent. As a result, learning another platform's language, tools, techniques, best practices, and HCI rules is challenging. Many developers complained about the lack of an integrated development environment that supports different mobile platforms. An exception was P1, who explained: "Right now we develop in two main platforms: iPhone and Android. That is not really that hard, the native SDKs are pretty mature and they are easy to learn."

B. Developing for Multiple Platforms

67% of our interview participants and 63% of our survey respondents have experienced developing the same app for more than one mobile platform.

**Native vs. Hybrid Mobile Apps.** Subjects P1 and P8 support developing hybrid apps. The remaining 10 interviewees are in favour of building pure native apps and believe that the current hybrid model tends to look and behave much more like webpages than mobile applications. P11 argued that "the native approach offers the greatest features" and P4 stated "user experience on native apps is far superior [compared] to a web app." In a number of cases, the participants had completely moved away from the hybrid to the native approach. A recurring example given is Facebook's recent switch from an HTML5-based mobile app to a native one. On the other hand, P1 argued that "it really depends on the complexity and type of the application"; for example, "information sharing apps can easily adopt the hybrid model to push news content and updates across multiple platforms." In the survey, 82% responded having native development experience, 11% have tried hybrid solutions, and 7% have developed mobile web apps.
Most respondents are in favour of the native approach: "Mobile web doesn't feel or look like any of the platforms." Others said: "HTML5 has much potential and will likely address many of the current problems in the future as it saves development time and cost"; or: "Since many big players are investing a lot on HTML5, it may take a big chunk of the front-end side when it becomes stable." Most of the participants argued that when development cost is not an issue, companies tend to develop native apps. Of course, it also depends on the application type: where better user experience or device-specific features are needed, native seems to be the clear choice. Lastly, when we asked our participants whether native app development will be replaced by hybrid solutions or mobile web development due to its challenges, all the interviewees and 70% of survey participants disagreed, and 10% indicated that there will always be a combination of native and hybrid approaches. Some also suggested that with better platform support (e.g., with mature web browsers in the platforms), there would be more interest from the community for hybrid development.

**Limiting Capabilities of a Platform's Devices.** Not all devices and operating systems of a platform have the same capabilities. For instance, Android has different versions and devices with differing capabilities.

Fig. 1: Have you developed the same native mobile app across different platforms?

**Reusing Code vs. Writing from Scratch.** 67% of our interview participants have tried both methods of writing a native mobile app from scratch for a different platform and reusing some portions of the same code across platforms.
The majority stated that it is impossible or challenging to port functionality across platforms, and that when code is reused on another platform, the quality of the results is not satisfactory. Figure 1 shows that of the 63% of survey respondents who have experienced developing mobile apps across different platforms, 34% have written the same app for each platform from scratch, and 20% have experienced porting some of the existing code. A respondent said, "every platform has different requirements for development and porting doesn't always produce quality"; or: "At this moment, I believe that it is best to create the apps from scratch targeting the individual OS." P11 argued that "we ported a very little amount of the code back and forth between Android and Blackberry, but we typically write the code from scratch. While they both use Java, they don't work the same way. Even when basic low levels of Java are the same, you have to rewrite the code." In addition to the differences at the programming-language level (e.g., Objective-C versus Java), P9 elaborated on why migrating code does not work: "A simple example is the way they [platforms] process push messages. In Android, a push message wakes up parts of the app and it requests for CPU time. In iOS the server would pass the data to Apple push server. The server then sends it to the device and no CPU time to process the data is required." These differences across platforms force developers to rewrite the same app for different platforms, with little or no code reuse. This is seen as one of the main disadvantages of native app development.

**Behavioural Consistency versus Specific HCI Guidelines.** Ideally, a given mobile app should provide the same functionality and behaviour regardless of the target platform it is running on.
However, due to the internal differences in various mobile devices and operating systems, "a generic design for all platforms does not exist"; for instance, P12 stated that "an Android design cannot work all the way for the iPhone." This is mainly due to the fact that HCI guidelines are quite different across platforms, since no standards exist for the mobile world, as they do for the Web for instance. Thus, developers are constantly faced with two competing requirements:

- **Familiarity for platform users**: Each platform follows a set of specific HCI guidelines to provide a consistent look-and-feel across applications on the same device. This makes it easier for end users to navigate and interact with various applications.
- **Behavioural consistency across platforms**: On the other hand, developers would like their application to behave similarly across platforms, e.g., user interaction with a certain feature on Blackberry should be the same as on iPhone and Android.

Thus, creating a reusable basic design that will translate easily to all platforms while preserving behavioural consistency is challenging. As P9 stated: "The app should be re-designed per platform/OS to make sure it flows well"; a respondent put it: "We do screen by screen design review for each new platform"; or: "Different platforms have different strengths and possibilities. It is foolish to try to make the apps exactly the same between platforms"; and: "It requires multi-platform considerations at the designing stage and clever decisions should be made where platform-specific design is necessary."

**Time, Effort, and Budget are Multiplied.** Due to the lack of support for automated migration across platforms, developers have to redesign and reimplement most of the application. Therefore, creating quality products across platforms is not only challenging but also time consuming and costly:
“developing mobile apps across platforms natively is like having a set of different developers per each platform”, stated P11. As a result, “re-coding against wildly different API sets” increases the cost and time-to-market within phases of design, development, testing, and maintenance, which is definitely a large issue for start-up and smaller companies. **C. Current Testing Practices** As outlined in Subsection III-A, many developers see analysis and testing of mobile apps as an important activity to provide dependable solutions for end-users. Our study results shed light on the current practices of mobile application analysis and testing. **Manual Testing is Prevalent.** As shown in Figure 2, 64% of our survey participants test their mobile apps manually, 31% apply a hybrid approach, i.e., a combination of manual and automated testing, and only 3% engage in fully automated testing. P3 explained: “Right now, manually is the best option. It’s kind of like testing a new game, testing on consoles and devices. It is that kind of testing I believe just maybe smaller, but you have to worry about more platforms and versions.” A respondent stated: “Organizations, large and small, believe only in manual testing on a small subset of devices”; and another one said: “It’s a mess. Even large organizations are hard to convince to do automated testing.” **Developers are Testers.** There are different combinations of testing processes and approaches currently taken by the industry. They can be categorized based on a company’s size, clients, development culture, testing policy, application type, and the mobile platforms supported. These testing approaches are performed by various people such as developers, testing teams, beta testers, clients, as well as third party testing services. As indicated in Table I, our interviewees’ companies vary from small size with 1–2 developers to larger mobile development companies or teams with over 20 developers. 
As expected, larger companies can afford dedicated testing teams, while in smaller companies testing is mainly done by developers or clients (end-users). Figure 3 depicts the results of our survey with regard to roles responsible for testing. 80% of the respondents indicated that the developers are the testers, 53% have dedicated testing teams or testers, and 28% rely on beta testers. The majority of the participants, with or without testing teams, stated that after developing a new feature, the developers do their own testing first and make sure it is functional and correct. This is mostly manual testing on simulators and if available on physical devices. **Test the App for Each Platform Separately.** Our interviews reveal that our participants treat each platform completely separately when it comes to testing. Currently, there is no coherent method for testing a given mobile app across different platforms; being able to handle the differences at the UI level is seen as a major challenge. Testers write “scripts that are specific for each platform”, and they “are familiar with the functionality of the app, but are testing each platform separately and individually”. We also notice that there are usually separate teams in the same company, each dedicated to a specific platform with their own set of tools and techniques; P6, an iOS developer, said: “I am not sure about Android, as the teams in our company are so separate and I don’t even know what is going on with the other side.” Responses provided by 63% of our survey participants, who develop the same native mobile app for more than one platform, confirmed the interview results, stating: “The test cases apply to each platform, but they must be implemented uniquely on each platform”, or: “Same as for one platform, but multiple times”, and: “I have to do it twice or more depending on how many platforms I have to build it on”, or: “Treat them as separate projects, as they essentially are, if native. 
Do testing independently.”

**Levels of Testing.** Figure 4 illustrates the different levels of testing applied to mobile apps. There is very little automation at any level of testing, e.g., around 3% for each of GUI, acceptance, and usability testing. P2 noted: "It is not really well structured or formal what we do. We do some pieces of all of them but the whole testing is a manual process."

**GUI Testing.** More than half of the participants admitted that GUI testing is challenging to automate. P2 said: "Automated UI testing is labor intensive, and can cause inertia when you want to modify the UI. We have a manual tester, core unit testing, then employ beta field testing with good monitoring." P7 stated: "Our company has Microsoft products. With Microsoft studio interface you can emulate a lot of sensors for testing GUI whereas in Eclipse for Android, you need to click a lot of buttons. You can emulate the position in your phone, but Android doesn't do this." P3 elaborated: "Blackberry is actually really hard to create test scripts for GUI testing. Because it is not like other platforms, which are touch-based and layout-based. With Blackberry, you have to know what field manager is and it is hard to actually get this information by clicking on buttons. You have to go through the whole array of elements." Some tools were highlighted, such as ROBOTIUM [24] and MONKEYRUNNER [25] for Android. A few iOS developers said they have tried MONKEYTALK (formerly called FONEMONKEY) [26] and KIF [27] for GUI testing; P1 stated: "I find KIF to be a lot more mature than the testing tools provided by Apple but it is still hard to be used for our custom and dynamic applications."

**Unit Testing.** Our study shows that the use of unit testing in the mobile development community is relatively low. Both interview and survey results (see Figure 4) reveal that unit testing for native mobile apps is not commonplace yet.
On the one hand, some respondents argued that "the relatively small size of mobile apps makes unit testing overkill"; or: "Deciding whether it's worth writing unit tests or save the time and test manually is always difficult"; and: "Complete unit testing to get full coverage is overkill. We only unit test critical code"; or: "Small projects with small budgets - the overhead of creating rigorous test plans and test cases would have a serious impact on the budget." On the other hand, others said that "the rapidly changing user expectations and technology means unit testing is crucial." Our interviewees believe that having a core script for generic features is the best approach in the long term. P12 said: "Unit tests are still the best. They are easy to run, and provide immediate feedback when you break something." Unit testing seems to be more popular among Android and Windows developers, using JUnit and NUnit, respectively. Two iOS participants have tried writing unit tests for iPhone using SENTESTINGKITFRAMEWORK [28], a built-in Xcode tool, as well as XCODE INSTRUMENTS [29]. P1 stated: "iOS apps are not really built to be unit tested", P12 argued: "iOS doesn't make it easy to have test automation", and a respondent said: "Apple's Developer testing tools don't play well."

**Beta Testers and Third Party Testing Services.** Beta testing, mostly with TESTFLIGHT [30], seems to be quite popular in mobile app development, although P5 emphasized that "the beta testers are in the order of dozens not thousands." TestFlight automates parts of the process, from deploying the app to collecting feedback. Further, there are many cases in which the clients are responsible for testing, i.e., recruiting beta testers or acceptance testing. P6 explained that they have internal and external client tracking systems: "Basically we have two bug tracking systems, internal and client tracking system (external).
The client create bugs in that tracking system and our testing team try to reproduce bugs to see if it is a valid and reproducible bug. If so they duplicate it in our internal tracking system. Then developers will look at it again." Additionally, some developers rely on third-party testing services such as PERFECTOMOBILE [31] and DEVICEANYWHERE [32]. However, "it is usually too volatile and the tools in many cases support very simple apps. Honestly not really worth the effort", said one of our interviewees. Other participants' attitudes toward testing services varied; P12 argued: "Services should be affordable, and not just report bugs but also provide some documents that indicate how people test the application, and give a high level overview of all the paths and possibilities that are tested." Another respondent said: "Most online testing services charge a very hefty premium even for apps that are distributed for free"; and: "It is nice to test an app by a third party, someone who is not the developer. At the same time, just random testing doesn't do the trick. You need to have a more methodical approach but the problem with methodical approaches is that they turn the price up." P11 said: "We don't want to lock in on one specific vendor and tend to use open-source tools, such as JUnit." Another problem mentioned is that "if we want to change something the way we want to, we don't have access to the source code. So we can't change the services of the framework."

D. Analysis and Testing Challenges

In this subsection, we present the challenges our interview participants and survey respondents experience in analyzing and testing native mobile apps.

**Limited Unit Testing Support for Mobile Specific Features.**
Although JUnit is used by more than half of the Android participants, many also point out that "JUnit is designed for stationary applications and it has no interface with mobile specifics such as sensors (GPS, accelerometer, gyroscope), rotation, navigation". As a result, "there is no simple way to inject GPS positions, to rotate the device and verify it that way". P11 explained: "we are creating a 'map application', which requires users typically being outdoors, moving around and navigating, which is not supported by current testing tools." Writing mobile-specific test scenarios requires a lot of code and is time consuming and challenging. A number of participants indicated that having "a JUnit type of framework with mobile specific APIs and assertions" would be very helpful.

**Monitoring and Analysis.** Both our interview and survey data indicate a strong need among mobile app developers for better analysis and monitoring support. Many mentioned the need to monitor, measure, and visualize various metrics of their apps – such as memory management (to spot memory leaks), battery usage (to optimize battery life), CPU usage, pulling/pushing data, and network performance (over various networks, e.g., 2G, 3G, 4G and wireless connections) – through better analysis tools. "A visualization tool such as those hospital monitoring devices with heart rate, blood pressure, etc., would help to gain a better understanding of an app's health and performance", explained P8.

**Handling Crashes.** One major problem mentioned in mobile app testing concerns crashes, which are often intermittent, non-deterministic, and irrecoverable. It is challenging for developers to capture enough information about these crashes to analyze and reproduce them [33], so that they can be fixed. Many developers in our study found it helpful to have a set of tools that would enable capturing state data as a crash occurs and creating a bug report automatically.
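As a rough illustration of the automatic crash capture participants wished for, the sketch below uses plain Java's `Thread.setDefaultUncaughtExceptionHandler` to record a stack trace the moment an uncaught exception occurs. On a real device the handler would also attach device state and forward the report; the class name and report format here are our own assumptions, not any participant's tooling.

```java
import java.io.PrintWriter;
import java.io.StringWriter;
import java.util.concurrent.atomic.AtomicReference;

// Sketch of automatic crash capture: a process-wide hook records the
// stack trace (and, in a real app, device state and recent logs) as
// soon as an uncaught exception kills a thread. Illustrative only.
public class CrashReporter {

    public static final AtomicReference<String> lastReport = new AtomicReference<>();

    public static void install() {
        Thread.setDefaultUncaughtExceptionHandler((thread, error) -> {
            StringWriter sw = new StringWriter();
            error.printStackTrace(new PrintWriter(sw));
            // In a real app: also capture OS version, device model, and
            // recent logs, then send the bundle to the developers.
            lastReport.set("thread=" + thread.getName() + "\n" + sw);
        });
    }

    public static void main(String[] args) throws InterruptedException {
        install();
        Thread worker = new Thread(
                () -> { throw new IllegalStateException("boom"); }, "worker");
        worker.start();
        worker.join(); // the handler has run once the thread has died
        System.out.println(lastReport.get());
    }
}
```

The same hook is what Android crash-reporting libraries build on; the hard parts the participants describe (capturing enough state, and getting the report off a device that may be offline) live in the body of the handler, which this sketch only stubs out.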
P5 stated: "Dealing with the crashes that are very hard to catch and harder to reproduce is an issue. It would be good that when the crashes happen, system logs and crash logs can be immediately captured and sent to developer over the phone."

**Emulators/Simulators.** Emulators mimic both the software and hardware environments found on actual devices, whereas simulators only mimic the software environment. Many mobile developers believe that better support is needed to mimic real environments (e.g., network latency, sensors) for testing. Another issue mentioned is that rooted simulators and emulators are needed in order to access features outside of the application – such as settings, the play store, Bluetooth, and GPS – which could be part of a test case. The performance of emulators is also a key factor mentioned by many of our participants: compared to the iOS Simulator, "Android emulator is very slow. I use my device for testing instead", said P8.

**Missing Platform-Supported Tools.** Almost all of the participants mentioned that current tools are weak and unreliable, with no or limited support for important mobile-testing features such as mobility, location services, sensors, and different inputs. They have experienced many automation failures, as well as many cases where testing tools actually slowed the development process down substantially. Some of our participants stated that platform-supported tools are needed, e.g., "unit testing should be built-in". A respondent said: "the platforms have to support it (testing). 3rd party solutions will never be good enough.", and another one said they need "strong integrated development environment support". Some noted that the process will be similar to that for web applications: "it took years to create powerful tools for analyzing and testing web apps, and we are still not there completely".

**Rapid Changes Over Time.** Our interviews reveal that requirements for mobile app projects change rapidly and very often over time.
This is why our participants have difficulty keeping their test code up to date. A respondent said: "changing requirements means changing UI/logic, so GUI and integration tests must be constantly rewritten." P1 stated: "there are some testing tools out there, but we don't use any of them because we can't keep the tests updated for our dynamic apps". P10 stated that due to rapid changes they have "time constraints for creating test scripts and performing proper testing."

**Many Possibilities to Check.** An issue mentioned by more than half of the participants is the fact that there are so many different possibilities to test, and places that could potentially go wrong in mobile apps. Thus, "it is difficult to identify all the usage scenarios and possible use cases while there is a lot of hidden states; for example, enabling/disabling for the location services, and weak and strong network for network connectivity". P12 finds: "The relation between apps should be well managed, you might be interrupting other apps, or they might be interrupting yours", and provides an example: "manage the states when an audio recording app goes into background." Furthermore, a participant argued that, given missing or misleading usage specifications, it is hard to avoid both under-coverage (missing test cases) and over-coverage (wasting time and human resources on testing situations that won't happen in the real world).

**App Stores and Usability Testing.** Developers have to follow mobile app stores' (e.g., AppStore, Google Play) requirements to distribute their apps to end users. These requirements change often, and developers need a way to test their apps' conformance. "I would like to have something more robust for me to mimic what the publisher (store) will be doing so that I can catch the error earlier in the development process," said a respondent. Additionally, "pushing the app to individual devices is more complex than necessary", for instance on the iPhone.
At the same time, end-users nowadays have the ability to collectively rank apps on the stores. If users like an app, they download and start using it. If not, they delete it and move on immediately. If they really like it, they rank it high; or if they really dislike it, they go on social media and complain. Thus, a low-quality release can have devastating consequences for mobile developers. As a result, there is a huge emphasis on usability testing. As P8 explained, "definitely one of our big challenges is usability testing, which is manual. I do heuristic evaluations personally and then evaluate with real users". P11 elaborated: "Our usability testing encompasses a large portion of not only features but also UI."

IV. THREATS TO VALIDITY

Similar to quantitative research, qualitative studies can suffer from threats to validity, which are challenging to assess, as outlined by Onwuegbuzie et al. [34]. For instance, in codification, researcher bias can be troublesome, skewing the results of the data analysis [21]. We tried to mitigate this threat through triangulation: the codification process was conducted by two researchers, one of whom had not participated in the interviews, to ensure minimal interference of personal opinions or individual preferences. Additionally, we conducted a survey to challenge the results emerging from the interviews. Both the interview and survey questionnaires were designed by a group of three researchers, with feedback from four external people – one senior PhD student and three industrial mobile app developers – in order to ensure that all the questions were appropriate and easily comprehensible. Another concern is the degree of generalizability. We tried to draw representative mobile developer samples from nine different companies. Thus, the distribution of participants includes different companies, development team sizes, platforms, application domains, and programming languages – representing a wide range of potential participants.
The survey participants likewise have a wide range of backgrounds and expertise. All this gives us some confidence that the results have a degree of generalizability. One risk within Grounded Theory is that the resulting findings might not fit the data or the participants [12]. To mitigate this risk, we challenged the findings from the interviews with an online survey, filled out by 188 practitioners worldwide. The results of the survey confirmed that the main concepts and codes, generated by the Grounded Theory approach, are in line with what the majority of the mobile development community believes. Lastly, in order to make sure that the right participants would take part in the survey, we shared the survey link with some of the popular Mobile Development Meetup and LinkedIn groups related to native mobile app development. Furthermore, we did not offer any financial incentives or special bonuses or prizes to increase the response rate. ### V. Discussion We discuss some of the challenges that are worth further investigation by the research and development community. **Same App across Multiple Platforms.** A prominent challenge emerging from our study is the fact that developers have to build the same native app for multiple mobile platforms. Although developing for multiple platforms is a recurring problem that is not unique to the mobile world, the lack of proper development and analysis support in the mobile environment exacerbates the challenges. Opting for standardized cross-platform solutions, such as HTML5, seems to be the way to move forward. However, HTML5 needs to be pushed towards maturation and adoption by major mobile manufacturers, which in turn can mitigate many of the cross-platform development problems. Another possible direction to pursue is exploring ways to declaratively construct [35] native mobile applications, by abstracting the implementation details into a model, which could be used to generate platform-specific instances.
**Checking Consistency across Platforms.** Another related challenge is checking the correctness and consistency of the app across different platforms. One way to tackle this problem is by constructing tools and techniques that can automatically infer interaction models from the app on different platforms. Our recent work reverse engineers a model of iOS applications [36]. Similarly, others [37], [38] are looking into Android apps. The models of the app, generated from different platforms, can be formally compared for equivalence on a pairwise basis [39] to expose any discrepancies. Such automated techniques would drastically reduce the difficulty and effort of consistency checking, since many mobile developers currently manually “do screen by screen design review for each new platform”. **Testing Apps for Multiple Platforms.** Regarding the testing challenges, follow-up studies could focus on generating test cases for mobile apps. A centralized automatic testing system that generates a (different) test case for each target platform could be a huge benefit. While platform-specific features can be customized, core features could share the same tests. Thus, further research should focus on streamlining application development and testing efforts regardless of the mobile platform. **Testing APIs from App Stores.** Mobile developers need better and easier ways of checking their apps’ conformance to app stores’ guidelines. Currently, after a submission, they sometimes have to wait a considerable amount of time to receive feedback from the stores. In order to catch inconsistencies between their code and a store’s guidelines and internal APIs earlier, it would be beneficial if the stores provided a set of testing APIs (e.g., as services), which developers could use to check their code against before submitting to the stores.
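To illustrate the kind of pairwise comparison such automated techniques could perform, here is a minimal sketch in Python. The screen names, actions, and the triple-based model representation are invented for illustration; they are not taken from the cited tools.

```python
# Hypothetical sketch: compare two inferred screen-transition models
# (one per platform) for behavioural equivalence.

def model_diff(model_a, model_b):
    """Return transitions present in one model but not the other.

    Each model is a set of (source_screen, action, target_screen) triples.
    """
    return {
        "missing_in_b": sorted(model_a - model_b),
        "missing_in_a": sorted(model_b - model_a),
    }

# Invented example models for an iOS and an Android build of the same app.
ios = {("Login", "tap_submit", "Home"), ("Home", "tap_settings", "Settings")}
android = {("Login", "tap_submit", "Home"), ("Home", "tap_about", "About")}

diff = model_diff(ios, android)
print(diff["missing_in_b"])  # transitions the Android model lacks
```

An empty diff in both directions would indicate that the two inferred models agree on every observed transition.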
**Testing Mobile Specific Features.** The existing testing frameworks have serious limitations for testing mobile-specific features and scenarios such as sensors (GPS, accelerometer, gyroscope), rotation, navigation, and mobility (changing network connectivity). As a consequence, developers either need to write a lot of test fixture code to assert mobile-specific scenarios or opt for manual testing. Thus, creating “a JUnit type of framework with mobile specific APIs and assertions” would be really beneficial. **Other Challenging Areas.** There are also serious needs for (1) rooted emulators that can mimic the hardware and software environments realistically; (2) better analysis tools, in order to measure and monitor different metrics of the app under development; and (3) techniques that would help debug apps by capturing better state data when unexpected crashes occur. ### VI. Related Work We categorize related work into two classes: mobile application development and grounded theory studies in software engineering. **Mobile application development.** There have been a number of studies [40]–[44] analyzing different web-based or hybrid mobile app development frameworks. For instance, Palmieri et al. [40] report a comparison between four different cross-platform tools (RHODES, PHONEGAP, DRAGONRAD and MOSYNC) to develop applications on different mobile OSs. Huy et al. [45] studied and analyzed four types of mobile applications, namely, native, mobile widgets, mobile web, and HTML5. Masi et al. [9] propose a framework to support developers in their technology selection process for the development of a mobile application that fits the given context and requirements. Researchers have recently started discussing [7], [8] some of the challenges involved in mobile app development. However, most of these discussions are anecdotal in nature. Our study, on the other hand, aims at understanding the challenges by interviewing and surveying mobile developers in the field.
**Grounded theory studies in software engineering.** Many researchers have used a grounded theory approach in qualitative software engineering studies [15]–[22], [46]–[48] in order to understand software development practices and challenges of industrial practitioners [13]. Adolph et al. [15] use grounded theory in a field study to understand how people manage the process of software development to “get the job done”. Greiler et al. [18] conduct a grounded theory study to understand the challenges involved in Eclipse plug-in testing. The outcome of their interviews with 25 senior practitioners and a structured survey of 150 professionals provides an overview of the current testing practices, a set of barriers to adopting test practices, and the compensation strategies adopted because of limited testing by the Eclipse community. Coleman et al. [17], [19] adopt the grounded theory methodology to report on the results of their study of how software processes are applied in the Irish software industry. The outcome is a theory that explains when and why software process improvement is undertaken by software developers. Through a grounded theory approach, Sulayman et al. [20] perform interviews with 21 participants representing 11 different companies, and analyze the data qualitatively. They propose an initial framework of key software process improvement success factors for small and medium Web companies. Wiklund et al. [49] report a case study on factors that contribute to inefficiencies in use, maintenance, and development of automated testing. Kasurinen et al. [48] discuss the limitations, difficulties, and improvement needs in software test automation for different types of organizations. They surveyed employees from 31 software development organizations and qualitatively analyzed 12 companies as individual cases. They found that 74% of surveyed organizations do not use test automation consistently. 
To the best of our knowledge, our work is the first to report a qualitative field study targeting mobile app development practices and challenges. ### VII. Conclusions Our study has given us a better, more objective understanding of the real challenges faced by mobile app developers today, beyond anecdotal stories. Our results reveal that having to deal with multiple mobile platforms is one of the most challenging aspects of mobile development. Since mobile platforms are moving toward fragmentation rather than unification, the development process cannot leverage information and knowledge from one platform to another. When the ‘same’ app is developed for multiple platforms, developers currently treat each platform separately and manually check that the functionality is preserved across platforms. Also, creating a reusable user-interface design for the app is a trade-off between consistency and adhering to each platform’s standards. Our study also shows that mobile developers need better analysis tools to measure and monitor their apps. Testing is also a huge challenge currently: most developers test their mobile apps manually. Unit testing is not common within the mobile community, and current testing frameworks do not provide the same level of support for different platforms. Additionally, most developers feel that current testing tools are weak and unreliable and do not support important features for mobile testing such as mobility (e.g., changing network connectivity), location services, sensors, or different gestures and inputs. Finally, emulators seem to lack several real features of mobile devices, which makes analysis and testing even more challenging. ### Acknowledgements We are grateful to all the participants of our study (interviews and the survey). This work was supported in part by the Institute for Computing, Information and Cognitive Systems (ICICS) at the University of British Columbia (UBC).
Extending Source Code Pre-Trained Language Models to Summarise Decompiled Binaries Ali Al-Kaswan, Toufique Ahmed, Maliheh Izadi, Anand Ashok Sawant, Premkumar Devanbu, Arie van Deursen Published in: Proceedings of the 30th IEEE International Conference on Software Analysis, Evolution and Reengineering (SANER), 2023. DOI: 10.1109/SANER56733.2023.00033 Abstract—Binary reverse engineering is used to understand and analyse programs for which the source code is unavailable. Decompilers can help, transforming opaque binaries into a more readable source-code-like representation. Still, reverse engineering is difficult and costly, involving considerable effort in labelling code with helpful summaries. While the automated summarisation of compiled code can help reverse engineers understand and analyse binaries, current work mainly focuses on summarising source code, and no suitable dataset exists for this task. In this work, we extend large pre-trained language models of source code to summarise decompiled binary functions. Furthermore, we investigate the impact of input and data properties on the performance of such models. Our approach consists of two main components: the data and the model. We first build CAPYBARA, a dataset of 214K decompiled function-documentation pairs across various compiler optimisations. We extend CAPYBARA further by removing identifiers and deduplicating the data.
Next, we fine-tune the CodeT5 base model with CAPYBARA to create BinT5. BinT5 achieves state-of-the-art BLEU-4 scores of 60.83, 58.82 and 44.21 for summarising source, compiled, and obfuscated compiled code, respectively. This indicates that these models can be successfully extended to decompiled binaries. Finally, we found that the performance of BinT5 is not heavily dependent on the dataset size and compiler optimisation level. We recommend that future research further investigate transferring knowledge when working with less expressive input formats such as stripped binaries. Index Terms—Decompilation, Binary, Reverse Engineering, Summarization, Deep Learning, Pre-trained Language Models, CodeT5, Transformers I. INTRODUCTION Reverse engineering binary programs has many applications, in particular software security [1]. Binary reverse engineering is a hard task, requiring highly skilled reverse engineers [1, 2]. Disassemblers and decompilers can help in this process. Disassemblers transform the binary into a low-level intermediate representation, and decompilers lift that representation to a high-level programming-language-like representation. But the output of decompilers is still difficult to read and understand [1, 3]. Much of the work that goes into reverse engineering a binary is spent labelling functions with semantic descriptions [1]. Current approaches [4–10] mainly focus on recovering aspects lost in the compilation and decompilation process, such as names and types. Existing works fail to address the inherent difficulties in binary code comprehensibility, namely the need for a high-level overview of the code. For source code, methods exist to automatically generate summaries from code [11, 12]. Source code summarisation is used to automatically generate short natural language descriptions of code, which support program comprehension and aid maintenance [12, 13].
While these methods have been successfully applied to programming languages such as Python, Java and PHP [14–16], using pre-trained language models [14–16], none of these methods has been applied to the relatively syntactically poor output of decompilers (see Figures 1a and 1b). Being able to quickly determine the context and application of a function can save valuable analysis time and greatly benefit reverse engineers. Function and variable names alone are inadequate representations of the source code [12], which is why having descriptive summaries of binaries is desirable. Following [17], source code can be described as having two information channels: the algorithmic channel and the natural language channel. The algorithmic channel specifies the execution of a program (semantics), while the natural language channel explains the purpose and context of the program to humans [17]. The natural channel includes function and variable names, code comments and the specific human-readable structure of programs. Processors only consider the algorithmic channel to execute a program, while humans use both the algorithmic channel and the natural channel to understand a piece of code [17]. Furthermore, code is very regular and predictable, even more so than natural languages [18]. The compilation process, which transforms readable code into executable binaries, removes much of the information contained in the natural channel. Stripped binaries — binaries from which the symbol table is removed — are especially challenging, since they have almost no identifiers at all, as can be observed in Figure 1c. The goal of this paper is to advance the field of binary reverse engineering by exploring the application of code summarisation to decompiled binaries, taking advantage of source code pre-trained language models. However, no dataset of aligned binaries and source code summaries exists, since this is a new and unexplored task.
As pointed out by LeClair and McMillan, the lack of standardised datasets is a major barrier to ongoing research, which we address for this task [19]. In this paper, we create a dataset containing pairs of decompiled and stripped-decompiled functions and summaries of these functions. During the creation of this dataset, we conform to the current best practices for dataset construction [19, 20]. We apply this dataset to an existing pre-trained language model using transfer learning, fine-tuning the pre-trained model on our dataset. For this task, we selected a pre-trained CodeT5 model, which was only trained on source code [14]. We perform experiments on this model to explore the impact of decompilation and the importance of identifiers. Furthermore, we explore the impact of compiler optimisation levels, the dataset size and the level of duplication. We find that the decompilation and alignment of stripped functions have a very high failure rate, and the resulting stripped model has low performance. However, the model shows state-of-the-art performance on both decompiled code and demi-stripped code, code from which the identifiers were removed after decompilation. Our experiments on data duplication and dataset size further show that these models can be trained with relatively little data, and that while duplicates have a high impact on performance, their presence is not paramount to model performance. Our key result: language models pre-trained on source code can be fine-tuned on binaries, opening up a range of new possibilities for the automated analysis of binaries. To summarise, the main contributions of this paper are: * CAPYBARA1, a dataset of Combined Aligned decompiled Binary code And Related Annotations.
A novel dataset of aligned C, decompiled, stripped-decompiled and demi-stripped summary pairs2 (Section III); * BinT53, a Binary summarisation CodeT5 model, a simple and straightforward adaptation of a source code trained code summarisation model to decompiled code using CAPYBARA (Section IV); * An empirical investigation of the impact of the properties of decompiled code and the properties of CAPYBARA (Sections V and VI). The materials, including the processed and raw data1, the trained model checkpoints and steps to replicate our experiments1, are openly available in our replication package4. II. BACKGROUND In this section, we introduce the background of compilers, binary reverse engineering, transfer learning and the code summarisation task. A. Compilers and Optimisation Levels Compilers are programs that convert source code from one programming language to another; generally, and in the context of this work, the term refers to programs that translate high-level code, like C, to a lower-level language such as machine code or bytecode. In this work, we focus on the GNU Compiler Collection (GCC)5 and Clang/LLVM (Clang)6. Compilers feature optimisation levels. Generally, the goal of optimisations is the improvement of runtime performance or program size at the expense of compilation time and the ability to debug [21]. By default, if GCC is invoked without any optimisation options, the program is compiled with -O0. -O1, -O2 and -O3 incrementally apply more optimisations to the binary at the expense of a higher compilation time [22]. Optimisations can restructure and transform the program in relation to the source code, by changing the control flow or the data of the program [23]. This obfuscation can complicate the reverse engineering process by reducing the accuracy of tools [23]. B. Ghidra Ghidra7 is a free and open-source reverse engineering toolkit developed by the US National Security Agency.
Ghidra contains many separate analysis modules that allow a reverse engineer to analyse binaries. Ghidra features a disassembler, which translates binaries back into an intermediate representation. In the case of x86-x64 binaries like the binaries this work focuses on, the intermediate representation will be the Assembly language. The decompiler, on the other hand, is a processor-language-agnostic transformation engine that takes the disassembled code and creates a source code representation, namely pseudo-C. Pseudo-C follows the general language conventions of C, but it cannot be compiled. Observe the relatively simple struct function from creytiv/re8 shown in Figure 1a. We compile the project using the -O3 compiler level as defined in the project. We decompile the binaries using Ghidra’s decompiler with the standard configuration; the resulting pseudo-code is shown in Figure 1b. We observe that aside from the function name, almost the entire natural channel has been destroyed by the compilation and decompilation process. The parameter and variable names are gone, any documentation is removed, and the relatively simple logic has been unrolled to a much more difficult-to-understand representation. Ghidra also incorrectly labelled many of the variable types and failed to identify the struct datatype. --- 1CAPYBARA: https://doi.org/10.5281/zenodo.7229809 2Decompiled code with strip-like obfuscation applied 3BinT5: https://doi.org/10.5281/zenodo.7229913 4Replication package: https://github.com/AISE-TUdelft/Capybara-BinT5 5GCC: https://gcc.gnu.org/ 6Clang: https://clang.llvm.org/ 7Ghidra: https://ghidra-sre.org/ 8re: https://github.com/creytiv/re ---

C. Stripping Unix and Unix-like operating systems include a strip utility. The strip utility removes any operands that are not necessary for the execution of the binary while ensuring that the execution of the binary remains unchanged. The exact implementation and what constitutes unnecessary operands are left to the implementor. The strip utility as implemented in GNU/Linux removes the symbol table from the binary. The symbol table contains each symbol’s location, type and name. Commercial off-the-shelf software is often stripped to reduce the memory and storage footprint of the binaries, and to resist analysis to protect the intellectual property of the creator. Many vulnerable and malicious binaries are, unfortunately, also stripped to resist security analysis and hide their faults [5]. Like higher optimisation levels, the use of stripping can greatly complicate the efforts to reverse engineer a binary, as well as reduce the accuracy and effectiveness of reverse engineering tools [24]. For example, we compile, strip and decompile the function in Figure 1a, and the resulting stripped decompiled function is shown in Figure 1c. In addition to the details lost by the decompilation process, the stripper removed all symbols, like the function names.

D. Code Summarisation Code summarisation (also referred to as source code summarisation) is the task of writing short descriptions of source code, usually a single-sentence summary. The main use is for software documentation, like the one-sentence JavaDoc description used in Java [19]. This documentation is important for program comprehension and maintenance, but the process of writing and maintaining these descriptions is a labour-intensive and time-consuming task, which is where the benefits of automating that process arise.
Automatic code summarisation is an active and popular research problem in the field of software engineering [19]. E. Transformer-based Models Transformers were originally proposed by Vaswani et al. as a sequence-to-sequence architecture [25]. Unlike Recurrent Neural Networks (RNNs) [26], the Long Short-Term Memory (LSTM) [27] variant of RNNs, and Convolutional Neural Networks (CNNs) [28], Transformers only use a mechanism called self-attention to capture dependencies between the input and output. The current state-of-the-art NLP models for programming languages, such as CodeT5 [14], CodeBERT [15] and PolyGlotCodeBERT [16], are all based on the Transformer architecture [25]. F. Transfer Learning Pre-trained Transformer-based language models, such as RoBERTa [29], CodeBERT [15] and CodeT5 [14], utilise a pre-train then fine-tune paradigm. This paradigm was initially introduced by Kenton and Toutanova. In this paradigm, the models are first trained in an unsupervised manner on a large unlabelled dataset. These pre-trained models can then be fine-tuned to perform a more specialised task, such as summarisation. Transfer learning uses the knowledge that is obtained in one task to solve a different task. It allows the creation of general models that are trained once on massive datasets. These general models, which contain general domain knowledge, can then be fine-tuned for a specific downstream task. This approach is quicker and requires less training data than training a model on the downstream task from scratch [30]. III. CAPYBARA DATASET We require a dataset of decompiled functions labelled with a descriptive summary to create and assess our solution. This dataset should be relatively large to suit the ‘data-hungry’ nature of deep-learning models. Furthermore, the dataset needs to feature a diverse set of data representative of our solution’s actual real-life use case. A.
Data Collection To create such a large and diverse dataset, we made use of BinSwarm [7], an existing dataset of aligned decompiled and stripped decompiled functions. BinSwarm collects C-based projects from Github. The projects are filtered to only include those that are actively being developed, use Travis CI, and are built for Ubuntu Linux. The projects are built using Docker. The resulting binaries are then copied and stripped, and both the stripped and unstripped binaries are decompiled using Ghidra. The functions are extracted from the stripped and unstripped decompiled code and aligned with the source code. The BinSwarm dataset only contains aligned tuples of source code and (stripped-) decompiled functions. We extract documentation from the original source code files to add descriptive comments to this dataset. To that end, we depend on the documentation included in the source code by the original authors in the form of single-line and multiline comments. We locate the functions in the unbuilt project files and align the decompiled functions with the comments in the source code using srcML11 to extract any documentation located directly before a function signature. A high-level overview of the entire process is shown in Figure 2. A function’s documentation often also contains other details besides the descriptive summary. We found that C projects do not follow a single documentation standard. For example, Javadoc for Java has a short one-line description or summary for each method at the beginning of the multiline comment. 10BinSwarm: https://hub.docker.com/r/binswarm/cbuilds 11srcML: https://www.srcml.org/ • **Abstract Syntax Tree:** The authors of the CodeSearchNet dataset [20] additionally remove any samples that do not parse into an AST. We choose to omit this step since all of our samples have been successfully compiled and have thus at one point been parsed into an AST by the compiler. **B. Dataset Preparation** a) **Synthesis of Demi-stripped Code:** From the dataset of decompiled functions, we also create another dataset. We emulate the process of stripping by removing all the identifiers from the decompiled code and replacing them with placeholders. For clarity, we call this demi-stripped data. Like the stripped dataset, the identifiers are all removed, but this is only done after the decompilation process. The decompiler still had access to the identifiers and could use the symbol table during decompilation. Most importantly, this demi-stripped dataset still has the same structure and control flow as the unstripped decompiled dataset and avoids any decompilation issues arising from stripping. b) **Data Split:** The dataset is split into a train, test and validation set. These sets constitute approximately 80%, 10% and 10% [19] of the complete dataset. As recommended by Shi et al. and by LeClair and McMillan, we prevent leakage of vocabulary and code patterns between the sets by sampling them in a cross-project manner [13, 19]. This means that an entire project gets assigned to one of the sets, and functions from the same project cannot be assigned to different sets. The projects in the test and validation sets are the same across all datasets. c) **Duplication:** Large corpora of code, like the corpus gathered by BinSwarm, tend to have a high degree of duplication [19]. As a result, snippets of code that are relatively unchanged appear in multiple parts of the corpus. This can be in the form of copied, generic or auto-generated functions. These functions will appear in multiple repositories and might be duplicated across the training and testing data. Besides exact duplicates, near-duplicates can also occur. Near-duplicates differ in a few minor aspects like additional code comments or different function names.
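As a rough illustration of the demi-stripping step described above, the following sketch replaces identifiers in decompiled pseudo-C with positional placeholders. The regex-based tokenisation and the keyword list are simplifying assumptions for illustration; the paper does not detail its exact implementation.

```python
import re

# Sketch of demi-stripping: identifiers in already-decompiled pseudo-C
# are replaced with positional placeholders, while C keywords are kept.
# The keyword list below is illustrative, not exhaustive.
KEYWORDS = {"int", "char", "void", "return", "if", "else", "while", "for"}

def demi_strip(code):
    mapping = {}

    def repl(match):
        name = match.group(0)
        if name in KEYWORDS:
            return name
        if name not in mapping:
            mapping[name] = f"VAR{len(mapping)}"
        return mapping[name]

    # Replace every identifier-like token, keeping keywords intact.
    return re.sub(r"[A-Za-z_]\w*", repl, code), mapping

stripped, names = demi_strip("int copy_len(char *src) { return length(src); }")
print(stripped)  # int VAR0(char *VAR1) { return VAR2(VAR1); }
```

Note that, as the text explains, this transformation happens after decompilation, so the code's structure and control flow are untouched; only the natural-channel names are lost.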
While removing exact duplicates is relatively fast and straightforward, removing near-duplicates is much more challenging and computationally intensive [33]. The issue with code duplication in classical code summarisation is that the models and tools are supposed to generate summaries for new and unseen code. The evaluation metrics should therefore measure the generalisation of the tool to new samples [33]. Duplicates and near-duplicates are not new samples; a user of such a tool could simply look them up. Furthermore, high-capacity models like CodeT5, with 220M parameters [14], or CodeBERT, with 128M parameters [15], can easily memorise duplicated code [33].

However, the use case outlined in this work is more akin to deobfuscation. As explained by Allamanis, deobfuscation could be a use case where duplicates are valid and part of the true distribution of the problem [33]. Compiled code contains a lot of duplicate code, and understanding this code is still difficult and essential for understanding the binary. While regular source code allows the reader to look up code snippets, decompiled binaries have an additional layer of obfuscation applied. We, therefore, focus on the model's performance on code with duplicates, as we believe duplicates to be part of the true distribution of the data, but we also report the deduplicated results.

**C. Dataset Properties**

Table I shows the size of the processed dataset. Of the 2.1M aligned decompiled functions, we can extract documentation for 215k; the majority of samples (1.5M) did not have any documentation at all. Furthermore, BinSwarm only provided us with 415k aligned stripped samples, and we can extract documentation for only 14k of these.
<table> <thead> <tr> <th>Dataset</th> <th>Including duplicates</th> <th>Deduplicated</th> </tr> </thead> <tbody> <tr> <td>C/Demi/Decom</td> <td>214,587</td> <td>79,673</td> </tr> <tr> <td>Stripped</td> <td>14,245</td> <td>7,826</td> </tr> </tbody> </table>

**TABLE I: Number of functions in dataset**

The vast majority of documentation is in the form of multi-line comments as opposed to single-line, double-slash comments. We found that the documentation and comments had a mean length of 42.60 and 8.14 tokens, respectively. Figure 4 shows the distribution of the number of tokens in source code and decompiled code. The source and decompiled code have a mean length of 399 and 779 tokens, respectively. Decompiled code also has close to double the LOC of source code, with means of 30.77 and 53.42 lines for source and decompiled code, respectively.

The majority of decompiled functions are compiled with optimisation level -O2, with a similar number of -O1 and -O3 samples and relatively few -O0 samples. Stripped data has a very even distribution of optimisation levels, with only -O0 having significantly fewer samples. Note that there are more optimisation levels than shown in Figure 5; for brevity, the different levels are grouped into their base optimisation level: -Oa is grouped with -O0, -Of and -Og are grouped with -O1, and -Os is grouped with -O2. We also observe some samples with an optimisation level higher than -O3 (-O8 and -O7); as specified by the GCC documentation, these levels are equivalent to -O3\textsuperscript{14}.

IV. BinT5

We select CodeT5 [14] as the base model for our experiments since it is the highest-scoring publicly available model on the CodeXGLUE [31] Code Summarisation benchmark\textsuperscript{15}. CodeT5 is a programming language model built on the T5 (Text-to-Text Transfer Transformer) architecture [34] and pre-trained on a mix of supervised and unsupervised tasks. CodeT5 employs an encoder-decoder architecture.
In contrast to other models, CodeT5 is trained using both unimodal (PL-only) and bimodal (NL-to-PL) tasks in eight programming languages. This bimodal training allows CodeT5 to perform well on cross-modal tasks such as code summarisation (PL-to-NL) and code generation (NL-to-PL). Many other models only use the data and languages included in the CodeXGLUE dataset [15, 16, 31], while CodeT5 also uses a mined dataset of C and C# code for its pre-training objectives [14]. The inclusion of C training data should help the model with the CAPYBARA dataset. There could be some overlap between CAPYBARA and the dataset used by Wang et al., which would cause leakage; we address these concerns in Section VII. CodeT5 also utilises the transfer-learning paradigm, which allows us to train the model with relatively little data. In this case, we make use of the CodeT5-base model, which was trained on mixed upstream tasks by the authors [14]. We fine-tune this model on the code summarisation task on CAPYBARA. An overview of how we applied the model to create BinT5 is provided in Figure 6.

V. Experimental Setup

To assess the effectiveness of our approach, we first evaluate the performance of the model, then identify the aspects of the data that make this task inherently difficult, and finally investigate properties of the datasets and their influence on the complexity of the task.

A. Research Questions

In the context of this study, we formulate the Research Questions (RQs) as follows.

RQ1: How effective are fine-tuned Transformer-based models at decompiled code summarisation? To investigate the application of existing models to binaries using CAPYBARA, we set a baseline by training a model on the code summarisation task on the source C-code dataset.

\textsuperscript{14}GCC optimisation levels: https://gcc.gnu.org/onlinedocs/gcc-4.4.2/gcc/Optimize-Options.html#Optimize-Options
\textsuperscript{15}CodeXGLUE benchmark: https://microsoft.github.io/CodeXGLUE/
We then train summarisation models on both the decompiled and the stripped dataset, and use the evaluation metrics to compare the performance of the different models.

RQ2: Which aspects of the input contribute most to model performance? We investigate which aspects of decompiled code increase the difficulty of the task. We first look at the impact of the symbol table on decompilation: we fine-tune a model on the demi-stripped dataset and compare it to the other models. We also investigate the importance of the function name by removing just the function name from the decompiled code. Furthermore, we investigate the impact of the optimisation level by exploring the performance per optimisation level.

RQ3: What is the impact of dataset properties on model performance? We finally investigate how the construction of CAPYBARA influences the models. To answer this final research question, we remove the duplicates from the datasets and retrain the models, after which we compare the performance to the baselines. Furthermore, we investigate the impact of dataset size by incrementally reducing the size of the training sets.

B. Baselines

To establish a performance baseline, we train a CodeT5-base model on the summarisation task on source C. Note that only samples which are aligned with decompiled code are included in the source C dataset. The baseline is used to compare the decompiled C, stripped decompiled C, and demi-stripped datasets to the source code.

C. Evaluation Metrics

We evaluate the performance between the reference summary from CAPYBARA and the candidate summary produced by BinT5 using the EM, BLEU-4 [35], ROUGE-L [36], and METEOR [37] metrics.

a) Exact Match (EM): The simplest metric is EM, which scores a prediction as one if it matches its reference exactly and zero otherwise.

b) BLEU-4: The most widely used metric in the code summarisation task is the Bilingual Evaluation Understudy score (BLEU) [13].
BLEU-4 produces a percentage between 0 and 100 which quantifies the similarity between a candidate and a set of reference sentences. BLEU-4 calculates the cumulative 4-gram precision score: the number of matching 4-grams divided by the total number of 4-grams in the candidate sentence [35]. The unigrams and bigrams account for the adequacy of the candidate, while the longer 3- and 4-grams account for fluency. To penalise overly short candidates, the result is also multiplied by a brevity penalty. A smoothing function is applied to prevent sequences with no matching 4-grams from scoring zero [38]. While Shi et al. recommend BLEU-4 with smoothing method 4 [13], we opted for the Moses [39] implementation of BLEU-4, which uses smoothing method 2, since this is also utilised by CodeSearchNet, CodeXGLUE, and CodeT5 [14, 20, 31].

c) ROUGE-L: ROUGE, or Recall-Oriented Understudy for Gisting Evaluation, is a package which includes several metrics; the most popular among them is ROUGE-L [36]. ROUGE-L is more recall-oriented than BLEU-4. ROUGE-L simply finds the longest common subsequence (LCS) between the reference and the candidate; note that the words do not need to be consecutive, but they have to be in order.

d) METEOR: METEOR, or Metric for Evaluation of Translation with Explicit ORdering [37], uses word lists and stemming to also take synonyms into account, and calculates the harmonic mean of the unigram precision and recall. Similar to ROUGE-L, METEOR is more recall-focused. METEOR has a higher correlation with human judgement than BLEU-4 at the sentence level [19].

D. Data Deduplication

To create a deduplicated version of the CAPYBARA dataset, we make use of a fork\textsuperscript{16} of the near-duplicate-code-detector [33]. We use this tool to compare all the datasets' functions and find clusters of near-duplicate functions. We randomly select one function per cluster and discard the rest from the dataset. We use the standard tool configuration as recommended by Allamanis.
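The cluster-and-keep-one deduplication step can be approximated with a token-level Jaccard similarity. This is a deliberately simplified sketch, not the actual tool: the near-duplicate-code-detector uses a more sophisticated similarity measure, and the 0.8 threshold and crude lexer here are hypothetical.

```python
import re

def tokens(code: str) -> set:
    """Crude lexer: identifiers/keywords plus single-character symbols
    (a simplification of the real tool's tokeniser)."""
    return set(re.findall(r"[A-Za-z_][A-Za-z0-9_]*|\S", code))

def jaccard(a: str, b: str) -> float:
    ta, tb = tokens(a), tokens(b)
    return len(ta & tb) / len(ta | tb)

def deduplicate(funcs, threshold=0.8):
    """Greedily keep one representative per near-duplicate cluster: a
    function is kept only if it is not too similar to any already-kept one
    (the 0.8 threshold is a hypothetical value, not the tool's)."""
    kept = []
    for f in funcs:
        if all(jaccard(f, k) < threshold for k in kept):
            kept.append(f)
    return kept

add = "int add(int a, int b) { return a + b; }"
sum_ = "int sum(int a, int b) { return a + b; }"  # near-duplicate: only the name differs
mul = "long mul(long x, long y) { long r = x * y; return r; }"
print(deduplicate([add, sum_, mul]))  # keeps 'add' and 'mul', drops 'sum'
```

The example mirrors the near-duplicate definition above: `sum_` differs from `add` only in its function name, so it lands in the same cluster and is discarded.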
Of the removed duplicates, we observe that a relatively large number originate from common libraries, such as SQLite\textsuperscript{17}, that are packaged with binary programs. Thus, a certain amount of duplication is also likely to occur "in the wild".

E. Configuration

We process and visualise the data with Pandas 1.4.3 and Ghidra 10.0.4\textsuperscript{18}. FastText 1.0.3 with the largest lid.176.bin model is used to detect languages. We train the model using Transformers version 4.16.2 running on Torch 1.9.0+cu111 in the nvidia/cuda:11.4.0-base docker container image. We share a Docker image with all the libraries required to run BinT5 pre-installed on DockerHub\textsuperscript{19}.

\begin{table}[h] \centering \begin{tabular}{|c|c|c|c|c|} \hline & BLEU-4 & EM & METEOR & ROUGE-L \\ \hline C & 60.83 & 52.19 & 65.33 & 66.51 \\ DecomC & 58.82 & 48.92 & 63.14 & 64.51 \\ Stripped & 11.26 & 1.85 & 14.50 & 17.25 \\ \hline \end{tabular} \caption{Result of fine-tuning CodeT5-base on mined datasets} \end{table}

A grid search of the optimal settings was infeasible from a time perspective, so we performed training mainly using the recommended settings of the CodeT5-base model [14]. We double the source length for the decompiled, stripped, and demi-stripped code to 512 tokens, instead of the standard 256 tokens used for the source code, to compensate for the fact that decompiled code is on average almost twice as long as the source code. We trained the model on a machine with an NVIDIA GeForce RTX3080 with 10GB of VRAM and an AMD Ryzen Threadripper 3990X 64-core processor with 192GB of RAM running Ubuntu 20.04.4 LTS. The GPU is running Nvidia driver version 510.60.02 with CUDA 11.6. The authors of CodeT5 used an NVIDIA A100 GPU with 40GB of VRAM for fine-tuning [14].
To compensate for the lack of memory, we reduced the batch size to 2, the maximum that could still fit in the VRAM, and increased ‘gradient_accumulation_steps’ to 24 to still achieve the standard effective batch size of 48.

VI. RESULTS

We present the results of our experiments to answer the research questions; the results are grouped per research question. The metrics are calculated for each sample from the test set, and the average scores are presented.

A. RQ1: Model Effectiveness

The performance of the CodeT5-base model on each of the datasets is presented in Table II. We found that the decompiled-code model generally produced good summaries, evidenced by a BLEU-4 score of 58.82, only slightly lower than the baseline set by the source code. The stripped model mainly produced unusable summaries, as evidenced by a BLEU-4 score of 11. The high EM score could be an indication of a high duplication factor.

Initial experiments with GraphCodeBERT [40] and PolyglotGraphCodeBERT [16] base models fine-tuned on CAPYBARA show performance around 5 and 3 BLEU-4 lower, respectively. This is a relatively small difference, especially considering the model size. This shows that the performance of BinT5 does not heavily depend on the additional pre-training on C and C# performed by Wang et al. Furthermore, this result shows that it is improbable that significant dataset leakage has taken place.

We found a relatively large difference between the number of recovered decompiled and stripped decompiled functions.

\textsuperscript{16}Near Duplicate Code Detector: https://github.com/SERG-Delft/near-duplicate-code-remover
\textsuperscript{17}SQLite: https://www.sqlite.org/index.html
\textsuperscript{18}It is not recommended to use Ghidra versions before 10.1, since these versions have not been patched against a Log4J RCE.
\textsuperscript{19}BinT5 Docker Image: https://hub.docker.com/r/aalkaswan/bint5/tags
This can likely be attributed to the fact that Ghidra struggles considerably more with recovering stripped functions. Recall that the symbol table commonly contains information regarding the location and name of functions. When this table is dropped, the start and end points of functions are hard to infer by automatic tools, especially since many functions get inlined and JUMP instructions replace CALL instructions. Aside from difficulties in demarcating functions, it is also difficult to align the associated source code function with the decompiled function. With unstripped code, the function name remains, meaning the functions can be aligned using the name.

We attempted to utilise an existing solution by Alves-Foss and Song called Jima [41] to find function boundaries. Jima is the current state-of-the-art tool for function boundary detection in stripped binaries. The tool is implemented as a plugin for Ghidra, but in our experiments, we found no statistical difference between the base performance of Ghidra and Jima on our dataset. The difficulties in extracting stripped functions make training and applying a model to stripped binaries challenging.

B. RQ2: Input Properties

As can be observed in Table III, the summaries produced by the demi-stripped model were substantially worse than those of the decompiled model, but most were still very usable, evidenced by a BLEU-4 score above 44. Just removing the function name gave quite similar results to demi-stripping. We find that the loss of identifiers significantly lowers the performance of the model, but stripped code also suffers from decompilation faults, which seem to have a much larger impact on model performance. Hence, the performance of BinT5 on demi-stripped code can be viewed as more representative of the actual model, unaffected by the faults introduced by Ghidra.

Table IV shows the average score per optimisation level. We can observe that -O0 and -O2 perform better than -O1 and -O3.
Recall that -O0 is completely unoptimised and that the vast majority of our decompiled dataset is compiled with -O2, which would explain why those optimisation levels perform better.

C. RQ3: Dataset Properties

The performance of the base model on each of the deduplicated datasets is presented in Table V:

<table> <thead> <tr> <th>Dataset</th> <th>BLEU-4</th> <th>EM</th> <th>METEOR</th> <th>ROUGE-L</th> <th>ΔBLEU-4</th> </tr> </thead> <tbody> <tr> <td>C</td> <td>45.86</td> <td>32.87</td> <td>46.06</td> <td>47.53</td> <td>14.97</td> </tr> <tr> <td>DecomC</td> <td>42.48</td> <td>28.08</td> <td>25.33</td> <td>27.66</td> <td>16.34</td> </tr> <tr> <td>Demi</td> <td>25.38</td> <td>14.51</td> <td>42.47</td> <td>44.47</td> <td>18.83</td> </tr> <tr> <td>Stripped</td> <td>7.19</td> <td>0.00</td> <td>4.75</td> <td>5.50</td> <td>4.07</td> </tr> </tbody> </table>

We find that the influence of deduplication on our model's performance is relatively small on source code, at only 24%. Duplicates have a relatively large impact on the decompiled (28%) and demi-stripped (43%) code. Deduplication also greatly decreases the EM rate across the board. Duplicates have a relatively large impact on performance, but even with the duplicates removed, the model still produces many high-quality summaries. The experiments on deduplication show that the model seems to have a deeper understanding of the data and is not simply reproducing previously seen samples.

As can be seen in Figure 7, the dataset size does not have much of an impact: the model can be trained with half or a quarter of the training samples without suffering a considerable hit to performance. This could be attributed to the high duplication factor of our dataset. It could also be because the model was already pre-trained well by Wang et al. and requires very little data for fine-tuning. This is a testament to the relative ease with which these models could be extended to decompiled code.
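The relative impacts quoted above follow directly from the BLEU-4 scores with and without duplicates (Tables II and V). A quick check, where the full demi-stripped score is reconstructed as the deduplicated score plus its ΔBLEU-4:

```python
# BLEU-4 with duplicates vs. deduplicated (Tables II and V); the full
# demi-stripped score is reconstructed as deduplicated score + ΔBLEU-4.
scores = {
    "C":      (60.83, 45.86),
    "DecomC": (58.82, 42.48),
    "Demi":   (25.38 + 18.83, 25.38),
}
drops = {name: (full - dedup) / full for name, (full, dedup) in scores.items()}
for name, drop in drops.items():
    print(f"{name}: {100 * drop:.1f}% relative drop")
# C ≈ 24.6%, DecomC ≈ 27.8%, Demi ≈ 42.6%
```

These relative drops match the 24%, 28%, and 43% figures in the text to within rounding.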
We also performed experiments where we did not apply the filtering rules provided by CodeXGLUE and where we always mined the first sentence of any type of documentation. While we were able to collect around 480k decompiled samples this way, the model performed substantially worse, only scoring 36.97 and 33.26 BLEU-4 on C and decompiled code, respectively. These results show that dataset quality also heavily impacts model performance.

VII. DISCUSSION

In the previous section, we found that BinT5 shows considerable performance on decompiled and demi-stripped code, on both regular and deduplicated data. While this is a promising result, we conduct a small investigation of the decompiled samples. We then put our observations on identifiers into the context of the extreme summarisation task and, based on this, discuss the implications of our work. Finally, we close this section by discussing the threats to validity.

A. Exploration of Results

To explore the results of BinT5, we pick 25 high- and 25 low-scoring samples from the test set of the deduplicated decompiled dataset. High-scoring samples have a BLEU-4 score higher than 75, while low-scoring samples have a score lower than 25.

a) High Samples: For the high-scoring samples, BinT5 tends to produce summaries which are very close to the references. For instance, BinT5 produced Print description of a datatype in XML against the reference Dump description of a datatype in XML. All 25 high-scoring samples have counterparts with a similar function summary in the training set. These functions also tend to have similar names, but their decompiled function bodies were significantly different, which is likely why deduplication did not remove them.

b) Low Samples: Among the low-scoring samples, we observe that many summaries produced by BinT5 are semantically very similar to the reference.
For instance, the function \texttt{vl\_set\_simd\_enabled}\footnote{Colmap/Colmap:vl\_set\_simd\_enabled: https://github.com/colmap/colmap/blob/87b3aa325f865fb91378be29e9ac1e085e28b67/lib/VLFeat/generic.c#L1070} has the reference Toggle usage of SIMD instructions, while BinT5 produced Enable or Disable the Simd Channel. This sample receives a BLEU-4 score of 0.0 because of the limitations of the BLEU-4 metric, while for a human evaluator the output is still very usable. Similarly, for some samples, BinT5 produces shorter summaries containing shorthands: the reference Check if the given nickname is blocked for "normal client" use against the candidate Check whether nick is blocked also scores poorly. Of the 25 low-scoring samples, we observe that around 11 are semantically similar to the reference and likely very useful for understanding the function.

B. Identifiers and Extreme Summarisation

We find a relatively small difference in performance between source code and decompiled code. This indicates that in-function comments and variable names are relatively unimportant for model performance. Although Ahmed and Devanbu observed that identifiers might be more important than syntax in the code summarisation task [16], we can further conclude that the function name specifically is essential for model performance. Removing just the function name from the decompiled samples, as opposed to removing all identifiers as in demi-stripping, results in only slightly higher performance than demi-stripped code. This indicates a very high dependence on the name of the function, a logical finding in the context of the extreme code summarisation task. The extreme code summarisation task, as proposed by Allamanis et al., aims to reproduce the function name given a function body [16, 42]. It is framed as a summarisation problem where the output is around 3 tokens in length, instead of the 10+ tokens that regular code summarisation targets.
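The \texttt{vl\_set\_simd\_enabled} example above shows the metric's blind spot concretely: the reference and the candidate share no 4-grams at all, so unsmoothed 4-gram precision, and hence BLEU-4, collapses to zero despite the two summaries being semantically equivalent. A minimal check (tokens lowercased here; real BLEU implementations differ in tokenisation and smoothing):

```python
def ngrams(tokens, n):
    """All contiguous n-grams of a token sequence, as a set of tuples."""
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

# lowercased token sequences of the reference and the BinT5 candidate
ref = "toggle usage of simd instructions".split()
cand = "enable or disable the simd channel".split()

shared = ngrams(ref, 4) & ngrams(cand, 4)
print(shared)  # set(): no 4-gram overlap, so unsmoothed 4-gram precision is 0
print(ngrams(ref, 1) & ngrams(cand, 1))  # only ('simd',) is shared
```

Even at the unigram level, the only overlap is "simd", which is why smoothing alone cannot rescue the score for this pair.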
We found similar results when performing this extreme summarisation task with our dataset, namely high performance on regular decompiled code (with function names removed) and low performance on stripped code. A manual assessment of the stripped data shows that many of the aligned functions were not decompiled properly. We find that many functions are cut off after a few instructions because the decompiler did not recover the full control flow. Other functions are missing side effects, like changes to global variables.

C. Implications

We propose a novel solution to aid reverse engineers in their work. If the application of NLP to binaries gets significantly better, and the limitations around stripping and other obfuscation techniques are resolved, it would have far-reaching implications for the cybersecurity domain. On the one hand, it could help malware analysts quickly understand novel malware and its weaknesses. Software can be analysed to find possible vulnerabilities and malicious payloads. Source code can be reconstructed for old binaries for which the source code is lost. On the other hand, attackers can leverage these same methods to find and exploit vulnerabilities and lift intellectual property from binaries.

CAPYBARA itself could be used to create and assess neural decompilation, to perform a deeper investigation into the extreme summarisation task, or simply to train a code summarisation model on C code. CAPYBARA consists of a large corpus of C and decompiled C code, which could be used to pre-train language models such that these models support decompiled code out of the box. While our work focused on decompiled code, our observations show some limits of Transformer-based models and their applicability to different data. Our dataset can help and inspire other researchers to improve upon our work, and we hope other researchers use this dataset to train and evaluate their own models.
Furthermore, the process outlined in Section III could help others construct standardised datasets for other tasks and languages.

D. Threats to Validity

**Internal Validity** questions whether other factors could have affected the outcome. The training and evaluation data contain a significant amount of noise, either in the form of badly decompiled functions or incorrect documentation. We carefully collect and process the data, but we are unable to know to what extent the documentation matches the original code. While machine learning models (and specifically NLP models) should be able to handle noisy data, this might introduce some bias into the models. CodeT5 was also pre-trained on a C and C# dataset; this dataset is unpublished, and we were unable to reach the authors. Some data leakage might have taken place, but as explained in Section VI, it is unlikely that it had much of an impact. To prevent this threat from arising in any future studies, we make CAPYBARA publicly available.

**External Validity** refers to the generalisability of our results. This work only focuses on stripping and compiler optimisations as means of resisting binary analysis; other techniques, like control-flow obfuscation and packing, are also used to prevent reverse engineering. Other works focus on unpacking and deobfuscation, so we consider our work orthogonal to theirs. The data gathered for CAPYBARA come exclusively from open-source projects. Decompiling closed-source projects is explicitly forbidden by some EULAs, and the lack of source code documentation makes it difficult to evaluate using reference summaries. However, reverse engineering open-source software is not very useful in practice, since the source code is readily available. Closed-source software might have a different data distribution and will present other challenges, like obfuscation. Finally, only functions that decompile (i.e., for which Ghidra produces any output) and that are documented are represented in CAPYBARA.
This is most apparent in the stripped dataset, where we can only recover a small fraction of the total number of functions. A deeper investigation into new decompilation techniques for stripped code, specifically into the aspect of function boundary detection, is left as future work.

**Construct Validity** relates to the adequacy of the theoretical constructs and the use of appropriate evaluation metrics. The leading metric in our evaluations does not capture semantic meaning. While BLEU-4 is the most popular metric for this task, its reliability has been called into question [43, 44]. We, therefore, included other metrics, which do take semantics into account, in our evaluation. Finally, our entire approach hinges on the assumption that function summaries, as they are used for source code, are useful for binary analysis. Whether this is actually the case should be further investigated with a qualitative user study, which is left as future work.

VIII. RELATED WORK

Binary reverse engineering and the use of NLP for software engineering are vast and active fields, so we select and discuss the closest state-of-the-art works in the field. We categorise the studies into identifier recovery and binary translation, and finally discuss the open challenges and the relation of our own work to these challenges.

a) Recovering Identifiers from Stripped Binaries: Debin [5] aims to recover debug information from stripped binaries. The authors use a tree-based classification and a probabilistic graph-based model. All the variable names and types are jointly recovered using maximum a posteriori probability inference. VarBERT [45] uses a Transformer-based NLP model for the task of variable name recovery. The authors pre-train a BERT model which is then fine-tuned to predict the names and types from unstripped binaries. FUNCRE [7] uses a pre-trained and fine-tuned RoBERTa [29] model to predict usages of inlined library functions.
Recall that compilers with optimisations enabled can inline functions in the binary (Section II). The authors use indelible markers, which do not get destroyed by the compiler, to mark usages of library functions, to construct a dataset, and to train a model.

b) Binary Translation: Neutron [10] frames decompilation as a neural machine translation problem and utilises an Attention-LSTM-based neural translation network to translate disassembled binaries back to C source code. The binaries are not stripped and do not have any optimisations enabled. The translations created by Neutron can contain syntax errors, so the authors apply regular expressions to create a tailor-made syntax checker. Neutron achieves high accuracy on the translation task, but only on unstripped and non-optimised code.

c) Our Novelty: Several aspects have not been properly addressed and investigated before. The application of code summarisation methods to decompiled code has not been addressed by any prior work. Furthermore, some works on binary code fail to take compiler optimisations into account [10]. We, therefore, investigate the application of code summarisation methods to decompiled code with compiler optimisations enabled.

IX. CONCLUSION

In this paper, we proposed a new automatic binary code summarisation task. With this new task, we also introduced CAPYBARA, a novel dataset to train and evaluate models on this task, with both mined and synthetic data. Paired with this dataset, we trained BinT5, a Transformer-based code summarisation model, to show the effectiveness of CAPYBARA. We used BinT5 to further explore the datasets, outlining the inherent difficulties in the data. We found that while BinT5 shows considerable performance on regular decompiled code, its performance on stripped code is hampered by the decompiler, as evidenced by BinT5's strong performance on demi-stripped code.
Furthermore, we found that while duplicates have a large impact on the model, their presence is not essential to the model's performance. Finally, we observed that BinT5 can be trained with just a fraction of the samples in CAPYBARA. Our work has shown that a well-known and well-studied task from the source code domain [13], namely source code summarisation, can be applied to binary code. This is only one of the many possible applications of NLP for code, and our paper constitutes a first step in the application of source code NLP methods to such tasks on binary code.

REFERENCES

[25] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit,

Authorized licensed use limited to: TU Delft Library. Downloaded on November 14, 2023 at 09:27:47 UTC from IEEE Xplore. Restrictions apply.
Sketched Answer Set Programming

Sergey Paramonov (KU Leuven, Leuven, Belgium, sergey.paramonov@kuleuven.be), Christian Bessiere (LIRMM, CNRS, Montpellier, France, bessiere@lirmm.fr), Anton Dries (KU Leuven, Leuven, Belgium, anton.dries@kuleuven.be), Luc De Raedt (KU Leuven, Leuven, Belgium, luc.deraedt@kuleuven.be)

HAL Id: lirmm-02310677, https://hal-lirmm.ccsd.cnrs.fr/lirmm-02310677, submitted on 10 Oct 2019.

Abstract—Answer Set Programming (ASP) is a powerful modeling formalism for combinatorial problems. However, writing ASP models can be hard. We propose a novel method, called Sketched Answer Set Programming (SkASP), aimed at facilitating this. In SkASP, the user writes partial ASP programs, in which uncertain parts are left open and marked with question marks. In addition, the user provides a number of positive and negative examples of the desired program behaviour. SkASP then synthesises a complete ASP program. This is realized by rewriting the SkASP program into another ASP program, which can then be solved by traditional ASP solvers. We evaluate our approach on 21 well-known puzzles and combinatorial problems inspired by Karp's 21 NP-complete problems and on publicly available ASP encodings.

Index Terms—inductive logic programming, constraint learning, answer set programming, sketching, constraint programming, relational learning

I.
INTRODUCTION

Many AI problems can be formulated as constraint satisfaction problems that can be solved by state-of-the-art constraint programming (CP) [34] or answer set programming (ASP) techniques [27]. Although these frameworks provide declarative representations that are in principle easy to understand, writing models in such languages is not always easy. On the other hand, for traditional programming languages, there has been significant attention to techniques that are able to complete [25] or learn a program from examples [17]. The idea of program sketching is to start from a sketched program and some examples and to complete the program. A sketched program is essentially a program where some of the tests and constructs are left open because the programmer might not know which exact instruction to use. For instance, when comparing two variables \( X \) and \( Y \), the programmer might not know whether to use \( \leq \), \( > \) or \( \neq \), and can write the sketched operator \( ?{=} \) instead (while also specifying the domain of \( ?{=} \), that is, which concrete operators are allowed). By providing a few examples of desired program behaviour and a sketch, the target program can then be automatically found. Sketching is thus a form of "lazy" programming, as one does not have to fill out all details in the programs; it can also be considered as program synthesis, although there are strong syntactic restrictions on the programs that can be derived; and it can be useful for repairing programs once a bug has been detected. Sketching has been used successfully in a number of applications [24, 35, 19] to synthesise imperative programs. It is these capabilities that this paper brings to the field of ASP. As a motivating example, assume one needs to solve the Peacefully Coexisting Armies of Queens, a version of the \( n \)-queens problem with black and white queens, where queens of different colors do not attack each other.
One might come up with the following sketched program (where \( R_w \) (\( C_b \)) stands for the variable representing the row (column) of a white (black) queen):

\[ \text{Sketch 1: Peacefully Coexisting Armies of Queens} \]
\[
\begin{align*}
1 & : - \quad \text{queen}(w, R_w, C_w), \quad \text{queen}(b, R_b, C_b), \quad R_w \mathrel{?{=}} R_b. \\
2 & : - \quad \text{queen}(w, R_w, C_w), \quad \text{queen}(b, R_b, C_b), \quad C_w \mathrel{?{=}} C_b. \\
3 & : - \quad \text{queen}(w, R_w, C_w), \quad \text{queen}(b, R_b, C_b), \quad R_w \mathbin{?{+}} C_w \mathrel{?{=}} R_b \mathbin{?{+}} C_b.
\end{align*}
\]

This program might have been inspired by a solution written in the constraint programming language Essence available from the CSP library [22]. Intuitively, the sketched ASP specifies constraints on the relationship between two queens on the rows (first rule), columns (second rule) and diagonals (third rule), but it also expresses uncertainty about the particular operators that should be used between the variables, through the built-in alternatives for \( ?{=} \) (which can be instantiated to one of \( =, \neq, <, >, \leq, \geq \)) and for \( ?{+} \) (for arithmetic operations). When provided with an adequate set of examples, the SkASP solver will then produce the correct program. The key contributions of this paper are the following: 1) we adapt the notion of sketching for use with Answer Set Programming; 2) we develop an approach (using ASP itself) for computing solutions to a sketched Answer Set Program; 3) we contribute some simple complexity results on sketched ASP; and 4) we investigate the effectiveness and limitations of sketched ASP on a dataset of 21 typical ASP programs.

II. ASP AND SKETCHING

Answer Set Programming (ASP) is a form of declarative programming based on the stable model semantics [15] of logic programming [22]. We follow the standard syntax and semantics of ASP as described in the Potassco project [15].
A program is a set of rules of the form \( a \leftarrow a_1, \ldots, a_k, \text{not } a_{k+1}, \ldots, \text{not } a_n \). Here \( a \) is a positive propositional literal, called the head; for \( i \) between 1 and \( k \), \( a_i \) is a positive propositional atom; and for \( i \) between \( k+1 \) and \( n \), \( \text{not } a_i \) is a negative propositional literal. A positive or negative atom is called a literal, and the body is the conjunction of the literals. A rule of the form \( a \leftarrow \) is called a fact and abbreviated as \( a \), and a rule without a head specified is called an integrity constraint (\( a \) is \( \bot \) in this case). Conditional literals, written as \( a : l_1, \ldots, l_n \), and cardinality constraints, written as \( min \ \{ l_1, \ldots, l_n \} \ max \), are also used (here \( l_1, \ldots, l_n \) are literals, and \( min, max \) are non-negative integers). A conditional atom holds if its condition is satisfied, and a cardinality constraint is satisfied if between \( min \) and \( max \) of its literals hold. Furthermore, as ASP is based on logic programming and also allows for variables, denoted in upper-case, the semantics of a rule or expression with a variable is the same as that of its set of ground instances. We restrict the ASP language to the NP-complete subset specified here. For more details on ASP, see [13], [10]. (This work has been partially funded by the ERC AdG SYNTH (Synthesising inductive data models).)

We extend the syntax of ASP with sketched language constructions. Instead of allowing only atoms of the form \( p(t_1, \ldots, t_n) \), where \( p/n \) is a predicate and the \( t_i \) are terms (variables or constants), we now also allow sketched atoms of the form \( ?q(t_1, \ldots, t_n) \), where \( ?q \) is a sketched predicate variable with an associated domain \( d_q \) containing actual predicates of arity \( n \). The meaning of the sketched atom \( ?q(t_1, \ldots, t_n) \) is that it can be replaced by any real atom \( p(t_1, \ldots, t_n) \) provided that \( p/n \in d_q \).
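As background for the rule semantics just described, recall that the stable model of a ground, negation-free (definite) program is simply its least model. A naive forward-chaining sketch in Python (the tuple-based rule representation is our own, purely for illustration):

```python
def least_model(rules, facts):
    """Naive forward chaining for ground, negation-free rules, where a
    rule is (head, frozenset_of_body_atoms). The stable model of such a
    definite program is exactly its least model, computed by saturation."""
    model = set(facts)
    changed = True
    while changed:
        changed = False
        for head, body in rules:
            # fire the rule when its whole body is already derived
            if body <= model and head not in model:
                model.add(head)
                changed = True
    return model
```

For example, with rules \( b \leftarrow a \) and \( c \leftarrow a, b \) and the fact \( a \), saturation derives \( \{a, b, c\} \).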
It reflects the fact that the programmer does not know which \( p/n \) from \( d_q \) should be used. Sketched atoms can be used in the same places as any other atom. We also provide some syntactic sugar for some special cases and variants; in particular, we use a sketched inequality \( X \mathrel{?{=}} Y \), a sketched arithmetic operator \( X \mathbin{?{+}} Y \) (strictly speaking, this is not a sketched predicate but an operator, but we only make this distinction where needed), and sketched negation \( ?not\ p(X) \) (which is, in fact, a sketched operator of the form \( ?not\ \text{atom} \)): it always has as input a positive atom and its domain is \( \{\text{atom}, \overline{\text{atom}}\} \), where \( \overline{\text{atom}} \) is a syntactically new atom which represents the negation of the original atom. The domain of \( X \mathrel{?{=}} Y \) is the set \( \{=, \neq, <, >, \leq, \geq, \top\} \), where \( \top \) is the atom that is always satisfied by its arguments, and the domain of \( X \mathbin{?{+}} Y \) is the set \( \{+, -, |{-}|\} \), where \( |{-}| \) is defined as \( |a-b| \). An example of a sketched inequality can be seen in Line 2 of Figure 1(a), examples of sketched predicates and negation in Figure 1(b), and sketched arithmetic in Line 5 of Sketch 1. A sketched variable is a sketched predicate, a sketched negation, a sketched inequality or a sketched arithmetic operator. The set of all sketched variables is referred to as \( S \). Predicate \( p \) directly positively (negatively) depends on \( q \) iff \( q \) occurs positively (negatively) in the body of a rule with \( p \) in the head, or \( p \) is a sketched predicate and \( q \) is in its domain; \( p \) depends (negatively) on \( q \) iff \( (p,q) \) is in the transitive closure of the direct dependency relation. A sketch is stratified iff there is no negative cyclic dependency. We restrict programs to the stratified case. An example is a set of ground atoms. A preference is a function \( f \) from assignments of sketched variables to domain values (i.e., pairs \( s \rightarrow d \)) to \( \mathbb{Z} \).
A substitution \( \theta \) is preferred over \( \theta' \) given preferences \( f \) iff for all \( s_i \rightarrow d_i \in \theta \) and \( s_i \rightarrow d_i' \in \theta' \) it holds that \( f(s_i \rightarrow d_i) \geq f(s_i \rightarrow d_i') \), and at least one inequality is strict. If \( f \) is constant, all substitutions are equally preferred and there are effectively no preferences. Because specifying preferences might impose an extra burden on the user, we also provide default preferences for the built-in sketched variables (like inequality, etc.); cf. the experimental section.

The language of Sketched Answer Set Programming (SkASP) supports some of the language features of ASP. It has the following characteristics:
- it allows for a set of rules of the form \( a \leftarrow b_1, \ldots, b_n, \text{not } c_1, \ldots, \text{not } c_m \);
- predicates (such as a predicate \( p/n \) or a comparison \( \leq \)) and operators (such as arithmetic \( +, - \)) in these rules can be sketched.

Definition 1 (The Problem of Sketched Answer Set Programming). Given a sketched answer set program \( P \) with sketched variables \( S \) of domain \( D \) and preferences \( f \), and positive and negative sets of examples \( E^+ \) and \( E^- \), the Sketched Answer Set Problem is to find all substitutions \( \theta : S \rightarrow D \) preferred by \( f \) such that \( \theta P \cup \{ e \} \) has an answer set for all \( e \) in \( E^+ \) and for no \( e \) in \( E^- \). The decision version of SkASP asks whether there exists such a substitution \( \theta \).

III. Rewriting Schema

One might consider a baseline approach that would enumerate all instances of the ASP sketch, and in this way produce one ASP program for each assignment, which could then be tested on the examples. This naive grounding-and-testing approach is, however, infeasible: the number of possible combinations grows exponentially with the number of sketched variables.
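To make the baseline concrete, here is a small Python sketch of generate-and-test completion for a toy two-hole sketch in the spirit of Sketch 1; the encoding of examples as pairs of queen coordinates is our own illustrative assumption. A candidate operator pair is kept only if every positive example satisfies the resulting constraints and every negative example violates at least one; even here the search is \(|domain|^{holes}\), which is what makes the naive approach infeasible for larger sketches.

```python
from itertools import product
import operator

# Domain of the sketched comparison operator ?=, as in Section II.
OPS = {"=": operator.eq, "!=": operator.ne, "<": operator.lt,
       ">": operator.gt, "<=": operator.le, ">=": operator.ge}

def violated(example, op_row, op_col):
    """An example is a ((Rw,Cw),(Rb,Cb)) pair of queen positions; a
    constraint body holding means the instantiated program rejects it."""
    (rw, cw), (rb, cb) = example
    return OPS[op_row](rw, rb) or OPS[op_col](cw, cb)

def complete_sketch(pos, neg):
    """Generate-and-test: try every operator pair, keep those that
    accept all positive and reject all negative examples."""
    return [(r, c) for r, c in product(OPS, OPS)
            if all(not violated(e, r, c) for e in pos)
            and all(violated(e, r, c) for e in neg)]

pos = [((1, 1), (2, 2)), ((1, 3), (3, 1)), ((2, 2), (1, 3))]
neg = [((1, 1), (1, 2)), ((2, 1), (3, 1))]
```

With these five hand-picked examples, the search converges to the single completion `("=", "=")`, i.e., two queens of different colors may not share a row or a column.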
E.g., for the sketch of the Radio Frequency Problem [7] there are around \( 10^5 \) possible assignments to the sketched variables. Multiplied by the number of examples, around a million ASP programs would have to be generated and tested. This is infeasible in practice.

The key idea behind our approach is to rewrite a SkASP program \( (P, S, D, f, E^+, E^-) \) into an ASP program such that the original sketched program has a solution iff the ASP program has an answer set. This is achieved by 1) inserting decision variables into the sketched predicates, and 2) introducing example identifiers in the predicates. The original SkASP problem is then turned into an ASP problem on these decision variables, and solutions to the ASP problem allow us to reconstruct the SkASP substitution. The rewriting procedure has four major steps: example expansion, substitution generation, predicate reification and constraint splitting. (Here we follow the notation on meta-ASP already used in the literature [21], [11].)

Example Identifiers. To allow the use of multiple examples in the program, every relevant predicate is extended with an extra argument that represents the example identifier. The following steps are used to accommodate this in the program, denoted as \( \text{meta}_E(P,S,E^+,E^-) \):
1) Let \( SP \) be the set of all predicates that depend on a predicate occurring in one of the examples.
2) Replace each literal \( p(t_1,\ldots,t_n) \) for a predicate \( p \in SP \) in the program \( P \) by the literal \( p(E,t_1,\ldots,t_n) \), where \( E \) is a variable not occurring in the program.
3) Add the guard \( \text{example}(E) \) (ranging over the indices of all pos./neg. examples) to the body of each rule in \( P \).
4) For each atom \( p(t_1,\ldots,t_n) \) in the \( i \)-th example, add the fact \( p(i,t_1,\ldots,t_n) \) to \( P \).
5) For each positive example \( i \), add the fact \( \text{positive}(i) \) to \( P \), and for each negative one, the fact \( \text{negative}(i) \).
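A string-level Python sketch of the example-identifier step (a toy rewriter, not a real ASP parser; the predicate list is supplied by hand and the guard name `example(E)` follows the text):

```python
import re

def add_example_id(rule, predicates):
    """Thread a fresh example-identifier variable E through the given
    predicates of a rule, and guard the body with example(E)."""
    for p in predicates:
        # rewrite every occurrence p(... into p(E,...
        rule = re.sub(rf"\b{re.escape(p)}\(", f"{p}(E,", rule)
    return rule.rstrip(".") + ", example(E)."
```

For instance, `reach(X) :- edge(X,Y).` becomes `reach(E,X) :- edge(E,X,Y), example(E).`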
E.g., the rule in Line 2 of Figure 1a becomes Line 9 of Figure 1b, and the example in Line 14 is rewritten as in Line 2.

Substitution Generation. We now introduce the decision variables, referred to as \( \text{meta}_D(S,D) \):
1) For each sketched variable \( s_i \) with domain \( D_i \), add the rule
\[ 1 \ \{ \text{decision}_{s_i}(X) : d_i(X) \} \ 1. \]
2) For each value \( v \) in \( D_i \), add the fact \( d_i(v) \).
This constraint ensures that each answer set has exactly one value from the domain assigned to each sketched variable. This results in a one-to-one mapping between decision atoms and the decision substitution \( \theta \). An example can be seen in Lines 4 and 5 of Figure 1b.

Predicate Reification. We now introduce the reified predicates, referred to as \( \text{meta}_R(S,D) \):
1) Replace each occurrence of a sketched atom \( ?s(t_1,\ldots,t_n) \) in a rule of \( P \) with the atom \( \text{reified}_s(D,t_1,\ldots,t_n) \), where \( D \) is a fresh variable, and add \( \text{decision}_s(D) \) to the body of the rule.
2) For each sketched variable \( s \) and value \( d_i \) in its domain, add the following rule to \( P \):
\[ \text{reified}_s(d_i,X_1,\ldots,X_n) \leftarrow d_i(X_1,\ldots,X_n). \]
where the first argument is the decision value for \( s \). Thus, semantically \( \text{reified}_s(d_i,X_1,\ldots,X_n) \) is equivalent to \( d_i(X_1,\ldots,X_n) \), and \( \text{decision}_s(d_i) \) indicates that the predicate \( d_i \) has been selected for the sketched variable \( s \). Notice that we focused here on the general case of a sketched predicate \( ?p(\ldots) \); it is straightforward to adapt this for the sketched inequality, negation and arithmetic. Examples of reification can be seen in Line 7 of Figure 1b for the sketched \( ?{=} \) of the sketch in Figure 1a, and in Lines 11, 12 for reified negation.
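The substitution-generation and reification steps are purely mechanical; a minimal Python emitter for the corresponding ASP text (the predicate-naming convention is our own illustrative assumption):

```python
def meta_d(domains):
    """For each sketched variable s with domain d_s, emit the choice
    rule forcing exactly one decision, plus the domain facts."""
    lines = []
    for s, values in domains.items():
        lines.append(f"1 {{ decision_{s}(X) : d_{s}(X) }} 1.")
        lines += [f"d_{s}({v})." for v in values]
    return lines

def meta_r(s, domain, arity):
    """Emit one reification bridge rule per domain value of s."""
    args = ",".join(f"X{i}" for i in range(1, arity + 1))
    return [f"reified_{s}({v},{args}) :- {v}({args})." for v in domain]
```

E.g., `meta_r("q", ["p1"], 2)` yields the single bridge rule `reified_q(p1,X1,X2) :- p1(X1,X2).`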
Integrity Constraint Splitting (referred to as \( \text{meta}_C(P) \)):
1) Replace each integrity constraint \( \leftarrow \text{body} \) with the two rules
\[ \leftarrow \text{body}, \text{positive}(E). \]
\[ \text{negsat}(E) \leftarrow \text{body}, \text{negative}(E). \]
2) And add the following rule to the program:
\[ \leftarrow \text{negative}(E), \text{not } \text{negsat}(E). \]
This will ensure that all positive examples and none of the negative ones have a solution. For example, the constraint in Line 4 of Figure 1a is rewritten into a positive constraint in Lines 14, 15 and into a negative one in Lines 16, 17.

Theorem 1 (Sound and Complete Sketched Rewriting). A sketched ASP program \( (P,S,D,f,E^+,E^-) \) has a satisfying substitution \( \theta \) iff the meta program
\[ T = \text{meta}_E(P,S,E^+,E^-) \cup \text{meta}_D(S,D) \cup \text{meta}_R(S,D) \cup \text{meta}_C(P) \]
has an answer set.

Interestingly, the sketched ASP problem is in the same complexity class as the original ASP program.

Theorem 2 (Complexity of the Sketched Answer Set Problem). The decision version of propositional SkASP is in NP.

Proof. Follows from the encoding of SkASP into a fragment of ASP which is in NP.

Another important result is that the preferences do not affect the decision complexity. Proofs can be found in the supplementary materials.

Dealing with preferences. Preferences are, as we shall show in our experiments, useful to restrict the number of solutions. We have implemented preferences using a post-processing approach (which also allows applying the schema to other formalisms such as CP or IDP [8]). We first generate the set of all solutions \( O \) (without taking the preferences into account), and then post-process \( O \). Basically, we filter out from \( O \) any solution that is not preferred (using tests on pairs \( (o,o') \) from \( O \times O \)). The preferences introduce a partial order on the solutions. For example, assume \( ?p \) (\( ?q \)) can take value \( p_1 \) (\( q_1 \)) with preference 1 and \( p_2 \) (\( q_2 \)) with preference 2.
If \( (p_1,q_2) \) and \( (p_2,q_1) \) are the only solutions, both are kept because they are incomparable: \( (1,2) \) is not dominated by \( (2,1) \) (and vice versa). If \( (p_1,q_1) \) is also a solution, \( (p_1,q_2) \) and \( (p_2,q_1) \) are removed because they are dominated by \( (p_1,q_1) \). While the number of potential answer sets is in general exponential for a sketched ASP, the number of programs actually satisfying the examples is typically rather small (in our experiments, below 10000-20000). If that is not the case, then the problem is under-constrained and needs more examples; no user would be able to go over a million proposed programs.

IV. System Extension: Aggregates and Use-Case

An aggregate \( \#agg \) is a function from a set of tuples to an integer. For example, \( \#\text{count}\{Column,Row : \text{queen}(Column,Row)\} \) counts the number of instances of \( \text{queen}(Column,Row) \) at the tuple level. Aggregates are often useful for modeling. However, adding aggregates to non-disjunctive ASP raises the complexity of the answer-set existence check, unless the aggregate dependencies are stratified [12]. It is possible to add aggregates to our system under the following restrictions: the stratified case, aggregates occur in the body in the form \( N = \#agg\{\ldots\} \), and the only sketched part is the aggregate function \( ?\#agg \), where \( \#agg \) can be \( max \), \( min \), \( count \) or \( sum \). This immediately allows us to model problems such as Equal Subset Sum (for details, see the repository), where one needs to split a list of values, specified as a binary predicate \( val(ID,Value) \), into two subsets such that the sums of both subsets are equal.
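For reference, the Equal Subset Sum task itself is easy to brute-force on small instances; a Python sketch (the plain list encoding of the `val(ID,Value)` facts is ours):

```python
from itertools import combinations

def equal_subset_split(values):
    """Brute-force Equal Subset Sum: split the value list into two
    disjoint, exhaustive index sets with equal sums; return one
    witness split or None if no split exists."""
    total = sum(values)
    if total % 2:                      # odd total: no equal split
        return None
    idx = range(len(values))
    for r in range(1, len(values)):    # both parts must be non-empty
        for part in combinations(idx, r):
            if 2 * sum(values[i] for i in part) == total:
                rest = tuple(i for i in idx if i not in part)
                return part, rest
    return None
```

E.g., `[3, 1, 4, 2]` splits into index sets summing to 5 each, while `[1, 2]` admits no equal split.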
Formally, each aggregate can be seen as an expression of the form:
\[ S = \#agg\{Z_1,\ldots,Z_n : cond(X_1,\ldots,X_k,Y_1,\ldots,Y_h,Z_1,\ldots,Z_n)\}, \]
where \( S \) is an integer output, and \( Y_1,\ldots,Y_h \), shortened as \( \bar{Y} \) (\( \bar{X} \) and \( \bar{Z} \) are analogous shorthands), are bound to other atoms in the rule, to which we refer as \( external(\bar{Y}) \) ("external" with respect to the condition in the aggregate; it is simply shorthand for a conjunction of atoms which share variables with the condition in the aggregate). To give an example of \( \bar{X}, \bar{Y}, \bar{Z} \) in a simple context: if we were to compute an average salary per department in a company, we might write a rule of the form:
\[ \text{avg\_sal}(A,D) :- A = \#avg\{S,N : \text{salaries}(N,S,D)\}, \text{department}(D). \]
Then \( \bar{Z} \) consists of the variable \( S \); \( D \) is the external variable (with respect to the condition in the aggregate), i.e., \( \bar{Y} \); and \( \bar{X} \) is composed of the variable \( N \), since it is used neither in the aggregation nor in the other atoms outside of the aggregate. A sketched aggregate \( ?\#agg \) can be reified similarly to the regular sketched atoms, e.g.:
\[ \text{reified}(S,\text{sum},\bar{Y}) \leftarrow S = \#\text{sum}\{\bar{Z} : cond(\bar{X},\bar{Y},\bar{Z})\}, external(\bar{Y}). \]
Similarly for the other aggregate functions; the same rules (e.g., the example extension) apply to aggregate reification.

With aggregates we can sketch a significantly larger class of problems. Consider the problem from the Functional Pearls collection: the "Finding celebrities" problem [5]. Problem statement: "Given a list of people at a party and for each person the list of people they know at the party, we want to find the celebrities at the party. A celebrity is a person that everybody at the party knows but that only knows other celebrities. At least one celebrity is present at the party." The sketch core looks as follows (names are shortened):
\[ n(N) :- N = ?\#\{ P : p(P) \}. \]
\[ :- c(C), p(C), n(N), S = ?\#\{ P : k(P,C), p(P) \}, S < N-1. \]
\[ :- c(C), p(C), \text{not } c(P), k(C,P). \]
The last rule is an integrity constraint verifying that no celebrity, \( c \), knows a person who is not a celebrity. The first line sketches a rule that should find which aggregation metric on the people (unary predicate person, \( p \)) should be used in the problem.
The sketched rule in the second line makes use of this metric, denoted as \( n \), and says that an aggregation should be performed on the binary "knows" predicate, \( k \) (indicating that one person knows another); so the outcome of the sketched aggregation over the connections between people should be compared to an overall metric on all people individually. (ASP code: hakank.org/answer_set_programming/finding_celbrities.lp4)

V. Experimental Evaluation

For the experimental evaluation we have created a dataset consisting of 21 classical combinatorial problems, most of which are NP-complete. For the problem list and the precise sketch specifications used in the experiments, we refer to Table I. All problems, their code, and implementation details can be found in the accompanying GitHub repository: https://github.com/SergeyParamonov/sketching

**Dataset of Sketches.** The key challenge in evaluating program synthesis techniques such as SkASP is the absence of benchmark datasets (as are available for more typical machine learning tasks). At the same time, although many example ASP programs are available in blogs and books or come with software, these typically employ advanced features (such as incremental grounding, optimization or external sources) which are not yet supported by SkASP. Therefore we had to design our own dataset in a systematic way (and put it in the public domain). The dataset is based on a systematic concept (the 21 problems by Karp). When we could find encodings for these problems (such as Sudoku in Figure 1c from [18] and Hamiltonian Cycle in Figure 1a from [13]), we took these as a starting point and then created a solution according to the standard generate-and-test development methodology of ASP. Specifically (see Q5), we looked for different encodings in the public domain of ASP's favorite, the N-queens problem (these encodings can tackle even its NP-complete version [16]).
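As a cross-check of the celebrity specification quoted in the previous section, here is a brute-force Python search; the pair-set encoding of the "knows" relation is our own illustrative assumption:

```python
from itertools import combinations

def find_celebrities(people, knows):
    """Brute force matching the specification: everybody knows each
    celebrity, and celebrities know only (other) celebrities.
    `knows` is a set of (a, b) pairs meaning "a knows b"."""
    for r in range(len(people), 0, -1):   # larger sets first, so mutually
        for cand in combinations(sorted(people), r):  # acquainted celebrities
            cset = set(cand)                          # are found together
            everybody_knows = all((p, c) in knows
                                  for c in cset
                                  for p in people if p != c)
            knows_only_celebs = all(b in cset
                                    for (a, b) in knows
                                    if a in cset and b != a)
            if everybody_knows and knows_only_celebs:
                return cset
    return None
```

E.g., with people `{a, b, c}` where everyone knows `c` and `c` knows nobody, only `{c}` qualifies.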
After creating all the ASP programs, we turned them into sketches by looking for meaningful opportunities to use sketched variables. We introduced sketched variables to replace operators (equalities and inequalities), to replace arithmetic (such as plus and minus), to decide whether or not to use negated literals, and to make an abstraction of predicates for which another predicate existed with the same signature. Finally, we had to select the examples in a meaningful way, that is, we selected examples that would be informative (as a user of SkASP would also do). Positive examples were selected more or less at random; negative examples are meant to violate one or more of the properties of the problem. Furthermore, we also tried to select examples that carry different information (again, as a user of SkASP would do). We selected between 4 and 7 examples for each model. Where relevant in the experiments, we sampled the sketched variables or the examples (e.g., Q3, Q5).

**Experimental questions** are designed to evaluate how usable SkASP is in practice. Users generally want to provide only a few examples (Q1-Q3), to obtain a small number of solutions, ideally only one (Q1-Q3); the examples should be small (Q4); the solutions should be correct (all); users want to know whether and when to use preferences (Q2), and how robust the technique is to changes in the encoding (Q5), as it is well known in ASP that small changes in an encoding can have large effects. Finally, they are typically interested in how the learned programs change as the sketches become more complex (Q3). With this in mind, we have designed and investigated the following experimental questions:
- **Q1:** What is the relationship between the number of examples and the number of solutions? How many examples does it take to converge?
- **Q2:** Are preferences useful?
- **Q3:** What is the effect of the number of sketched variables on the convergence and correctness of the learned programs?
- **Q4:** Do models learned on examples with small parameter values generalize to models with larger parameter values?
- **Q5:** What is the effect of encoding variations on the number of solutions and their correctness?

**Implementation details and limitations.** The SkASP engine is written in Python 3.4 and requires pyasp. All examples have been run on 64-bit Ubuntu 14.04 and tested with Clingo 5.2.0. The current implementation does not support certain language constructs such as choice rules or optimization. In the experiments we use the default preferences for the built-in inequality sketch \( X \mathrel{?{=}} Y \): namely, = and != have equal maximal preference. A user can redefine the preferences. Our experiments indicate that for the other sketched types (e.g., arithmetic) no default preferences are needed.

We investigate Q1 by measuring the impact of the number of examples on the number of solutions for the 21 SkASP problems. An interesting observation is that even if the user wants to solve, say, the 20 x 20 Latin Square, she does not need to provide examples of size 20 x 20 = 400. She can simply provide 3 x 3 examples and our SkASP program will learn the generic Latin Square program (see Figure 1d). Figure 2a shows how the number of solutions for some of our 21 SkASP problems depends on the number of examples. In some cases, 7 examples are sufficient to converge to a single solution, e.g., FAP and B&W Queens. On some other problems, however, after 7 examples there still remain many solutions (on average 18 for problems that do not converge).
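The small examples mentioned above (e.g., 3 x 3 Latin squares) can be validated in a few lines of Python before being fed to SkASP as positive or negative examples; the list-of-rows grid encoding is our own illustrative assumption:

```python
def is_latin_square(grid):
    """Check the Latin-square property for an n x n grid: every row
    and every column holds each of 1..n exactly once."""
    n = len(grid)
    want = set(range(1, n + 1))
    rows_ok = all(set(row) == want for row in grid)
    cols_ok = all({grid[r][c] for r in range(n)} == want
                  for c in range(n))
    return rows_ok and cols_ok
```

A grid whose rows are fine but whose columns repeat a value is exactly the kind of informative negative example the dataset uses.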
Figure 2b reports the same information as Figure 2a for all 21 problems: the average number of solutions; the average on the 9 problems that converge within 7 examples, referred to as the easy problems; and the average on the 12 that still have several solutions after 7 examples, referred to as the hard problems.

<table>
<thead>
<tr> <th>Problem</th> <th># Sketched</th> <th># =</th> <th># !=</th> <th># ?not</th> <th># ?p</th> <th># Rules</th> </tr>
</thead>
<tbody>
<tr> <td>Graph Clique</td> <td>3</td> <td>1</td> <td>0</td> <td>0</td> <td>2</td> <td>4</td> </tr>
<tr> <td>3D Matching</td> <td>3</td> <td>3</td> <td>0</td> <td>0</td> <td>0</td> <td>1</td> </tr>
<tr> <td>Graph Coloring</td> <td>7</td> <td>4</td> <td>0</td> <td>0</td> <td>3</td> <td>2</td> </tr>
<tr> <td>Domination Set</td> <td>3</td> <td>0</td> <td>0</td> <td>1</td> <td>2</td> <td>5</td> </tr>
<tr> <td>Exact Cover</td> <td>7</td> <td>2</td> <td>0</td> <td>1</td> <td>4</td> <td>3</td> </tr>
<tr> <td>Sudoku</td> <td>5</td> <td>4</td> <td>0</td> <td>1</td> <td>0</td> <td>4</td> </tr>
<tr> <td>B&amp;W Queens</td> <td>5</td> <td>3</td> <td>2</td> <td>0</td> <td>0</td> <td>3</td> </tr>
<tr> <td>Hitting Set</td> <td>3</td> <td>0</td> <td>0</td> <td>1</td> <td>2</td> <td>2</td> </tr>
<tr> <td>FAP</td> <td>3</td> <td>0</td> <td>0</td> <td>1</td> <td>2</td> <td>5</td> </tr>
<tr> <td>Feedback Arc Set</td> <td>4</td> <td>0</td> <td>0</td> <td>2</td> <td>2</td> <td>3</td> </tr>
<tr> <td>Latin Square</td> <td>4</td> <td>4</td> <td>0</td> <td>0</td> <td>0</td> <td>2</td> </tr>
<tr> <td>Edge Domination</td> <td>3</td> <td>0</td> <td>0</td> <td>1</td> <td>2</td> <td>5</td> </tr>
<tr> <td>FAP</td> <td>5</td> <td>3</td> <td>2</td> <td>0</td> <td>0</td> <td>3</td> </tr>
<tr> <td>Set Packing</td> <td>4</td> <td>2</td> <td>0</td> <td>0</td> <td>2</td> <td>1</td> </tr>
<tr> <td>Clique Cover</td> <td>4</td> <td>3</td> <td>0</td> <td>1</td> <td>0</td> <td>3</td> </tr>
<tr> <td>Feedback Set</td> <td>5</td> <td>0</td> <td>0</td> <td>5</td> <td>0</td> <td>3</td> </tr>
<tr> <td>Edge Coloring</td> <td>3</td> <td>3</td> <td>0</td> <td>0</td> <td>0</td> <td>3</td> </tr>
<tr> <td>Set Splitting</td> <td>5</td> <td>2</td> <td>0</td> <td>1</td> <td>2</td> <td>3</td> </tr>
<tr> <td>N Queens</td> <td>6</td> <td>4</td> <td>2</td> <td>0</td> <td>0</td> <td>3</td> </tr>
<tr> <td>Vertex Cover</td> <td>3</td> <td>0</td> <td>0</td> <td>1</td> <td>2</td> <td>4</td> </tr>
<tr> <td>Subg. Isomorph.</td> <td>5</td> <td>2</td> <td>0</td> <td>1</td> <td>2</td> <td>4</td> </tr>
</tbody>
</table>

TABLE I: Dataset summary: the number of sketched variables, of rules, and of particular types of sketched variables; e.g., "# ?not" indicates how many atoms with the sketched negation are in the program.

When SkASP does not converge to a unique solution, this leaves the user with choices, often amongst equivalent ASP programs, which is undesirable. For problems that do not converge after a few examples, we propose to use preferences, as provided by our SkASP framework. We use the default preferences described earlier. We investigate Q2 by measuring again the impact of the number of examples on the number of solutions. In Figure 2c, we observe that all problems have converged in fewer than 7 examples (under default preferences). The impact of preferences on the speed of convergence is even more visible on the whole set of problems, as reported in Figure 2b. The number of solutions with preferences is smaller, and often much smaller, than without preferences, whatever the number of examples provided. With preferences, all our 21 problems are learned with 7 examples. To analyze the number of solutions in Q3, we look into the convergence of FAP while varying the number of sketched variables. The original sketched program of FAP contains 5 sketched variables. We vary this number from 2 to 5 by turning 3, 2, 1, or 0 sketched variables into their actual values (chosen randomly and repeated over multiple runs).
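The Q3 setup, fixing all but k sketched variables to concrete domain values, can be sketched as follows (variable names and domains are illustrative; the random choice mirrors "chosen randomly and repeated over multiple runs"):

```python
import random

def partially_instantiate(sketch_vars, k, rng):
    """Fix all but k sketched variables to a concrete value drawn
    from their domains; return the fixed assignment and the set of
    variables left open."""
    keep_open = set(rng.sample(sorted(sketch_vars), k))
    fixed = {s: rng.choice(dom) for s, dom in sketch_vars.items()
             if s not in keep_open}
    return fixed, keep_open
```

Repeating this over many seeded runs yields sketches with 2 to 5 open holes from a 5-hole original.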
As expected, in Figure 2d, we observe that the more sketched variables there are in the SkASP, the slower the number of solutions decreases. Furthermore, the number of sketched variables has a greater impact on the convergence without preferences, as we see in Figure 2e. After 3-5 examples under preferences we have fewer than 10 solutions, while without preferences there are still dozens or hundreds of solutions. To analyze correctness in $Q_4$, we first need to define it. Informally, we mean that the program classifies arbitrary examples correctly, i.e., positive as positive, etc. A typical metric to measure this is accuracy. However, there are no well-defined arbitrary positive and negative examples for most problems: what is an arbitrary random example for Feedback Arc Set? Problems like Sudoku and N-queens do have standard examples because they are parameterized with a single parameter, which has a default value. Furthermore, for the standard 8-queens we know all solutions analytically, i.e., 92 combinations. Another issue is that the negative and positive classes are unbalanced. The usual way to address this issue is to use precision, i.e., True Positive / (True Positive + False Positive). (Recall is typically one because the incorrect programs produce way too many solutions, which include the correct ones.) In Figure 21 we see that in all cases we were able to reach the correct solution (here the locations of sketched variables were fixed as specified in the dataset), while increasing the number of sketched variables generally decreases the precision. To investigate $Q_4$ further, we have used the Latin Square from Listing 13. We have used examples for Latin Square 3 × 3, and verified its correctness on Latin Square 4 × 4 (which can be checked analytically because all solutions are known). We have discovered that there is an inverse dependency between the number of solutions and accuracy, see Figure 34.
This happens because there are typically very few useful or “intended” programs, while there are a lot of incorrect ones. To investigate $Q_5$, we have focused on the N-queens problem and collected several encodings from multiple sources: Potassco, Hakank.org, an ASP course by Tran Cao Son, and our own encoding. While all the encodings model the same problem, they show significant variations in expressing constraints. To reduce the bias in how the sketched variables are introduced and to measure the parameters systematically, we pick sketched variables randomly (inequalities and arithmetic) and use the same examples from our dataset (randomly picking the correct amount) for all models. For each encoding we have introduced 5 sketched variables and measured the number of solutions and precision; the results have been averaged over 100 runs. In Figure 35, while there is a certain variation in the number of solutions, the encodings demonstrate similar behavior. There is also a slight variation in precision: three out of four encodings clearly reach above 90% precision (one of them reaching 100%), while one gets 82%. Thus, despite variations in encoding, they generally behave similarly on the key metrics. Overall, we observe that only a few examples are needed to converge to a unique solution or a small group of equivalent solutions. An example where such equivalent solutions are found is the edge coloring problem, where two constraints that are equivalent for undirected graphs are found: \[ \text{:- } \text{color}(X, Y_1, C),\ \text{color}(X, Y_2, C),\ Y_1 \neq Y_2. \] \[ \text{:- } \text{color}(X_1, Y, C),\ \text{color}(X_2, Y, C),\ X_1 \neq X_2. \] For this problem these two constraints are equivalent and cannot be differentiated by any valid example.
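The claimed equivalence can be checked by brute force. The following Python sketch (illustrative, not ASP) enumerates all colorings of a triangle whose edges are stored symmetrically, as is usual for undirected graphs, and confirms that the two constraints reject exactly the same colorings:

```python
from itertools import product

# A triangle, with edges stored in both directions (undirected graph).
edges = [(1, 2), (2, 3), (1, 3)]
sym = edges + [(v, u) for (u, v) in edges]

def violates_first(coloring):
    # :- color(X, Y1, C), color(X, Y2, C), Y1 != Y2.
    return any(coloring[(x, y1)] == coloring[(x2, y2)]
               for (x, y1) in sym for (x2, y2) in sym
               if x == x2 and y1 != y2)

def violates_second(coloring):
    # :- color(X1, Y, C), color(X2, Y, C), X1 != X2.
    return any(coloring[(x1, y)] == coloring[(x2, y2)]
               for (x1, y) in sym for (x2, y2) in sym
               if y == y2 and x1 != x2)

# Enumerate all 3-colorings of the three undirected edges.
for assignment in product(range(3), repeat=len(edges)):
    coloring = {e: c for e, c in zip(edges, assignment)}
    coloring.update({(v, u): c for (u, v), c in list(coloring.items())})
    assert violates_first(coloring) == violates_second(coloring)
```

The equivalence hinges on the symmetric storage: if color(X, Y, C) holds, so does color(Y, X, C), so any pair of edges sharing a first endpoint also shares a second endpoint in the mirrored atoms.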
An interesting observation we made in these experiments is that the hardness (e.g., in terms of runtime) of searching for a solution of a problem is not directly connected to the hardness of learning the constraints of this problem. This can be explained by the incomparability of the search spaces: SkASP searches through the space of sketched variables, which is usually much smaller than the search space over the decision variables of the problem to learn.

VI. RELATED WORK

The problem of sketched ASP is related to a number of topics. First, the idea of sketching originates from the area of programming languages, where it relates to so-called self-completing programs [25], typically in C [24] and in Java [19], where an imperative program has a question mark instead of a constant and the programmer provides a number of examples to find the right substitution for it. While sketching has been used in imperative programming languages, it has, to the best of the authors’ knowledge, never been applied to ASP and constraint programming. What is also new is that the sketched ASP is solved using a standard ASP solver, i.e., ASP itself. Second, there is a connection to the field of inductive (logic) programming (ILP) [9], [28], [17]. An example is meta-interpretive learning [29], [30], where a Prolog program is learned based on a set of higher-order rules, which act as a kind of template that can be used to complete the program. However, meta-interpretive learning differs from SkASP in that it induces full programs and pursues, as other ILP methods do, a search- and trace-based approach guided by generality, whereas SkASP uses a constraint solver (i.e., ASP itself) directly. Furthermore, the target is different in that ASP programs are learned, which include constraints. SkASP also relates to meta-interpretation in ASP [11] in its rule and decision materialization.
The purpose is, however, different: they aim at synthesizing a program of higher complexity ($\Sigma^P_2$) given programs of lower complexity (NP and Co-NP). There are also interesting works at the intersection of ILP, program synthesis, and ASP [21], [23], [33]. The ILASP system [22] learns an ASP program from examples and a set of modes, while minimizing a metric, typically the number of atoms. This program, learned completely from scratch, is not necessarily the best program from the user’s point of view, and this may limit the possibility to localize the uncertainty based on the user’s knowledge of the problem. Indeed, if all sketched predicates are added in the modes with corresponding background knowledge, then the set of hypotheses of sketched ASP is a subset of that of ILASP. However, if we specify a sketched constraint \( \text{:- } \text{p}(X), \text{q}(Y), X \neq Y \) as modes for ILASP [22], it would learn a program like \( \text{:- } \text{p}(X) \) (the minimal program), but that is clearly not the program intended by the sketch. Furthermore, we compute all preferred programs instead of a single solution. Third, there is also work on constraint learning, where systems such as CONACQ [4], [2] and QUACQ [4] learn a set of propositional constraints, and ModelSeeker [1] learns global constraints governing a particular set of examples. The subject has also been investigated in an ILP setting [20]. However, the idea in all these approaches is to learn the complete specification of a CSP from scratch. Our setting is probably more realistic from a user perspective, as it allows the user to exploit the knowledge that they no doubt possess about the underlying problem, and it also requires far fewer examples. On the other hand, SkASP also makes, as other sketching approaches do, the strong assumption that the intended target program is an instance of the sketched one. This may not always be true, for instance, when rules are missing in the program. This is an interesting issue for further research.
Fourth, our approach is related to debugging of ASP [14], [31]. Unlike SkASP, such debuggers can be used to locate bugs, but they typically do not provide help in fixing them. On the other hand, once a bug is identified, SkASP could be used to repair it by introducing a sketch and a number of examples. The approach of [20] is based on the classical ILP techniques of generalization and specialization and does not provide the freedom to indicate uncertain parts of the program.

VII. DISCUSSION AND CONCLUSIONS

Our contribution is four-fold: we have introduced the problem of sketched ASP; we have provided a rewriting schema for SkASP; we have created a dataset of sketches; and we have evaluated our approach empirically, demonstrating its efficiency and effectiveness. User interaction is an interesting future direction, namely suggesting constraints and examples. For the former, if we are not able to reject a negative example, we can construct a constraint that rejects the negative example and none of the positive examples. As for the examples, if we have two solutions to a problem, we can generate an example discriminating between them and ask the user to classify it; this might not always be possible, since symmetric assignments might lead to semantically identical programs. In practice, however, this might be an important addition to simplify sketching for end users. Another direction is to incorporate non-constant preference handling into the model using extensions of ASP for preference handling, such as asprin [6].

During the experiments, we stumbled upon a peculiar bug. One ASP encoding that we discovered in a public repository worked mostly by pure luck. The constraint

:- queen(X1,Y1), queen(X2,Y2), X1 < X2, abs(Y1-X1) == abs(Y2-X2).

works because abs here is not actually the absolute value but an uninterpreted function: abs(A) == abs(B) effectively checks A == B, i.e., Y1-X1 == Y2-X2, and that is indeed the solution that was found.
(Bugs of this kind would be extremely hard to find using traditional debuggers, since technically the encoding produced correct solutions.) Also, while working on the aggregate extension use case, we discovered a subtle bug: the case of a single celebrity was not handled correctly. In both cases, the author has been contacted and the models have been updated.
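To see why the uninterpreted abs changes the semantics, consider this hypothetical Python illustration (not clingo itself). Under syntactic equality of uninterpreted terms, abs(A) == abs(B) reduces to A == B, the intended “same /-diagonal” test; the interpreted abs would additionally fire when Y1-X1 == -(Y2-X2), forbidding placements that do not attack at all:

```python
def interpreted_check(x1, y1, x2, y2):
    # abs as the real absolute value
    return abs(y1 - x1) == abs(y2 - x2)

def uninterpreted_check(x1, y1, x2, y2):
    # abs as an uninterpreted function symbol:
    # abs(A) == abs(B) holds only if A == B syntactically
    return (y1 - x1) == (y2 - x2)

# Queens at (1,3) and (2,0) share no row, column, or diagonal,
# yet the interpreted abs would forbid the pair:
assert interpreted_check(1, 3, 2, 0)
assert not uninterpreted_check(1, 3, 2, 0)
```

So the “buggy” encoding works because the uninterpreted reading happens to coincide with a correct diagonal constraint, while the interpreted reading would over-constrain the problem.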
Parser-Directed Fuzzing

Björn Mathis, CISPA Helmholtz Center for Information Security, Saarbrücken, Saarland, Germany, bjoern.mathis@cispa.saarland
Rahul Gopinath, CISPA Helmholtz Center for Information Security, Saarbrücken, Saarland, Germany, rahul.gopinath@cispa.saarland
Michaël Mera, CISPA Helmholtz Center for Information Security, Saarbrücken, Saarland, Germany, michael.mera@cispa.saarland
Alexander Kampmann, CISPA Helmholtz Center for Information Security, Saarbrücken, Saarland, Germany, alexander.kampmann@cispa.saarland
Matthias Höschele, CISPA Helmholtz Center for Information Security, Saarbrücken, Saarland, Germany, hoeschele@cs.uni-saarland.de
Andreas Zeller, CISPA Helmholtz Center for Information Security, Saarbrücken, Saarland, Germany, zeller@cispa.saarland

Abstract

To be effective, software test generation needs to well cover the space of possible inputs. Traditional fuzzing generates large numbers of random inputs, which however are unlikely to contain keywords and other specific inputs of non-trivial input languages. Constraint-based test generation solves conditions of paths leading to uncovered code, but fails on programs with complex input conditions because of path explosion. In this paper, we present a test generation technique specifically directed at input parsers. We systematically produce inputs for the parser and track comparisons made; after every rejection, we satisfy the comparisons leading to rejection. This approach effectively covers the input space: evaluated on five subjects, from CSV files to JavaScript, our pFuzzer prototype covers more tokens than both randomized and constraint-based approaches, while requiring no symbolic analysis and far fewer tests than random fuzzers.

CCS Concepts: • Security and privacy → Software security engineering.
Keywords: fuzzing, test generation, parsers, security

1 Introduction

The field of software test generation aims at improving reliability and robustness of software by subjecting it to artificially generated inputs. Since every behavior of a program, including unwanted ones, can be triggered through its inputs, it is important to have valid inputs that reach program functionality without being rejected as invalid, and to have a large variety in these inputs in order to cover a maximum of functionality. So far, both goals have been addressed by two testing strategies.

• Traditional fuzzing [37] operates at the lexical level, quickly producing large numbers of inputs with random characters. Fuzzing is very easy to deploy, and quickly finds bugs in lexical and syntactical processing. For programs with nontrivial input languages, though, a pure random approach is unlikely to generate complex input structures such as keywords: the odds of producing a string like "while" by pure chance from letters alone are $1 : 26^5$ (about 1 in 11 million), not to speak of a condition or a statement that would have to follow the keyword.

• Constraint-based test generation [5] operates at the semantic level, considering the semantics of the program under test. It satisfies conditions on the path towards (yet) uncovered code, using constraint solving and symbolic analysis to easily solve short paths, especially at the function level. The problem of constraint-based test generators, however, is scalability: for nontrivial input languages, they quickly suffer from a combinatorial explosion of paths to be explored.

In the above context, a “nontrivial” input language need not be a programming language (although these probably rank among the most complex input languages). The Wikipedia page on file formats [33] lists 1,435 data input formats, from AAC to ZZT; while all of these formats have at least one parser, few of these formats have machine-readable grammars or other language specifications.
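The back-of-the-envelope odds quoted for the keyword example are easy to check:

```python
# Number of 5-character strings over the 26 lowercase letters;
# exactly one of them is the keyword "while".
combinations = 26 ** 5
print(combinations)  # 11881376, i.e. roughly 1 chance in 11.9 million
```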
Even if a program accepts a data exchange format with a fixed syntax such as XML or JSON, finding the valid tags and labels it accepts will be hard. We thus pose the input language challenge: given a program with a nontrivial input language, the challenge is to find a test generator that covers all lexical and syntactical features of that language. In this paper, we propose a novel syntax-driven approach to the problem of generating plausible inputs. We call it parser-directed fuzzing, as it specifically targets the syntactic processing of inputs via input parsers. Our assumptions are that the program under test (1) processes input elements (characters) sequentially, that it (2) compares these elements against possible valid values, and that it (3) either accepts inputs as valid or rejects them as invalid, i.e., the typical features of a syntactic parser. We also assume that we can track comparisons of input characters; we do this through dynamic tainting of inputs, which allows us to relate each value processed to the input character(s) it is derived from. Our approach relies on the observation that parsers use a lookahead of a limited number of characters, which is very often just a single character. Hence, rather than trying to solve the complete input, we look for any character that lets the parser advance further without erroring out. Our approach, illustrated in Figure 1, specializes towards parsers that process one input character at a time, joining characters into tokens and these again into syntactic structures, that is, the “textbook” approach to parsing. It starts by feeding a small fixed string to the program (in general one random character¹). This string is typically rejected by the input parser. However, on rejection of the input, our fuzzer derives the character comparisons made at each index of the input. That is, it identifies the valid prefix in the input, as well as the character comparisons made to the first invalid character.
The fuzzer then corrects the invalid character to pass one of the character comparisons made at that index, and the new string is fed back to the parser. This new string will typically fail at the next character, at which point we repeat the process. This process continues until the parsing phase ends, that is, until the string fed to the program is accepted by the parser. The complete valid input is saved as a possible input for the program. After illustrating our approach on a detailed example (Section 2), we present our two key contributions:

1. A test generator aimed at input parsers. Our approach, discussed in Section 3, is the first test generation approach that systematically explores and covers the syntactical input space as accepted by input parsers. It only requires dynamic tainting, tracking of comparisons, and structural coverage information, which is far more lightweight than symbolic analysis. Our approach is guaranteed to produce valid inputs for input parsers that scan ahead a fixed number of characters.

2. A comparison with lexical and semantic test generators. In our evaluation on five subjects from CSV to JavaScript (Section 5), we approximate coverage of the input space by assessing the set of valid tokens produced; clearly, if a test generator fails to produce some language token, it also cannot cover the associated program features. We show that our syntax-driven approach covers the input space better than state-of-the-art “lexical” mutation-based fuzzers such as AFL [37], while requiring fewer tests by orders of magnitude; it also outperforms state-of-the-art “semantic” constraint-based fuzzers such as KLEE [5]. All of our inputs are syntactically valid by construction.

After discussing related work (Section 6), Section 7 points out current limitations and directions for future work. Section 8 concludes the paper.
2 Parser-Directed Fuzzing in a Nutshell

Consider the following problem: given a program $P$, how can we exhaustively cover the syntactic features of its input language? To illustrate our approach, let us assume we want to exhaustively test the program $P$. We know nothing about $P$; in particular, we have no documentation or example inputs. What we know, though, is that

1. $P$ accepts some input $I$ sequentially as a string of characters; and that
2. $P$ can tell us whether $I$ is a valid or an invalid input.

We further assume that we can observe $P$ processing $I$: specifically, we need to be able to observe the dynamic data flow of input characters from $I$ as $P$ processes them. Figure 1 illustrates how we explore the capabilities of $P$’s input parser by means of directed test generation. The key idea is to observe all comparisons an input character goes through, and to systematically satisfy and cover the alternatives, notably on rejection. We start with an empty string as input, which is rejected by $P$ as invalid immediately, as EOF is encountered. The EOF is detected as any operation that tries to access past the end of a given argument. This error is fixed in the next round by testing $P$ with a random string, say “$A$” ($I$ = "A"). Indeed, this input is also rejected by $P$ as invalid. However, before rejecting the input, $P$ checks $I$ for a number of properties. Only after these checks fail does $P$ reject the input:

1. Does $I$ start with a digit?
2. Does $I$ start with a ‘+’ character?
3. Does $I$ start with a ‘-’ or a ‘(’ character?

All these conditions are easy to satisfy, though, and this is a general property of parsers, which typically only consider the single next character. Our test generator picks one of these conditions and satisfies it, replacing the rejected character by one that passes the check, say the digit ‘1’.

¹Ignoring EOF detection for the time being. See Section 3 for details.
Having satisfied one of the recorded comparisons, we obtain a valid prefix, say "1+". To continue, we may append a random character to this prefix and get, for instance, "1+B" as input. This is rejected, but only after the parser again checks for a number of expected characters that could follow. These are the same checks already performed on the input "A": digits, parentheses, "+", and "-". We randomly choose one of these conditions and satisfy it; note that a prefix such as "1+" would be invalid on its own, so we keep extending it until the parser accepts. By continuing this process, we thus obtain more and more inputs that systematically cover the capabilities of the parser. In the end, we obtain a set of legal inputs that covers all the conditions encountered during parsing: \[ 1 \quad (1) \quad +1 \quad -1 \quad 1{+}1 \quad \text{etc.} \] We see that our mystery program takes arithmetic expressions as inputs. Furthermore, instead of randomly guessing a correct combination, the input is built character by character. Thus, building a valid input of size $n$ takes in the worst case $2n$ guesses (assuming the parser only checks for valid substitutions for the rejected character). One might ask: how likely is it that one can instrument the program under test but does not have enough knowledge about the input format to apply more sophisticated fuzzing approaches (like grammar-based fuzzing)? Obviously, any test generation approach, including parser-directed fuzzing, can benefit from human knowledge, be it a grammar or just the available keywords. Despite this, there are some good reasons to still use parser-directed fuzzing: - First, our approach is also applicable to binary code, as it only relies on dynamic tainting and the comparisons made during execution, both of which are available for binary code as well. - Second, even if we knew the grammar, it is often not available in a machine-readable form, and creating a correct grammar is a time-consuming task.
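The character-by-character loop sketched above can be condensed into a few dozen lines of Python. This is our own minimal re-implementation of the idea for a digit-expression parser, not pFuzzer itself: the parser records every character set it compares the input against, and the fuzzer satisfies the comparison at the first rejected position (appending when the parser looked past the end).

```python
import random

DIGITS = "0123456789"

class Parser:
    """Recursive-descent parser for expr := term {("+" | "-") term}."""
    def __init__(self, s):
        self.s, self.i, self.comps = s, 0, []

    def expect_in(self, chars):
        # Record the comparison made at the current index, then check it.
        self.comps.append((self.i, chars))
        if self.i >= len(self.s):
            raise EOFError      # parser looked past the end of the input
        return self.s[self.i] in chars

    def term(self):
        if not self.expect_in(DIGITS):
            raise ValueError
        self.i += 1

    def expr(self):
        self.term()
        while self.i < len(self.s):
            if not self.expect_in("+-"):
                raise ValueError
            self.i += 1
            self.term()

def run(s):
    p = Parser(s)
    try:
        p.expr()
        return p.i == len(p.s), p.comps
    except (ValueError, EOFError):
        return False, p.comps

def parser_directed_fuzz(seed="A", rounds=50):
    inp = seed
    for _ in range(rounds):
        ok, comps = run(inp)
        if ok:
            return inp
        idx, expected = comps[-1]               # comparisons at failure index
        fix = random.choice(expected)
        inp = inp[:idx] + fix + inp[idx + 1:]   # replace (or append at EOF)
    return None

random.seed(1)
result = parser_directed_fuzz()
assert result is not None and run(result)[0]
```

Starting from the rejected seed "A", a single substitution already yields a valid input; longer expressions emerge as the loop keeps extending accepted prefixes.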
And even if a formal grammar exists, it could contain errors or encode more or fewer features than are implemented in the code. Concluding, parser-directed fuzzing makes it possible to test the code fully automatically and without any prejudice.

3 Testing Input Parsers

What we thus introduce in this work is a test generator specifically built to explore input parsers. Our fuzzer combines the best of both lexical and constraint-based approaches: it solves an easy constraint, namely replacing the character that was last compared with one of the values it was compared to. On the other hand, similar to random fuzzing, parser-directed fuzzing produces new inputs rapidly and verifies the correctness of each input using the program under test. With this method, the search space is reduced significantly: the number of possible replacements at each position of the input is constrained by the comparisons made at this position (namely the ones the input parser expects to be valid). Identifying a replacement for the last character is computationally cheap, especially compared to full-fledged constraint solving. This combination of character replacement and semi-random fuzzing on a small search space makes parser-directed fuzzing efficient and effective compared to its competition. While the idea is simple in principle, it hides some complexities in practice. For example, consider a simple parenthesis input language which requires well-balanced open and close parentheses. For this input language, at each step, a parser would compare the last character of a valid prefix with both ‘(’ and ‘)’. Hence, to extend the valid prefix, say "(((", one could choose either ‘(’ or ‘)’. The problem is that, when one appends an open parenthesis, a corresponding close parenthesis has to be inserted at some point in the future, and we are relying on a random choice between ‘(’ and ‘)’.
The probability of closing after $n$ steps is given by $\frac{1}{n+1}$,² and continues to decrease as we add more characters. Hence, relying on random choice to get a valid complete string does not work in practice. Naively, one could think of using depth- or breadth-first search to explore the possible character replacements. Depth-first search is fast in generating large prefixes of inputs, but may not be able to close them properly, as we have seen before, and may therefore get stuck in a generation loop. Breadth-first search, on the other hand, explores all combinations of possible inputs on a shallow level and is therefore helpful in closing prefixes (like appending closing parentheses). Generating a large prefix is, however, hard, as we have to deal with the combinatorial explosion. A combination of both would be useful for simple input structures, but fails for more complex ones, as depth-first search might open more elements that need to be closed than the breadth-first search can close within a given computational budget. pFuzzer uses a heuristic search to guide the input generation through the large search space of possible inputs. It primarily takes structural coverage information into account when deciding which inputs should be executed next. Structural coverage alone, though, would lead to a kind of depth-first search, which would generate large and complex prefixes that are very hard to close and will likely never end up in a valid input. Thus, the heuristic also tries to guide the search to a closing path, which leads to smaller but valid and diverse inputs. The heuristic value used in pFuzzer is based on branch coverage. However, we do not apply coverage naively, but rather combine different perspectives on the covered code to help guide the search through the search space based on the context.

3.1 Achieving Coverage

Algorithm 1 sketches the general generation loop of our approach.
We start with an empty set that will later contain the branches covered by valid inputs (Line 2). Furthermore, we store all non-executed inputs in a priority queue throughout fuzzing (Line 3). The inputs in the queue are primarily sorted based on the heuristic defined in the procedure starting at Line 43: first, the number of newly covered branches of the parent is taken (Line 44); then the length of the input is subtracted and two times the length of the replacement is added (Line 45). Using the length of the input avoids a depth-first search based on coverage, as very large inputs get less priority. By using the length of the replacement we can lead the algorithm to more interesting values, e.g. values that stem from string comparisons. Such replacements will likely lead to the complex input structures we want to cover. Furthermore, recursive-descent parsers increase their stack when new syntactical features are discovered (e.g. an opening brace) and decrease their stack on characters that close syntactical features (e.g. a closing brace). Therefore, at Line 46 we take the average stack size between the second-to-last and last comparison into account; larger stack sizes give less priority to the input. Finally, we add the number of parents to the heuristic value (Line 46). This number defines how many substitutions were done on the search path from the initial input to the current input. Inputs with fewer parents but the same coverage should be ranked higher in the queue, to keep the search depth and input complexity low.

²The parenthesis language is an instance of a Dyck path. The number of total paths with $2n$ steps that stay non-negative is $\binom{2n}{n}$. Out of these, the number that end in 0 after $2n$ steps is $\binom{2n}{n}/(n+1)$, which is the $n^{th}$ Catalan number $C_n$. Hence, the probability of a closed Dyck path is $\frac{1}{n+1}$, which after 100 steps is about 1%. (We ignore those paths that reached zero and rebounded, in both denominator and numerator, for convenience.)
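The counting argument behind the $\frac{1}{n+1}$ probability can be checked numerically for small $n$ by enumerating all ±1 step sequences:

```python
from itertools import product
from math import comb

def counts(n):
    """Count ±1 sequences of length 2n that never go below zero,
    and those among them that also return to zero (Dyck paths)."""
    nonneg = closed = 0
    for steps in product((1, -1), repeat=2 * n):
        h, ok = 0, True
        for s in steps:
            h += s
            if h < 0:
                ok = False
                break
        if ok:
            nonneg += 1
            if h == 0:
                closed += 1
    return nonneg, closed

for n in (2, 3, 4):
    nonneg, closed = counts(n)
    assert nonneg == comb(2 * n, n)             # paths staying non-negative
    assert closed == comb(2 * n, n) // (n + 1)  # Dyck paths: Catalan number
    assert closed / nonneg == 1 / (n + 1)
```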
³We omit a concrete implementation of avgStackSize here to keep the algorithm short.

Algorithm 1 Parser-Directed Fuzzing Algorithm.

 1: procedure Fuzz
 2:   vBr ← ∅
 3:   queue ← empty priority queue
 4:   input ← random character
 5:   elnp ← input
 6:   while True do
 7:     valid, branches, comps ← runCheck(input, vBr, queue)
 8:     if not valid then
 9:       valid, branches, comps ← runCheck(elnp, vBr, queue)
10:       if not valid then
11:         addInputs(elnp, branches, vBr, comps)
12:       end if
13:     end if
14:     input ← queue.get()
15:     elnp ← input + random character
16:   end while
17: end procedure
18: procedure addInputs(inp, branches, vBr, comps)
19:   for all c in comps do
20:     cov ← heur(branches, vBr, inp, c)
21:     new_input ← replace the last compared character in inp by c
22:     add new_input to queue based on cov
23:   end for
24: end procedure
25: procedure runCheck(inp, vBr, queue)
26:   (exit, branches, comps) ← run(inp)
27:   if exit = 0 ∧ (branches \ vBr) ≠ ∅ then
28:     validInp(inp, branches, vBr, queue, comps)
29:     return True, branches, comps
30:   else
31:     return False, branches, comps
32:   end if
33: end procedure
34: procedure validInp(input, branches, vBr, queue, comps)
35:   print(input)
36:   vBr ← branches ∪ vBr
37:   for all inp in queue do
38:     cov ← heur(branches, vBr, inp, inp.c)
39:     reorder inp in queue based on cov
40:   end for
41:   addInputs(input, branches, vBr, comps)
42: end procedure
43: procedure heur(branches, vBr, inp, c)
44:   cov ← size(branches \ vBr)
45:   cov ← cov − len(inp) + 2 * len(c)
46:   cov ← cov − avgStackSize() + inp.numParents
47:   return cov
48: end procedure

Each loop iteration while fuzzing consists of two program executions. First the input without a random extension is executed (Line 7), then the input with the random extension is executed (Line 9). We use this technique because the input without extension is generated from a previous input by replacing the last compared character with one of the characters it was compared to (Line 21), which means that we would never append a character but always replace the last character of the input.
Therefore, we need to append a new character to the end of the input and check whether it is used in any comparisons—which in turn means that, with high probability, all previous characters are valid. On the other hand, if we always appended a new character, pFuzzer would be unlikely to ever produce a correct input, because as soon as the correct replacement was done, a newly appended character would make the input invalid again. By running both we ensure that we neither get stuck nor miss valid inputs along the search path. While looking for inputs that cover new code, we first concentrate on the number of branches that were not covered by any valid input beforehand. Inputs that cover new branches have a higher priority in the queue. This metric gives the strongest evidence on which input prefixes are the most promising to explore further. Since the set of already covered branches contains only branches covered by valid inputs, it contains no branches of error-handling code. Hence, simply taking all covered branches would favor invalid inputs at some point and guide the search toward invalid prefixes. To avoid the search getting stuck at paths with invalid inputs, we only consider the covered branches up to the last accepted character of the input. In particular, we consider all covered branches up to the first comparison of the last character of the input that is compared in the program.

3.2 Making the Input Valid

As soon as new code is covered, the search algorithm needs to “close” the input, i.e., make it valid. This is important, as trying to cover as much code as possible with each and every input would lead to very complex inputs that are hard to close, possibly taking hours of search to find the correct sequence of characters that finally makes the input valid. Think about a simple bracket parser looking for its first valid input. Say the parser is able to parse different kinds of brackets (round, square, pointed, ...).
As we never had any valid input, any time a different opening bracket is added more code would be covered, and we might end up generating many different opening brackets; closing them, though, might be hard, as one has to generate at each position the correct counterpart for the respective opening bracket. To avoid such a generation loop, we count each newly found branch only once per input,* and we also take stack size and input length into account. Hence, we generate small prefixes that are simple to close. After the first few valid inputs are found, the majority of the code is covered. At this point it gets significantly harder to find characters that help closing an input by just considering the newly covered branches themselves. For example, if we already generated an if(·)-statement in a valid input beforehand, the code handling the closing brace would already have been covered, and we would not see any new coverage on generating another closing brace for an if(·)-statement. Therefore, we take input length, stack size and number of parents into account to favor those inputs that are less complex and keep the number of opened syntactical features low. This makes it possible to leave coverage plateaus in the search space, since often a closing character would, for example, lead to a lower stack. Finally, to avoid generating inputs that cover the same path as already generated inputs, pFuzzer keeps track of all paths that were already taken (based on the non-duplicate branches) and ranks inputs based on how often they executed on the same path, ranking those highest that cover new paths. After an input was closed and covered new branches in the code, all remaining inputs in the queue have to be re-evaluated in terms of coverage. --- *Not all parsers use an EOF check; hence we need the random extension to check whether a new character is expected or the substitution was wrong.
Due to the large search space, re-running all inputs on the subject under test takes too much time. A faster way is to store all information relevant to computing the heuristic along with the already executed input, and to simply re-calculate the heuristic value (e.g., the loop at Line 40).

4 Implementation

We have implemented our approach in a fuzzing tool called pFuzzer. pFuzzer takes as input a C program, which it compiles and instruments using LLVM. The instrumentation serves the purpose of parser-directed fuzzing in that it (1) dynamically taints input characters and derived values throughout execution. When read, each character is associated with a unique identifier; this taint is later passed on to values derived from that character. If a value is derived from several characters, it accumulates their taints. Runtime conversion functions such as strcpy() are wrapped such that the taints propagate correctly. Any comparisons of tainted values (mostly character and string comparisons) are (2) tracked as well. To drive the test generation heuristics, the instrumentation also tracks function and branch coverage, specifically (3) the sequence of function calls together with the current stack contents, and (4) the sequence of basic blocks taken. pFuzzer is not optimized for speed, and thus its instrumentation adds considerable overhead; as a rule of thumb, executions are slowed down by a factor of about 100. All the collected information is saved after program execution, and then drives test generation for the next program run as detailed in Section 3.

5 Evaluation

We evaluated the performance of pFuzzer against KLEE and AFL, on two aspects:

Code Coverage. The first measure we look at is the code coverage obtained, i.e., how many of the branches in the subject programs are actually taken.
Code coverage is a common metric in testing and test generation; generally speaking, covering a piece of code is necessary to uncover errors in this very code.

Input Coverage. The second measure we look at is the input coverage obtained, i.e., how many aspects of the input language are actually covered. To this end, we measure how many different tokens are produced and what the characteristics of these tokens are. In general, coverage of input language features is necessary to trigger the functionality associated with these features.

5.1 Evaluation Setup

Table 1. Evaluation subjects.

<table> <thead> <tr> <th>Name</th> <th>Accessed</th> <th>Lines of Code</th> </tr> </thead> <tbody> <tr> <td>INIH</td> <td>2018-10-25</td> <td>293</td> </tr> <tr> <td>CSVPARSER</td> <td>2018-10-25</td> <td>297</td> </tr> <tr> <td>cJSON</td> <td>2018-10-25</td> <td>2,483</td> </tr> <tr> <td>TINYC</td> <td>2018-10-25</td> <td>191</td> </tr> <tr> <td>MJS</td> <td>2018-06-21</td> <td>10,920</td> </tr> </tbody> </table>

For our evaluation, we use five input parsers with increasing input complexity, summarized in Table 1. Starting with the simple input formats ini [3] and csv [20], we also test the tools on json [10], TinyC [22] (a subset of C), and MJS [6] (a subset of JavaScript). We set up all programs to read from standard input (a requirement for AFL) and to abort parsing with a non-zero exit code on the first error (a requirement for pFuzzer). Furthermore, we disabled semantic checking in MJS, as this is out of scope for this paper. AFL is run with the standard configuration. As we cannot change the CPU scaling policy on our computing server, AFL_SKIP_CPUFREQ is enabled. Since AFL requires a valid input to start with, but we want to evaluate the ability of all tools to generate valid inputs without program-specific knowledge, we give AFL one space character as a starting point.
This character is accepted by all programs as valid while still being generic enough that, we believe, AFL has no advantage over KLEE and pFuzzer, which start without any input. KLEE is run with the uclibc, posix-runtime, and optimization options enabled. Furthermore, we run KLEE to only output values if they cover new code (otherwise KLEE would produce millions of test inputs, for which calculating the coverage would take weeks and is therefore out of scope for this paper). For AFL and KLEE, we determine the valid inputs by running the programs under test and checking the exit code (non-zero means invalid input); pFuzzer by construction only outputs valid inputs that cover new code. Test generation for complex subjects needs time. All tools were run for 48 hours on each subject, on a single core of an Intel processor at 3.3 GHz running Ubuntu 14.04.5 in the KLEE Docker container. All tests were run three times; we report the best run for each tool.

5.2 Code Coverage

A general measure to assess the quality of a test suite is code coverage. This metric is used to predict the general chance of a set of tests to find an error in the program. It is important to note that coverage of code is first and foremost a necessity for finding bugs in this code. To use it as a universal measure of test quality, one must assume that bugs are distributed evenly across code and that all bugs are equally likely to be triggered, which typically is not the case.

Figure 2. Obtained coverage per subject and tool.

Figure 2 shows the coverage of the valid inputs generated by each tool. Each input is parsed; for tinyC and mjs, the parsed program is also executed. We use gcov to measure branch coverage. --- 3As pFuzzer is a research prototype and we want to keep the engineering effort reasonable, we restrict ourselves to randomly selected recursive-descent parsers implemented in C, as we need to specially handle some library functions (like strncpy()).
To some extent, all subjects also contain code used for parsing from a different input source, for printing, or even for generating output (e.g., cJSON also functions as a library and contains code to generate a JSON object). Since it is not always clear which code can be covered, and all tools run on the same code, we decided to leave those artifacts in even though they cannot be covered. All tools can still be compared on each individual subject. In one case, we manually fixed an input while(9); to while(0); to avoid an infinite loop during the execution of the generated test input.6 **csv and ini.** Starting with the least complex input structures, csv and ini, we can see that AFL performs better than pFuzzer in terms of coverage. For both subjects AFL has a strong advantage: it is random, shallow, and fast. Therefore, it is able to generate many different characters in a short time, almost all of which are accepted by ini and csv. For both subjects, covering all combinations of two characters already achieves perfect coverage. As an example, one of the most challenging structures in the inputs of those parsers is the section delimiter in ini, which needs an opening bracket followed by a closing bracket; between those, any characters are allowed. Hence, generating valid inputs for those subjects is easy. AFL has an advantage here, as all those properties are easy to reach and can be covered by exploring the input space randomly. pFuzzer performs worse here, as in some circumstances semantic properties of the input are checked, or non-printable characters are used in comparisons, which our comparison extraction does not cover yet. **json.** For json, both KLEE and AFL obtain a higher coverage than pFuzzer, which misses one feature: the conversion of UTF-16 literals to UTF-8 characters.
The problem is that the developers of the parser rely on implicit information flows, which we currently do not handle, because naively tainting all implicit information flows can lead to large overtainting [21].7 Therefore, we never reach the parts of the code comparing the input with the UTF-16 encoding. Nonetheless, in contrast to AFL, pFuzzer is able to generate keywords such as true, false and null, and cover the associated code. **tinyC.** On tinyC, pFuzzer is actually able to generate inputs that cover more code than AFL. The reason lies in the combination of complexity and simplicity in the implementation of tinyC: the subject accepts rather simple inputs (like simple arithmetic expressions), but they cover only a small part of the code, and this coverage is easily achieved. To go beyond these easy-to-reach code parts, one has to generate more complex structures like loops and branching statements, which parser-directed fuzzing can do. **mjs.** For mjs, AFL achieves a much higher code coverage than pFuzzer. KLEE, suffering from the path explosion problem, finds almost no valid inputs for mjs. Examining code coverage, we found that AFL mostly covers code triggered by single characters or character pairs. Again, as with tinyC, those parts of the code that require very specific sequences of characters to make the input valid are not covered by AFL. --- 6Such infinite loops would normally be addressed by implementing a timeout; however, gcov loses all coverage information when the program under test is interrupted. AFL also generates an input that triggers a hang in tinyC; as this is an if-statement which should actually terminate, we are not able to fix the hang. 7For example, the program may read the input character by character in a loop header and process it in the loop body. All values in the loop body would be tainted, leading to an over-approximation of taints.
Summarizing, AFL achieves its coverage by brute force, trying out millions of different possible inputs—generating 1,000 times more inputs than pFuzzer. Thus, if one wants to cover shallow code, AFL is the tool to choose: it quickly covers code that is easy to reach. However, our manual coverage investigation also shows that as soon as it comes to more structured parts of the program, both AFL and KLEE fail to generate inputs that cover them. Therefore, in the next section, we attempt to capture the input features that characterize these structured parts.

5.3 Input Coverage

In parser code, uncovered parts often are specific language constructs that are optional and need to be specifically tested. Think about tinyC: whatever input is given, it must contain at least one expression; therefore, expressions are tested in depth anyway. Constructing a valid while loop, on the other hand, requires two elements:

- The while keyword itself. Such a long keyword is hard to generate by pure chance—even if a fuzzer generated letters only, the chance of producing it would be only 1 in 26^5, or roughly 1 in 12 million. Guidance by code coverage, as implemented in AFL, does not help here, as all five characters take the exact same path through the program.
- The elements following the while keyword must also be produced—i.e., a parenthesized expression and at least an empty statement.

To assess input quality, we evaluate the variety of tokens generated by each approach—notably, what kind of tokens each of the approaches is able to generate in its valid inputs. To this end, we first collected all possible tokens by checking the documentation and source code of all subjects, and then checked how many different tokens appear in each subject. Tables 2, 3 and 4 contain the numbers of possible tokens per length, and for each length a set of examples.

Table 2. JSON tokens and their number for each length.
<table> <thead> <tr> <th>Length</th> <th>#</th> <th>Examples</th> </tr> </thead> <tbody> <tr> <td>1</td> <td>8</td> <td>{ } [ ] : , number</td> </tr> <tr> <td>2</td> <td>1</td> <td>string</td> </tr> <tr> <td>4</td> <td>2</td> <td>null true</td> </tr> <tr> <td>5</td> <td>1</td> <td>false</td> </tr> </tbody> </table>

Table 3. TINYC tokens and their number for each length.

<table> <thead> <tr> <th>Length</th> <th>#</th> <th>Examples</th> </tr> </thead> <tbody> <tr> <td>1</td> <td>11</td> <td>&lt; + - ; = { } [ ] identifier number</td> </tr> <tr> <td>2</td> <td>2</td> <td>if do</td> </tr> <tr> <td>4</td> <td>1</td> <td>else</td> </tr> <tr> <td>5</td> <td>1</td> <td>while</td> </tr> </tbody> </table>

TINYC comes with few tokens, but a number of keywords, listed in Table 3. As shown in Figure 3, simple constructs needing only one or two characters are easy to generate for AFL; the semi-random approach eventually guesses the correct characters and their respective ordering. KLEE does not find any keyword. pFuzzer is still able to cover 86% of all tokens, missing only the do and else tokens. AFL still covers 80% of all tokens, but also misses the while token; and while KLEE covers 66% of all tokens, it only covers short ones, missing all keywords of TINYC.

Table 4. mjs tokens and their number for each length.

<table> <thead> <tr> <th>Length</th> <th>#</th> <th>Examples</th> </tr> </thead> <tbody> <tr> <td>1</td> <td>27</td> <td>{ [ ( + &amp; ? identifier number ...</td> </tr> <tr> <td>2</td> <td>24</td> <td>+= == /= &amp;= ...</td> </tr> <tr> <td>3</td> <td>13</td> <td>=== &lt;&lt;= &gt;&gt;&gt; for try let ...</td> </tr> <tr> <td>4</td> <td>10</td> <td>&gt;&gt;&gt;= true null void with else ...</td> </tr> <tr> <td>5</td> <td>9</td> <td>false throw while break catch ...</td> </tr> <tr> <td>6</td> <td>7</td> <td>return delete typeof Object ...</td> </tr> <tr> <td>7</td> <td>3</td> <td>default finally indexof</td> </tr> <tr> <td>8</td> <td>3</td> <td>continue function debugger</td> </tr> <tr> <td>9</td> <td>2</td> <td>undefined string</td> </tr> <tr> <td>10</td> <td>1</td> <td>instanceof</td> </tr> </tbody> </table>

mjs is our most challenging test subject, and it continues the trends already seen before. As shown in Figure 3, KLEE mostly fails, whereas AFL can even generate short keywords. Being able to produce a for deserves special mention here, as a valid for loop needs the keyword for, an opening parenthesis, three expressions separated by semicolons, and a closing parenthesis. When it comes to longer tokens and keywords, though, AFL is quickly lost; for instance, it cannot synthesize a valid input with a typeof keyword. pFuzzer synthesizes a full typeof input and also covers several of the longer keywords, thus also covering the code that handles those keywords. Summing up over all subjects, we see that for short tokens, all tools do a good job of generating or inferring them. The exception is KLEE on mjs, which results in a lower average number: across all subjects, for tokens of length \( \leq 3 \), AFL finds 91.5%, KLEE 28.7%, and pFuzzer 81.9%. The longer the token, though, the smaller the chance of either AFL or KLEE finding it. pFuzzer, on the other hand, is able to cover such keywords and therefore also the respective code handling those keywords. Across all subjects, for tokens of length > 3, AFL finds 5%, KLEE 7.5%, and pFuzzer 52.5%.
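The per-subject percentages above follow directly from the token counts; for TINYC (Table 3, 15 tokens in total) a quick check, where the `missed` counts for AFL and KLEE are inferred from the quoted figures rather than stated in the text:

```python
# TINYC token counts from Table 3: 11 single-character tokens,
# plus the keywords if, do, else, and while.
TOTAL_TOKENS = 11 + 2 + 1 + 1        # 15 tokens overall

def percent_covered(missed):
    """Percentage of TINYC tokens covered when `missed` tokens are absent
    (truncated to a whole percent, matching the figures in the text)."""
    return int(100 * (TOTAL_TOKENS - missed) / TOTAL_TOKENS)

pfuzzer_pct = percent_covered(2)     # misses only `do` and `else`   -> 86
afl_pct = percent_covered(3)         # misses `while` plus two more  -> 80
klee_pct = percent_covered(5)        # all four keywords plus one    -> 66
```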
This is the central result of our evaluation: Only parser-directed fuzzing is able to detect longer tokens and keywords in the input languages of our test subjects. By extension, this means that only pFuzzer can construct inputs that involve these keywords, and that only pFuzzer can generate tests that cover the features associated with these keywords. At this point, one may ask: Why only 52.5% and not 100%? The abstract answer is that inferring an input language from a program is an instance of the halting problem, and thus undecidable in the first place. The more concrete answer is that our more complex subjects such as tinyC and mjs make use of tokenization, which breaks explicit data flow, and which is on our list of challenges to address (Section 7). The final and pragmatic answer is that a "better" technique that now exists is better than an "even better" technique that does not yet exist—in particular if it can pave the way towards this "even better" technique. 6 Related Work 6.1 Fuzzing Techniques Fuzzing was introduced by Miller et al. [26] for evaluating the robustness of UNIX utilities against unexpected inputs. The main difference between fuzzing and other blackbox test generation methods is that fuzzing relies on very weak oracles—checking only for crashes or hangs, which lets it explore the input space automatically. Miller used a simple random character generator to generate inputs to the programs under test, varying both input length and content randomly. However, this kind of simple fuzzing is ineffective against programs that expect structured inputs such as interpreters and compilers. Hence practitioners rely on different techniques to generate syntactically valid (or near valid) inputs. These techniques are commonly classified as blackbox techniques, whitebox techniques, and their hybrids [24]. 6.1.1 Blackbox Fuzzing Blackbox fuzzing techniques generate input data given some information about the data, ignoring the program under test. 
These include mutational approaches and generational approaches [27]. Mutational fuzzing approaches [37] start with a few sample inputs and rely on simple mutations such as byte swaps or character insertions to generate new inputs. These techniques hope to explore the input regions close to the samples by maintaining the global structure while varying the local structure sufficiently. The mutations may be evolved based on some fitness function to ensure better exploration of the input space. Generational approaches rely on a formal input specification, such as an input grammar or a data model, to generate valid inputs [34]. Test case generation using grammars has been used for compiler testing [4, 17] from very early on. One of the problems with simple generational approaches has been the unavailability of input models. Another is that simple input models may not adequately capture all constraints. A promising approach is to learn the model [1] and the constraints [32] from a sample corpus.

6.1.2 Whitebox Fuzzing

Whitebox fuzzing techniques make use of the program code, and are classified into those that rely on *dynamic execution* of the program and those that rely on *symbolic execution* of the program. In the *dynamic execution* approach, the program is typically instrumented such that the path taken by an execution for a sample input can be observed. The sample input is then mutated [37], and mutations that take previously unseen paths are given higher preference for further exploration. The input mutation may be guided [8] using dynamic taints [2, 11, 12] and constraint negation of executed paths. In the *symbolic execution* approach, the program is symbolically executed to infer constraints on inputs; a constraint solver then generates inputs satisfying them.
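As a toy illustration of this idea (not the implementation of any of the cited tools), the sketch below records the branch outcomes of a concrete run on a two-character parser and then "solves" for a flipped branch by brute-force enumeration, which stands in for a real constraint solver:

```python
def run_with_trace(inp):
    """Toy parser accepting exactly "()"; records each branch outcome it evaluates."""
    trace = []
    ok0 = len(inp) > 0 and inp[0] == '('
    trace.append(ok0)
    if not ok0:
        return trace, False
    ok1 = len(inp) > 1 and inp[1] == ')'
    trace.append(ok1)
    return trace, ok1

def concolic_search(seed="aa", alphabet="()ab"):
    """Repeatedly flip the last failed branch by enumerating candidate characters."""
    inp = list(seed)
    while True:
        trace, accepted = run_with_trace("".join(inp))
        if accepted:
            return "".join(inp)
        pos = len(trace) - 1                 # index of the branch that failed
        for cand in alphabet:
            inp[pos] = cand
            new_trace, _ = run_with_trace("".join(inp))
            # keep the candidate if it flips the failing branch or reaches deeper code
            if len(new_trace) > len(trace) or new_trace[pos]:
                break
```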
SAGE [13] was one of the first whitebox fuzzers; it uses symbolic execution and constraint solving to generate inputs for fuzzing (SAGE also relies on seed samples and their executions [14] for the initial collection of constraints). Another is Mayhem [7] by Cha et al., which won the DARPA Cyber Grand Challenge in 2016. Mayhem prioritizes paths where memory accesses to symbolic addresses that can be influenced by user input are identified, or where symbolic instruction pointers are detected. A similar approach is *concolic execution*, and hybrid concolic execution, which relies on fuzzing for initial exploration but shifts to symbolic execution for the resolution of checksums and magic bytes [25, 36].

6.1.3 Hybrid Fuzzing

With hybrid fuzzing techniques, researchers try to infer a program model [31] or input grammar by observing the behavior of the program on multiple inputs, using formal approaches [1], machine learning based on a previously available corpus [9, 15], or observing and summarizing the program execution [19]. The main difference we have with these tools is that they rely on a previously existing corpus of valid inputs for the model generation. The problem with this approach is that the available inputs often encode assumptions about the program behavior which need be neither correct nor complete. It is precisely these assumptions that we seek to avoid with parser-directed fuzzing. We note that there have been a number of fuzzers that specifically target parsers [16], especially interpreters and compilers [18, 35], and special-purpose parsers can often incorporate domain knowledge to obtain better results.

6.2 Specific Fuzzing Tools

We compare with specific state-of-the-art competitors. **Driller** [30] attempts to combine the advantages of symbolic execution with those of fuzzing.
It relies on fuzzing to explore the input space initially, but switches to symbolic execution when the fuzzer stops making progress—typically because it needs to satisfy input predicates such as magic bytes. Since Driller uses symbolic execution, it is vulnerable to combinatorial path explosion [8] when trying to reach deep program states. Since pFuzzer does not use symbolic execution, it does not suffer from this problem and may be able to achieve deeper coverage. **VUzzer** [28] relies on the key observation that fuzzing can be aided by a feedback loop from control- and data-flow application features. VUzzer uses taint analysis to infer the type of data and the offsets in the input that relate to branches. These specific offsets and values are prioritized for mutation. One of the problems with VUzzer is that the position of magic bytes is fixed, as the offsets are inferred statically. That is, VUzzer cannot deal with magic bytes whose location may differ between input files [8]. VUzzer is similar to pFuzzer in that it tracks both taint information and character comparisons. However, unlike in VUzzer, the taint and character comparison information in pFuzzer is collected and used dynamically. Secondly, unlike VUzzer, pFuzzer does not require an initial set of seed inputs to operate. **Steelix** [24] is another mutation-based fuzzer. It improves upon the state of the art by adding comparison progress feedback. The comparison progress can avoid problems with multi-byte string comparisons by providing progress information on the string being composed. pFuzzer uses an approach similar to Steelix’s comparison progress. The main difference from Steelix is that the mutations of Steelix are primarily random, with *local exhaustive mutations* for solving magic bytes applied only if magic bytes are found. pFuzzer, on the other hand, uses comparisons as the main driver: the mutations always occur at the last index where a comparison failed.
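The last-failed-comparison strategy can be illustrated with a small self-contained sketch (the instrumented parser is simulated by `first_mismatch`; none of this is pFuzzer's actual code):

```python
def first_mismatch(inp, target):
    """Simulated instrumentation: report the index of the first failed character
    comparison and the value the parser compared against, or None on success."""
    for i, ch in enumerate(target):
        if i >= len(inp) or inp[i] != ch:
            return i, ch
    return None

def comparison_driven_fuzz(target):
    """Build a valid input by always mutating at the index of the failed comparison."""
    inp = ""
    while True:
        m = first_mismatch(inp, target)
        if m is None:
            return inp       # the parser accepts the input
        i, expected = m
        # substitute the character the parser compared against,
        # discarding anything after the failing index
        inp = inp[:i] + expected
```

Each round costs one program execution, so a keyword such as `while(` is found in a number of runs linear in its length, rather than by random guessing.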
**AFL** [37] is a coverage-guided mutational fuzzer that can achieve high throughput. In the absence of instrumentation, it is also able to gracefully degrade to naive fuzzing using blind exploration of the input space. The effectiveness of AFL depends highly on the initial seed inputs, which it uses to explore the input subspace near the given samples. AFL incurs less overhead than pFuzzer, since the only instrumentation it requires is coverage tracking. Nonetheless, pFuzzer improves upon AFL in several ways. First, pFuzzer leverages taint tracking and character comparison tracking to bypass the requirement of initial seed samples. Second, the mutations produced by pFuzzer are driven by parser behavior, compared to the blind mutations of AFL. AFL-CTP [23] is a transformation pass that converts calls of `strcmp()` and `memcmp()` to nested `if`-statements to help the coverage guidance of AFL. For `strcmp()` and `memcmp()`, AFL gets no coverage feedback until they report a match; splitting the comparison into `if`-statements makes it possible to achieve new coverage on each matching character or byte. The approach only transforms calls whose argument size is known at compile time (i.e., for `strcmp()` one argument must be a string literal, and for `memcmp()` the number of compared bytes must be a constant). In our subjects most of the string comparisons do not fulfill this requirement. But even if it were possible to drop this condition, in many parsers code is heavily reused, i.e., different keywords are parsed with the same code and the same comparison function is called for different keywords. Hence, prefixes of different keywords are indistinguishable regarding coverage: the prefix `wh` of the `while` keyword would produce the same coverage as the prefix `fo` of the `for` keyword. pFuzzer, on the other hand, monitors the calls to `strcmp()` dynamically and therefore recognizes the different comparisons made; hence it is able to find and use the different keywords.
If it indeed were possible to transform `strcmp()` and `memcmp()` in such a way that AFL recognizes new coverage for different keywords, AFL might be able to achieve results similar to pFuzzer's current token coverage. Angora [8] is a mutation-based fuzzer that improves upon AFL. It incorporates byte-level taint tracking, context-sensitive branch coverage (which can distinguish different program states reached), and type and shape inference, and it uses search-based gradient descent to help with checksum resolution. Of all the fuzzers, Angora is closest in approach to pFuzzer. We believe that Angora’s technique is relatively heavyweight in comparison to pFuzzer. This can be seen in Section 5.6 of [8], where the authors say that “Angora runs taint tracking once for each input, and then mutates the input and runs the program many times without taint tracking”. Each time a new branch is detected with an input, Angora runs the taint tracking, along with possibly thousands of runs without taint tracking, until it hits a new branch (Algorithm 1, Line 16 [8]). pFuzzer improves upon Angora in multiple ways. First, unlike Angora, which tries to solve the complete path constraints, pFuzzer is only interested in generating inputs that pass the parser. Further, pFuzzer employs a lightweight analysis of character comparisons to determine the mutation, while Angora uses search based on gradient descent, which is more heavyweight. Further, pFuzzer uses a simple, mostly monotonic increase in the input string length, which means that only the last few (often a single) characters need to be mutated for further exploration. This reduces the computational expenditure significantly. At the end of this comparison, let us point out that while each approach has its specific strengths and weaknesses, they may well complement each other.
A pragmatic approach could be to start fuzzing with a fast lexical fuzzer such as AFL, continue with syntactic fuzzing such as pFuzzer, and solve remaining semantic constraints with symbolic analysis.

7 Limitations and Future Work

While our approach surpasses the state of the art, it leaves plenty of potential for further improvement, addressing the challenges of even more complex languages. Our future work will address the following topics:

7.1 Table-Driven Parsers

Our current implementation is limited to recursive-descent parsers. The coverage metric will not work on table-driven parsers out of the box, as such a parser defines its state based on the table it reads rather than the code it is currently executing. This is an obvious, yet not severe, limitation. First, the coverage metric still works as general guidance—instead of code coverage, one could implement coverage of table elements. Thus, the general search heuristic would still work, especially as the implicit paths and character comparisons also exist in a table-driven parser. Second, recursive-descent parsers, especially those written by hand, are one of the most common types of parsers. A cursory look at the top 17 programming languages on GitHub [29] shows that 80% of them have recursive-descent parsers. Finally, tables for a table-driven parser are almost always generated using parser generators such as Yacc and ANTLR. Hence, a grammar is already available, and one can thus apply grammar-based fuzzing for superior coverage of input features. Table-driven parsers thus make a formidable challenge, but more from an intellectual viewpoint than a practical one.

7.2 Tokenization

A second limitation is the loss of taint information during tokenization. Programs that accept complex input languages often have a lexical phase in which a lightweight parser that accepts a regular language is used to identify logical boundaries in the incoming character stream and split the character stream into tokens.
This can happen either before the actual parsing, or, as in tinyC and mjs, interleaved with the parsing, where the lexer is activated each time a new token is required by the parser. --- 8Angora is unavailable at this time, and a number of its technical details can only be guessed at. The problem with tokenization is that tokens represent a break in the data flow. For example, consider this fragment that generates a token LPAREN:

```c
if (next_char() == '(')
    return LPAREN;
```

The token LPAREN is generated without a direct data flow from the character '(' to the token. As our prototype relies on direct taint flow, it is unable to correctly taint LPAREN. Our coverage metrics and other heuristics circumvent this problem to some extent. Each time a token is accepted that we have not seen before, a new branch is covered in the parser. Let us assume we already saw an opening parenthesis, e.g., in an input containing an if()-statement. If we now want to generate a while()-statement and already have the prefix while, putting an opening parenthesis after the while will not cover new branches in the tokenizer. Still, in the function parsing the while statement we would now see that the branch checking for the opening parenthesis is covered. Thus, based on the increased coverage, we would use the prefix while( for further exploration, and this is how pFuzzer can still generate complex inputs. We are currently investigating means to identify typical tokenization patterns so as to propagate taint information even in the presence of implicit data flow to tokens, such that we can recover the concrete character comparisons we need.

7.3 Semantic Restrictions

Our approach focuses on syntactic properties. Still, many nontrivial input languages have semantic restrictions as well—in many programming languages, for instance, it is necessary to declare named resources before their use. These are context-sensitive features and are usually verified after the parsing phase.
However, our technique has no notion of a delayed constraint. It assumes that if a character was accepted by the parser, the character is correct. Hence the generated input, while it passes the parser, fails the semantic checks. This, to some extent, mirrors the difficulty with the lexing phase. Such semantic restrictions need to be learned during generation, and this is one of the frontiers we want to tackle.

### 7.4 Recursion

Most complex input languages contain recursive structures. While parser-directed fuzzing is a reasonable technique for exploring comparatively short input sequences, it is inefficient to use it beyond a shallow exploration of the input features without too much recursion. For generating larger sequences, it is more efficient to rely on parser-directed fuzzing for initial exploration, use a tool to mine a grammar from the resulting sequences, and use the mined grammar for generating longer and more complex sequences that contain recursive structures. The ability to mine grammars already exists [19], and its incorporation will increase the effectiveness of our tool chain. Indeed, the stumbling block in using a tool such as AutoGram right now is the lack of valid and diverse inputs. Using a human to produce these inputs runs the risk that a human will only produce inputs with features that they think the program implements. However, such mental models are based on assumptions about the program which can be incomplete or incorrect even for comparatively simple programs. Using inputs thus produced runs the risk of developing blind spots in the inputs composed using grammar-based fuzzers. A similar problem occurs when one relies on pre-existing grammars (when available). These grammars often encode knowledge of a program's input at some point in time, and of how an ideal program implementing the grammar should behave. Specifications can change, and programs can incorporate new features not present in the original grammar.
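Once a grammar has been mined, longer recursive inputs can be produced cheaply by random expansion with a depth bound; the toy grammar, class name, and rule format below are invented for illustration:

```java
// Hypothetical sketch: generating recursive inputs from a (mined)
// grammar by random rule expansion with a depth bound.
import java.util.Map;
import java.util.Random;

public class GrammarFuzzer {
    // Toy grammar; nonterminals are written in angle brackets.
    // The LAST alternative of each nonterminal is non-recursive.
    static final Map<String, String[]> GRAMMAR = Map.of(
        "<stmt>", new String[] {"while (<expr>) <stmt>", "x = <expr>;"},
        "<expr>", new String[] {"<expr> + <expr>", "1"}
    );

    static String expand(String symbol, Random rnd, int depth) {
        String[] rules = GRAMMAR.get(symbol);
        // near the depth bound, always pick the last (non-recursive) rule
        String rule = depth <= 0 ? rules[rules.length - 1]
                                 : rules[rnd.nextInt(rules.length)];
        StringBuilder out = new StringBuilder();
        int i = 0;
        while (i < rule.length()) {
            if (rule.charAt(i) == '<') {
                int close = rule.indexOf('>', i);
                out.append(expand(rule.substring(i, close + 1), rnd, depth - 1));
                i = close + 1;
            } else {
                out.append(rule.charAt(i++));
            }
        }
        return out.toString();
    }

    public static void main(String[] args) {
        System.out.println(expand("<stmt>", new Random(1), 4));
    }
}
```

The depth bound guarantees termination while still allowing arbitrarily nested `while` statements and sums, which is exactly the kind of structure that is expensive to reach character by character.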
Hence, parser-directed fuzzing, which does not rely on any such assumptions, fills an important place in automatic test generation.

### 8 Conclusion

In a program, only valid inputs survive the parsing stage and are able to test the actual program functionality. Yet, generating valid test inputs for parsers is a challenge. "Lexical" approaches such as traditional fuzzing fail because of the sheer improbability of generating valid inputs and keywords (while being good at testing the input rejection capability of a program's parsing stage), whereas the symbolic constraint solving of "semantic" approaches fails due to the combinatorial explosion of paths. With parser-directed fuzzing, we present the first approach that, given nothing but the parser at hand, infers and covers substantial parts of input languages, up to the complexity of real programming languages. Our approach relies on the key observation that most parsers process and compare a single character at a time. We use dynamic tainting to track the comparisons made with the last character; we fix the last character when the input is rejected; and we add new characters when the end of input is reached without error. These steps are sufficient to produce high-quality inputs with tokens and keywords, outperforming state-of-the-art "lexical" fuzzers such as AFL and "semantic" fuzzers like KLEE.

Right now, nontrivial input languages are the main roadblock for effective test generation at the system level. "Syntactic" parser-directed fuzzing thus has a large potential for the future of test generation. As more and more of its limitations are lifted (Section 7), users will be able to generate syntactically valid inputs for a large class of programs, and thus easily reach, exercise, and test program functionality in a fully automatic fashion. Once we can synthesize the simplest Java program

```java
class C {
    public static void main(String args[]) {
        ...
    }
}
```

from a Java compiler, syntax-directed testing will have reached its full potential.

A replication package is available at: https://github.com/uds-se/pFuzzer

References
An environment for multicolumn output†

Frank Mittelbach

Abstract

This article describes the use and the implementation of the multicol environment. This environment allows switching between one- and multicolumn format on the same page. Footnotes are handled correctly (for the most part), but will be placed at the bottom of the page and not under each column. \LaTeX's float mechanism, however, is partly disabled in the current implementation and will be added in a later version. At the moment only floats contributed outside the scope of the environment will find their way into the actual output.

1 Preface to version 1.2

After the article about the multicol environment was published in TUGboat 10#3, I got numerous requests for these macros. However, I also got a changed version of my style file, together with a letter asking me if I would include the changes to get better paragraphing results in the case of narrow lines. The main differences from my original style option were additional parameters (like \multicoladjdemerits to be used for \adjdemerits, etc.) which would influence the line-breaking algorithm. But actually resetting such parameters to zero, or even worse to a negative value, won't give better line breaks inside the multicol environment. \TeX's line-breaking algorithm will only look at those possible line breaks which can be reached without a badness higher than the current value of \tolerance (or \pretolerance in the first pass). If this isn't possible then, as a last resort, \TeX{} will produce overfull boxes. All those (and only those) possible break points will be considered, and finally the sequence which results in the fewest demerits will be chosen. This means that a value of -1000 for \adjdemerits instructs \TeX{} to prefer visibly incompatible lines instead of producing better line breaks. However, with \TeX{} 3.0 it is possible to get decent line breaks even in small columns by setting \emergencystretch to an appropriate value.
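For readers who want to experiment, such a setting is an ordinary register assignment under \TeX{} 3.0; the value below is purely illustrative:

```latex
% Give TeX 3.0 some extra stretchability per line when line
% breaking fails in narrow columns (value chosen for illustration).
\emergencystretch=12pt
```

Larger values give \TeX{} more freedom at the price of looser lines.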
I implemented a version which is capable of running both under the old and the new \TeX{} (actually it will simply ignore the new feature if it is not available). The calculation of \emergencystretch is probably incorrect; I made a few tests, but of course others have much more experience with the new possibilities to achieve maximum quality.

Version 1.1a had a nice "feature": the penalty for using the forbidden floats was their ultimate removal from \LaTeX's \@freelist, so that after a few \marginpars inside the multicols environment floats were disabled forever. (Thanks to Chris Rowley for pointing this out.) I removed this misbehaviour and at the same time decided to allow at least floats spanning all columns, e.g., generated by the figure* environment. You can see the new functionality in table 1 which was inserted at this very point. However, single column floats are still forbidden and I don't think I will have time to tackle this problem in the near future. As advice for all who want to try: wait for \TeX{} 3.0. It has a few features which will make life much easier in multicolumn surroundings. Nevertheless we are working here at the edge of \TeX's capabilities, so really perfect solutions would need a different approach from the one taken in \TeX's page builder.

The text below is nearly unchanged; I only added documentation at places where new code was added.

\setemergencystretch: This is a hook for people who like to play around. It is supposed to set the \emergencystretch register provided in the new \TeX{} 3.0. The first argument is the number of columns and the second one is the current \hsize. At the moment the default definition is 4pt × #1, i.e. the \hsize isn't used at all. But maybe there are better formulae.

\set@floatcmds: This is the hook for the experts who like to implement a full float mechanism for the multicols environment. The @ in the name should signal that this might not be easy.

Table 1: The new commands of multicol.sty version 1.2. Both commands might be removed if good solutions to these open problems are found. I hope that these commands will prevent nearly identical style files derived from this one from floating around.

2 Introduction

Switching between two-column and one-column layout is possible in \LaTeX, but every use of \twocolumn or \onecolumn starts a new page. Moreover, the last page of two-column output isn't balanced and this often results in an empty, or nearly empty, right column. When I started to write macros for doc.sty (see "The doc-Option", TUGboat volume 10 #2, pp. 245–273) I thought that it would be nice to place the index on the same page as the bibliography. And balancing the last page would not only look better, it would also save space; provided of course that it is also possible to start the next article on the same page. Rewriting the index environment was comparatively easy, but the next goal, designing an environment which takes care of footnotes, floats etc., was a harder task. It took me a whole weekend\footnote{I started with the algorithm given in the \TeX book on page 417. Without this help a weekend would not have been enough.} to get together the few lines of code below and there is still a good chance that I missed something after all. Try it and, hopefully, enjoy it; and please direct bug reports and suggestions back to Mainz.

†Editor's note: This paper, with slight modification, is the basis for Mr. Mittelbach's citation as the Donald E. Knuth Scholar at the 1989 TUG Meeting.
†This file has version number v1.3c, last revised 91/04/08, documentation dated 91/03/14.

3 The User Interface

To use the environment one simply says

\begin{multicols}{⟨number⟩}
  ⟨multicolumn text⟩
\end{multicols}

where ⟨number⟩ is the required number of columns and ⟨multicolumn text⟩ may contain arbitrary \LaTeX{} commands, except that floats and marginpars are not allowed in the current implementation\footnote{This is dictated by lack of time.
To implement floats one has to reimplement the whole \LaTeX{} output routine.}. As its first action, the multicols environment measures the current page to determine whether there is enough room for some portion of multicolumn output. This is controlled by the ⟨dimen⟩ variable \premulticols which can be changed by the user with ordinary \LaTeX{} commands. If the space is less than \premulticols, a new page is started. Otherwise, a \vskip of \multicolsep is added.\footnote{Actually the added space may be less because we use \addvspace (see the \LaTeX{} manual for further information about this command).} When the end of the multicols environment is encountered, an analogous mechanism is employed, but now we test whether there is a space larger than \postmulticols available. Again we add \multicolsep or start a new page.

It is often convenient to spread some text over all columns, just before the multicolumn output, without any page break in between. To achieve this the multicols environment has an optional second argument which can be used for this purpose. For example, the text you are now reading was started with

\begin{multicols}{3}
  [\section{The User Interface}] ...

If such text is unusually long (or short) the value of \premulticols might need adjusting to prevent a bad page break. We therefore provide a third argument which can be used to overwrite the default value of \premulticols just for this occasion. Separation of columns with vertical rules is achieved by setting the parameter \columnseprule to some positive value. In this article a value of .4pt was used. Since narrow columns tend to need adjustments in interline spacing we also provide a ⟨skip⟩ parameter called \multicolbaselineskip which is added to the \baselineskip parameter inside the multicols environment. Please use this parameter with care or leave it
alone; it is intended only for style file designers since even small changes might produce totally unexpected changes to your document.

3.1 Balancing Columns

Besides the previously mentioned parameters, some others are provided to influence the layout of the columns generated. Paragraphing in \TeX{} is controlled by several parameters. One of the most important is called \tolerance: this controls the allowed 'looseness' (i.e. the amount of blank space between words). Its default value is 200 (the \LaTeX{} default), which is too small for narrow columns. On the other hand the \sloppy declaration (which sets \tolerance to 10000 = ∞) is too large, allowing really bad spacing.\footnote{Look at the next paragraph, it was set with the \sloppy declaration.} We therefore use a \multicoltolerance parameter for the \tolerance value inside the multicols environment. Its default value is 9999, which is less than infinity but 'bad' enough for most paragraphs in a multicolumn environment. Changing its value should be done outside the multicols environment. Since \tolerance is set to \multicoltolerance at the beginning of every multicols environment, one can locally overwrite this default by assigning \tolerance=⟨desired value⟩.

The generation of multicolumn output can be divided into two parts. In the first part we are collecting material for a page, shipping it out, collecting material for the next page, and so on. As a second step, balancing will be done when the end of the multicols environment is reached.
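The \tolerance overriding described above amounts to plain register assignments; an illustrative sketch (the values are invented for the example):

```latex
% Stricter default for every following multicols environment ...
\multicoltolerance=3000
\begin{multicols}{2}
  % ... but locally allow very loose lines in this one only:
  \tolerance=9999
  ...
\end{multicols}
```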
In the first step \TeX{} might consider more material whilst finding the final columns than it actually uses when shipping out the page. This might cause a problem if a footnote is encountered in the part of the input considered, but not used, on the current page. In this case the footnote might show up on the current page, while the footnotemark corresponding to this footnote might be set on the next one.\footnote{The reason behind this behavior is the asynchronous character of \TeX's page builder. However, this could be avoided by defining very complicated routines which don't use \TeX{} primitives like \insert but do everything by hand. This is clearly beyond the scope of a weekend problem.} Therefore the multicols environment gives a warning message\footnote{This message will be generated even if there are no footnotes in this part of the text.} whenever it is unable to use all the material considered so far. If you don't use footnotes too often the chances of something actually going wrong are very slim, but if this happens you can help \TeX{} by using a \pagebreak command in the final document. Another way to influence the behavior of \TeX{} in this respect is given by the counter variable 'collectmore'. If you use the \setcounter declaration to set this counter to ⟨number⟩, \TeX{} will consider ⟨number⟩ more (or fewer) lines before making its final decision. So a value of -1 may solve all your problems at the cost of slightly less optimal columns.

In the second step (balancing columns) we have other bells and whistles. First of all you can say \raggedcolumns if you don't want the bottom lines to be aligned. The default is \flushcolumns, so \TeX{} will normally try to make both the top and bottom baselines of all columns align. Additionally you can set another counter, the 'unbalance' counter, to some positive ⟨number⟩.
This will make all but the right-most column ⟨number⟩ lines longer than they would normally have been. 'Lines' in this context refer to normal text lines (i.e. one \baselineskip apart); thus, if your columns contain displays, for example, you may need a higher ⟨number⟩ to shift something from one column into another. Unlike 'collectmore', the 'unbalance' counter is reset to zero at the end of the environment, so it only applies to one multicols environment. The two methods may be combined, but I suggest using these features only when fine-tuning important publications.

3.2 Tracing the output

To understand the reasoning behind the decisions \TeX{} makes when processing a multicols environment, a tracing mechanism is provided. If you set the counter 'tracingmulticols' to a positive ⟨number⟩ you will get some tracing information on the terminal and in the transcript file:

⟨number⟩ = 1. \TeX{} will now tell you, whenever it enters or leaves a multicols environment, the number of columns it is working on and its decision about starting a new page before or after the environment.

⟨number⟩ = 2. In this case you also get information from the balancing routine: the heights tried for the left and right-most columns, information about shrinking if the \raggedcolumns declaration is in force, and the value of the 'unbalance' counter if positive.

⟨number⟩ ≥ 3. Setting ⟨number⟩ to such a high value will additionally place an \hrule into your output, separating the part of text which had already been considered on the previous page from the rest. Clearly this setting should not be used for the final output.

## 4 The Implementation

We are now switching to two-column output to show the abilities of this environment (and bad layout decisions).
### 4.1 Starting and Ending the multicols Environment

As always we begin by identifying the latest version of this file on the VDU and in the transcript file, but we abort if this file was already read in.

```latex
\typeout{Style option: `multicol'
         \fileversion\space <\filedate> (FMi)}
\typeout{English documentation
         \@spaces\@spaces\space <\docdate> (FMi)}
```

As mentioned before, the multicols environment has one mandatory argument (the number of columns) and up to two optional ones. We start by reading the number of columns into the \col@number register.

```latex
\def\multicols#1{\col@number#1\relax
```

If the user forgot the argument, \TeX{} will complain about a missing number at this point. The error recovery mechanism will then use zero, which isn't a good choice in this case. So we should now test whether everything is okay. The minimum is two columns at the moment.

```latex
  \ifnum\col@number<\tw@
    \@warning{Using `\number\col@number'
      columns doesn't seem a good idea.^^J
      I therefore use two columns instead}%
    \col@number\tw@ \fi
```

Now we can safely look for the optional arguments.

```latex
  \@ifnextchar[\mult@cols{\mult@cols[]}}
```

The \mult@cols macro grabs the first optional argument (if any) and looks for the second one.

```latex
\def\mult@cols[#1]{\@ifnextchar[%
```

This argument should be a ⟨dimen⟩ denoting the minimum free space needed on the current page to start the environment. If the user didn't supply one, we use \premulticols as a default.

```latex
  {\mult@@cols{#1}}{\mult@@cols{#1}[\premulticols]}}
```

After removing all arguments from the input we are able to start with \mult@@cols.
First we look to see if statistics are requested:

```latex
\def\mult@@cols#1[#2]{%
  \ifnum\c@tracingmulticols>\z@
    \typeout{^^JStarting multicolumn output
             with \the\col@number\space columns}\fi
```

Then we measure the current page to see whether a useful portion of the multicols environment can be typeset. This routine might start a new page.

```latex
  \enough@room{#2}%
```

Now we output the first argument and produce vertical space above the columns. (Note that this argument corresponds to the first optional argument of the multicols environment.)

```latex
  #1\par\addvspace\multicolsep
  \begingroup
  \prepare@multicols
  \ignorespaces}
```

The \enough@room macro used above isn't perfect but works reasonably well in this context. We measure the free space on the current page by subtracting \pagetotal from \pagegoal. This isn't entirely correct since it doesn't take the 'shrinking' (i.e., \pageshrink) into account. The 'recent contribution list' might be nonempty, so we start with \par and an explicit \penalty. Actually, we use \addpenalty to ensure that a following \addvspace will 'see' the vertical space that might be present.

```latex
\def\enough@room#1{\par
  \addpenalty\z@
  \page@free\pagegoal
  \advance\page@free-\pagetotal
```

Now we test whether tracing information is required:

```latex
  \ifnum\c@tracingmulticols>\z@
    \typeout{Current page:}%
    \message{\@spaces goal height=\the\pagegoal:
             used \the\pagetotal\space
             -> free=\the\page@free}%
    \typeout{\@spaces needed #1
             (for \string\premulticols)}\fi
```

(See the documentation of \endmulticols for further details.) Our last action is to force a page break if there isn't enough room left.
```latex
  \ifdim\page@free<#1 \newpage \fi}
```

When preparing for multicolumn output several things must be done. First we remove everything from the 'current page' and save it in the box \partial@page.

```latex
\def\prepare@multicols{%
```

We add an empty box to the main vertical list to ensure that we catch any insertions (held over or inserted at the top of the page). Otherwise it might happen that the \eject is discarded without calling the output routine. Inside the output routine we remove this box again.

```latex
  \nointerlineskip \hbox{}%
  \output{\global\setbox\partial@page
          \vbox{\unvbox\@cclv
                \setbox\z@\lastbox}}\eject
```

Then we assign new values to \hbadness, \vbadness and \tolerance since it's rather hard for \TeX{} to produce 'good' paragraphs within narrow columns.

```latex
  \vbadness10001 \hbadness5000
  \tolerance\multicoltolerance
```

Since nearly always the first pass will fail, we ignore it completely by telling \TeX{} to hyphenate directly.

```latex
  \pretolerance\m@ne
```

For use with the new \TeX{} we set \emergencystretch to \col@number × 4pt. However, this is only a guess, so at the moment this is done in the macro \setemergencystretch which gets the number of columns and the current \hsize as arguments. Therefore users are able to figure out their own formula.

```latex
  \setemergencystretch\col@number\hsize
```

Another hook to allow people to add their own extensions without making a new style option is \set@floatcmds, which handles any redefinitions of \LaTeX's internal float commands to work with the multicols environment. At the moment it is only used to redefine \@dblfloat and \end@dblfloat.

```latex
  \set@floatcmds
```

We also set the register \doublecol@number for later use. This register should contain 2 × \col@number.

```latex
  \doublecol@number\col@number
  \multiply\doublecol@number\tw@
```

Additionally, we advance \baselineskip by \multicolbaselineskip to allow corrections for narrow columns.

```latex
  \advance\baselineskip\multicolbaselineskip
```

The next thing to do is to assign a new value to \vsize.
\LaTeX{} maintains the free room on the page (i.e. the page height without the space for already contributed floats) in the register \@colroom. We must subtract the height of \partial@page to put the actual free room into this variable.

```latex
  \advance\@colroom-\ht\partial@page
```

Since we have to set \col@number columns on one page, each with a height of \@colroom, we have to assign \vsize = \col@number × \@colroom in order to collect enough material before entering the \output routine again.

```latex
  \vsize\col@number\@colroom
```

But this might not be enough, since we use \vsplit later to extract the columns from the gathered material. Therefore we add some 'extra lines,' the number depending on the value of the 'collectmore' counter.

```latex
  \advance\vsize\c@collectmore\baselineskip
```

The \hsize of the columns is given by the formula:

  \hsize = (\columnwidth − (\col@number − 1) × \columnsep) / \col@number

This will be achieved with:

```latex
  \hsize\columnwidth \advance\hsize\columnsep
  \advance\hsize-\col@number\columnsep
  \divide\hsize\col@number
```

We also set \linewidth to \hsize but leave \columnwidth unchanged. This is inconsistent, but \columnwidth is used only by floats (which aren't allowed in the current implementation) and by the \footnote macro. Since we want pagewise footnotes\footnote{I'm not sure that I really want pagewise footnotes. But balancing of the last page can only be achieved with this approach or with a multi-pass algorithm which is complicated and slow. But it's a challenge to everybody to prove me wrong! Another possibility is to reimplement a small part of the fire_up procedure in \TeX{} (the program). I think that this is the best solution if you are interested in complex page makeup, but it has the disadvantage that the resulting program cannot be called \TeX{} thereafter.} this simple trick saves us from rewriting the \footnote macros.
```latex
  \linewidth\hsize
```

Now we switch to a new \output routine which will be used to put the gathered column material together.

```latex
  \output{\multi@column@out}%
```

Finally we handle the footnote insertions. We have to multiply the magnification factor and the extra skip by the number of columns since each footnote reduces the space for every column (remember that we have pagewise footnotes). If, on the other hand, footnotes are typeset at the very end of the document, our scheme still works since \count\footins is zero then, so it will not change.

```latex
  \multiply\count\footins\col@number
  \multiply\skip\footins\col@number
```

For the same reason (pagewise footnotes), the \dimen register controlling the maximum space used for footnotes isn't changed. Having done this, we must reinsert all the footnotes which are already present (i.e. those encountered when the material saved in \partial@page was first processed). This will reduce the free space (i.e. \pagetotal) by the appropriate amount since we have changed the magnification factor, etc. above.

```latex
  \reinsert@footnotes}
```

When the end of the multicols environment is sensed we have to balance the material. We end the current paragraph with \par, but this isn't sufficient since \TeX's page builder will not totally empty the contribution list.\footnote{This once caused a puzzling bug where some of the material was balanced twice, resulting in some overprints. The reason was the \eject which was placed at the end of the contribution list. Then the page builder was called (an explicit \penalty will empty the contribution list), but the line with the \eject didn't fit onto the current page. It was then reconsidered after the output routine had ended, causing a second break after one line.} Therefore we must also add an explicit \penalty. Now the contribution list will be emptied and, if its material doesn't all fit onto the current page, the output routine will be called before we change it.
```latex
\def\endmulticols{\par
  \addpenalty\z@
```

Now it's safe to change the output routine in order to balance the columns.

```latex
  \output{\balance@columns}\eject
```

The output routine above will take care of the \vsize and reinsert the balanced columns, etc. But it can't reinsert the footnotes, because we first have to restore the \footins parameters since we are returning to one-column mode. This will be done in the next line of code; we simply close the group started in \multicols. To fix an obscure bug which is the result of the current definition of the \begin ... \end macros, we check that we are still (logically speaking) in the multicols environment. If, for example, we forget to close some environment inside the multicols environment, the following \endgroup would be incorrectly considered to be the closing of this environment.

```latex
  \@checkend{multicols}%
  \endgroup
  \reinsert@footnotes
```

We also set the 'unbalance' counter to its default. This is done globally since \LaTeX{} counters are always changed this way.\footnote{Actually, we are still in a group started by the \begin macro, so \global must be used anyway.}

```latex
  \global\c@unbalance\z@
  \ifnum\c@tracingmulticols>\z@
    \typeout{^^JEnding multicolumn output}\fi}
```

Let us end this section by allocating all the registers used so far.

```latex
\newcount\c@unbalance        \c@unbalance = 0
\newcount\c@collectmore      \c@collectmore = 0
\newcount\c@tracingmulticols \c@tracingmulticols = 0
\newcount\col@number
\newcount\doublecol@number
\newcount\multicoltolerance  \multicoltolerance = 9999
\newdimen\page@free
\newdimen\premulticols       \premulticols = 50pt
\newdimen\postmulticols      \postmulticols = 20pt
\newskip\multicolsep
  \multicolsep = 12pt plus 4pt minus 3pt
\newskip\multicolbaselineskip
  \multicolbaselineskip = 0pt
```

We also need a box into which the "current page" can be put.

```latex
\newbox\partial@page
```

### 4.2 The output routines

We first start with some simple macros. When typesetting the page we save the columns either in the box registers 0, 2, 4, ... (locally) or 1, 3, 5, ... (globally).
This is \textsc{Plain TeX} policy to avoid an overflow of the save stack. Therefore we define a \texttt{\process@cols} macro to help us in using these registers in the output routines below. It has two arguments: the first one is a number; the second one is the processing information. It loops, starting with \texttt{\count@} (a scratch register defined in \textsc{Plain TeX}) set to the first argument, processes argument \#2, adds two to \texttt{\count@}, processes argument \#2 again, etc. until \texttt{\count@} is no longer less than \texttt{\doublecolnumber}. It might be easier to understand it through an example, so we first define it and explain its usage afterwards.
```latex
\def\process@cols#1#2{\count@#1\relax
  \loop #2%
    \advance\count@\tw@
  \ifnum\count@<\doublecolnumber
  \repeat}
```
We now define \texttt{\page@sofar} to give an example of the \texttt{\process@cols} macro. \texttt{\page@sofar} should output everything on the `current page'. So we start by unboxing \texttt{\partial@page} (i.e. the part above the multicols environment). If \texttt{\partial@page} is void (i.e. if the multicols environment started on a new page or if we typeset several pages within the multicols environment) this will produce nothing.
```latex
\def\page@sofar{\unvbox\partial@page
```
Now we output the columns gathered, assuming that they are saved in the box registers 2 (left column), 4 (second column), ... However, the last column (i.e. the right-most) should be saved in box register 0.\footnote{You will see the reason for this numbering when we look at the output routines \texttt{\multi@column@out} and \texttt{\balance@columns}.} First we ensure that the columns have equal width. We use \texttt{\process@cols} for this purpose, starting with \texttt{\count@ = 0}. Therefore \texttt{\count@} loops through 0, 2, ... (to \texttt{\doublecolnumber}).
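To see the loop idiom in isolation, here is a small standalone plain \TeX{} sketch. All names in it (\texttt{\processcols}, \texttt{\mylimit}) are hypothetical stand-ins chosen so as not to clash with the real \texttt{\process@cols}:
```latex
% sketch.tex -- run with: tex sketch.tex
% Illustrates the \process@cols idea: apply the same action to
% box registers 2, 4, ... while the counter stays below a limit.
\newcount\mylimit \mylimit=6
\def\processcols#1#2{\count@=#1\relax
  \loop #2%
    \advance\count@ by 2
  \ifnum\count@<\mylimit \repeat}
% fill boxes 2 and 4, then report the width of each one
\setbox2=\hbox{first}\setbox4=\hbox{second}
\processcols{2}{\message{box \the\count@: \the\wd\count@}}
\bye
```
Running this writes two \message lines to the terminal, one for box 2 and one for box 4, which is exactly the "process, advance by two, repeat" pattern used throughout the output routines.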
```latex
\process@cols\z@{\wd\count@\hsize}%
```
Now we put all columns together in an \texttt{\hbox} of width \texttt{\textwidth}, separating them with a rule if desired.
```latex
\hbox to\textwidth{%
  \process@cols\tw@{\box\count@
    \hfil\vrule\@width\columnseprule\hfil}%
```
As you will have noticed, we started with box register 2 (i.e. the left column). So this time \texttt{\count@} looped through 2, 4, ... Finally we add box 0 and close the \texttt{\hbox}.
```latex
  \box\z@}
```
Before we tackle the bigger output routines we define just one more macro which will help us to find our way through the mysteries later. \texttt{\reinsert@footnotes} will do what its name indicates: it reinserts the footnotes present in the footnote insertion box so that they will be reprocessed by \TeX's page builder. Instead of actually reinserting the footnotes we insert an empty footnote. This will trigger the insertion mechanism as well, and since the old footnotes are still in their box and we are on a fresh page, \texttt{\skip\footins} will be correctly taken into account.
```latex
\def\reinsert@footnotes{\ifvoid\footins\else
  \insert\footins{}\fi}
```
Now we can't postpone the difficulties any longer. The \texttt{\multi@column@out} routine will be called in two situations. Either the page is full (i.e. we have collected enough material to generate all the required columns) or a float or marginpar is sensed. In the latter case the \texttt{\outputpenalty} is less than $-10000$, otherwise the penalty which triggered the output routine is higher. Therefore it's easy to distinguish both cases: we simply test this register.
```latex
\def\multi@column@out{%
  \ifnum\outputpenalty <-\@M
```
If this was a float or a marginpar we call \texttt{\speci@ls}
```latex
    \speci@ls \else
```
otherwise we construct the final page.
Actually a \texttt{\clearpage} will be silently accepted, producing the same effects as a \texttt{\newpage}, since we didn't distinguish between a penalty of $-10000$ and $-10001$ (produced by a \texttt{\clearpage}). Let us now consider the normal case. We have to \texttt{\vsplit} the columns from the accumulated material in box 255. Therefore we first assign appropriate values to \texttt{\splittopskip} and \texttt{\splitmaxdepth}.
```latex
    \splittopskip\topskip
    \splitmaxdepth\maxdepth
```
Then we calculate the current column height (in \texttt{\dimen@}). Note that the height of \texttt{\partial@page} is already subtracted from \texttt{\@colroom} so we can use its value as a starter.
```latex
    \dimen@\@colroom
```
But we must also subtract the space occupied by footnotes on the current page. Note that we first have to reset the skip register to its normal value.
```latex
    \divide\skip\footins\colnumber
    \ifvoid\footins\else
      \advance\dimen@-\skip\footins
      \advance\dimen@-\ht\footins
    \fi
```
Now we are able to \texttt{\vsplit} off all but the last column. Recall that these columns should be saved in the box registers 2, 4, ...
```latex
    \process@cols\tw@{\setbox\count@
      \vsplit\@cclv to\dimen@
```
If \texttt{\raggedcolumns} is in force we add a \texttt{\vfill} at the bottom by unboxing the split box.
```latex
      \ifshr@nking
        \setbox\count@\vbox to\dimen@
          {\unvbox\count@\vfill}%
      \fi}%
```
Then the last column follows.
```latex
    \setbox\z@\vsplit\@cclv to\dimen@
    \ifshr@nking
      \setbox\z@\vbox to\dimen@{\unvbox\z@\vfill}%
    \fi
```
Having done this we hope that box 255 is empty. If not, we reinsert its contents.
```latex
    \ifvoid\@cclv \else
      \unvbox\@cclv
      \penalty\outputpenalty
```
In this case a footnote that happens to fall into the leftover bit will be typeset on the wrong page. Therefore we warn the user if the current page contains footnotes.
The older versions of \multicol produced this warning regardless of whether or not footnotes were present, resulting in many unnecessary warnings.
```latex
      \ifvoid\footins\else
        \@warning{I moved some lines to the next page.
          Footnotes on page \thepage\space might be wrong}%
      \fi
```
If the `tracingmulticols' counter is 3 or higher we also add a rule.
```latex
      \ifnum\c@tracingmulticols>\tw@
        \hrule\allowbreak
      \fi
    \fi
```
With a little more effort we could have done better. If we had, for example, recorded the shrinkage of the material in \texttt{\partial@page} it would now be possible to try higher values for \texttt{\dimen@} (i.e. the column height) to overcome the problem with the nonempty box 255. But this would make the code even more complex so I skipped it in the current implementation. Now we use \LaTeX's standard output mechanism.\footnote{This will produce a lot of overhead since both output routines are held in memory. The correct solution would be to redesign the whole output routine used in \LaTeX.} Admittedly this is a funny way to do it. The macro \texttt{\@makecol} adds all floats assigned for the current page to this page. \texttt{\@outputpage} ships out the resulting box. Note that it is just possible that such floats are present even if we do not allow any inside a multicols environment.
```latex
    \@makecol\@outputpage
```
Now we reset \texttt{\@colroom} to \texttt{\@colht}, which is \LaTeX's saved value of \texttt{\textheight}.
```latex
    \global\@colroom\@colht
```
Then we process deferred floats waiting for their chance to be placed on the next page.
```latex
    \process@deferreds
```
If the user is interested in statistics we inform him about the amount of space reserved for floats.
```latex
    \ifnum\c@tracingmulticols>\z@
      \typeout{Colroom:\space
        \the\@colht\space after float space removed
        = \the\@colroom}%
    \fi
```
Having done all this we must prepare to tackle the next page. Therefore we assign a new value to \texttt{\vsize}. New, because \texttt{\partial@page} is now empty and \texttt{\@colroom} might be reduced by the space reserved for floats.
```latex
    \global\vsize\colnumber\@colroom
    \global\advance\vsize\c@collectmore\baselineskip
```
The \texttt{\footins} skip register will be adjusted when the output group is closed.
```latex
  \fi}
```
We left out two macros: \texttt{\process@deferreds} and \texttt{\speci@ls}. If we encounter a float or a marginpar in the current implementation we simply warn the user that this is not allowed. Then we reinsert the page and its footnotes.
```latex
\def\speci@ls{%
  \typeout{Floats and marginpars not allowed
    inside `multicols' environment!}%
  \unvbox\@cclv\reinsert@footnotes
```
Additionally we empty the \texttt{\@currlist} to avoid later error messages when the \LaTeX{} output routine is again in force. But first we have to place the boxes back onto the \texttt{\@freelist}. (\texttt{\@elt}'s default is \texttt{\relax} so this is possible with \texttt{\xdef}.)
```latex
  \xdef\@freelist{\@freelist\@currlist}%
  \gdef\@currlist{}}
```
\texttt{\process@deferreds} is a simplified version of \LaTeX's \texttt{\@startpage}. We first call the macro \texttt{\@floatplacement} to save the current user parameters in internal registers. Then we start a new group and save the \texttt{\@deferlist} temporarily in the macro \texttt{\@tempb}.
```latex
\def\process@deferreds{%
  \@floatplacement
  \begingroup
    \let\@tempb\@deferlist
```
Our next action is to (globally) empty \texttt{\@deferlist} and assign a new meaning to \texttt{\@elt}. Here \texttt{\@scolelt} is a macro that looks at the boxes in a list to decide whether they should be placed on the next page (i.e. on \texttt{\@toplist} or \texttt{\@botlist}) or should wait for further processing.
```latex
    \gdef\@deferlist{}%
    \let\@elt\@scolelt
```
Now we call \texttt{\@tempb} which has the form \texttt{\@elt}⟨box register⟩\texttt{\@elt}⟨box register⟩... So \texttt{\@elt} (i.e. \texttt{\@scolelt}) will distribute the boxes to the three lists.
```latex
    \@tempb \endgroup}
```
The \texttt{\raggedcolumns} and \texttt{\flushcolumns} declarations are defined with the help of a new \texttt{\if...}
```latex
\newif\ifshr@nking
```
The actual definitions are simple: we just switch to true or false depending on the desired action. To avoid extra spaces in the output we enclose these changes in \texttt{\@bsphack} and \texttt{\@esphack}.
```latex
\def\raggedcolumns{%
  \@bsphack\shr@nkingtrue\@esphack}
\def\flushcolumns{%
  \@bsphack\shr@nkingfalse\@esphack}
```
Now for the last part of the show: the column balancing output routine. Since this code is called with an explicit penalty (\texttt{\eject}) there is no need to check for something special. Therefore we start by assigning the values used by \texttt{\vsplit}.
```latex
\def\balance@columns{%
  \splittopskip\topskip
  \splitmaxdepth\maxdepth
```
Next we measure the length of the current page and at the same time save it in box register 0.
```latex
  \setbox\z@\vbox{\unvbox\@cclv}%
  \dimen@\ht\z@
```
Then we try to find a suitable starting point for the calculation of the column height. It should be less than the height finally chosen, but large enough to reach this final value in only a few iterations.
```latex
  \advance\dimen@\colnumber\baselineskip
  \divide\dimen@\colnumber
```
At the user's request we start with a higher value (or lower, but this usually only increases the number of tries).
```latex
  \advance\dimen@\c@unbalance\baselineskip
```
We type out statistics if we were asked to do so.
```latex
  \ifnum\c@tracingmulticols>\z@
    \typeout{Balance columns%
      \ifnum\c@unbalance=\z@\else
        \space(off balance=\the\c@unbalance)\fi:}%
  \fi
```
Now we try to find the final column height. We start by setting \texttt{\vbadness} to infinity (i.e. 10000) to suppress underfull box reports while we are trying to find an acceptable solution. We do not need to do it in a group since at the end of the output routine everything will be restored. The setting of the final columns will nearly always produce underfull boxes with badness 10000 so there is no point in warning the user about it.
```latex
  \vbadness\@M
```
In order not to clutter up \TeX's valuable main memory with things that are no longer needed, we empty all globally used box registers. This is necessary if we return to this point after an unsuccessful trial. We use \texttt{\process@cols} for this purpose, starting with 1. Note the extra braces around this macro call.
They are needed since \TeX's \texttt{\loop...\repeat} mechanism cannot be nested on the same level of grouping.
```latex
  {\process@cols\@ne{\global\setbox\count@
    \box\voidb@x}}%
```
The contents of box 0 are now copied globally to box 1. (This will be the right-most column, as we shall see later.)
```latex
  \global\setbox\@ne\copy\z@
```
Using \texttt{\vsplit} we extract the other columns from box register 1. This leaves box register 0 untouched so that we can start over again if this trial was unsuccessful. After \texttt{\process@cols} has done its job we have the following situation:
- box 0 ← all material
- box 3 ← first column
- box 5 ← second column ...
- box 1 ← last column
We report the height of the first column.
```latex
  \ifnum\c@tracingmulticols>\z@
    \message{\@spaces First column = \the\ht\thr@@}%
  \fi
```
If \texttt{\raggedcolumns} is in force we also shrink the first column to its natural height and optionally inform the user.
```latex
  \ifshr@nking
    \setbox\thr@@\vbox{\unvbox\thr@@}%
    \ifnum\c@tracingmulticols>\z@
      \message{ after shrinking \the\ht\thr@@}%
    \fi
  \fi
```
Then we give information about the last column.
```latex
  \ifnum\c@tracingmulticols>\z@
    \message{\@spaces Last column = \the\ht\@ne}%
  \fi
```
We check whether our trial was successful. The test used is very simple: we merely compare the first and the last column. Thus the intermediate columns may be longer than the first if \texttt{\raggedcolumns} is used. If the right-most column is longer than the first then we start over with a larger value for \texttt{\dimen@}.
```latex
  \ifdim\ht\@ne>\ht\thr@@
    \advance\dimen@\p@
  \repeat
```
Now we save the actual height of box register 3 (i.e.
the left column) in the \texttt{\dimen} register \texttt{\dimen@} since otherwise this information will be lost when processing the code below.\footnote{The value of \texttt{\dimen@} may differ from the height of box register 3 when we use the \texttt{\raggedcolumns} declaration.}
```latex
  \dimen@\ht\thr@@
```
If the determined height for the columns turns out to be larger than the available space (which is \texttt{\@colroom}) we squeeze the columns into the space, assuming that they will have enough shrinkability to allow this.\footnote{This might be wrong, since the shrinkability that accounts for the amount of material might be present only in some columns. But it is better to try than to give up directly.}
```latex
  \ifdim\dimen@>\@colroom
    \dimen@\@colroom
  \fi
```
Then we move the contents of the odd-numbered box registers to the even-numbered ones, shrinking them if requested. We have to use \texttt{\vbox} not \texttt{\vtop} (as it was done in the first versions) since otherwise the resulting boxes will have no height (The \TeX book, page 81). This would mean that extra \texttt{\topskip} is added when the boxes are returned to the page builder via \texttt{\page@sofar}. This will bring us into the position to apply \texttt{\page@sofar}. But first we have to set \texttt{\vsize} to a value suitable for one column output.
```latex
  \global\vsize\@colroom
  \global\advance\vsize\ht\partial@page
  \page@sofar
```
As we already know, reinserting of footnotes will be done in the macro \texttt{\endmulticols}.

5 New macros and hacks for version 1.2

If we don't use \TeX~3.0, \texttt{\emergencystretch} is undefined, so in this case we simply add it as an unused \texttt{\dimen} register.
```latex
\ifx\emergencystretch\@undefined
  \newdimen\emergencystretch
\fi
```
My tests showed that the following formula worked pretty well. Nevertheless the \texttt{\setemergencystretch} macro also gets \texttt{\vsize} as second argument to enable the user to try different formulae.
```latex
\def\setemergencystretch#1#2{%
  \emergencystretch 4pt
  \multiply\emergencystretch#1}
```
Even if this should be used as a hook, it is more for experts.
```latex
\let\dblfloat\@dblfloat
```
This is cheap (deferring the floats until after the current page) but any other solution would go deep into \LaTeX's output routine and I don't like to work on it until I know which parts of the output routine have to be reimplemented anyway for 2.10 and 3.0. We have to add the float to the \texttt{\@deferlist} because we assume that outside the multicols environment we are in one column mode. This is not entirely correct; I already used the multicols environment inside of \LaTeX's \texttt{twocolumn} declaration, but it will do for most applications.

Frank Mittelbach
Eichenweg 29
D-6500 Mainz 1
Federal Republic of Germany
Bitnet: pf55dez@druede
The expl3 package and \LaTeX3 programming The \LaTeX3 Project\footnote{E-mail: latex-team@latex-project.org} Released 2022-08-05 Abstract This document gives an introduction to a new set of programming conventions that have been designed to meet the requirements of implementing large scale \TeX macro programming projects such as \LaTeX. These programming conventions are the base layer of \LaTeX3. The main features of the system described are: - classification of the macros (or, in \LaTeX terminology, commands) into \LaTeX functions and \LaTeX parameters, and also into modules containing related commands; - a systematic naming scheme based on these classifications; - a simple mechanism for controlling the expansion of a function’s arguments. This system is being used as the basis for \TeX programming within The \LaTeX Project. Note that the language is not intended for either document mark-up or style specification. Instead, it is intended that such features will be built on top of the conventions described here. This document is an introduction to the ideas behind the expl3 programming interface. For the complete documentation of the programming layer provided by The \LaTeX Project, see the accompanying interface3 document. 1 Introduction The first step to develop a \LaTeX kernel beyond \LaTeX2ε is to address how the underlying system is programmed. Rather than the current mix of \LaTeX and \TeX macros, the \LaTeX3 system provides its own consistent interface to all of the functions needed to control \TeX. A key part of this work is to ensure that everything is documented, so that \LaTeX programmers and users can work efficiently without needing to be familiar with the internal nature of the kernel or with plain \TeX. The expl3 bundle provides this new programming interface for \LaTeX. To make programming systematic, \LaTeX3 uses some very different conventions to \LaTeX2ε or plain \TeX. 
As a result, programmers starting with \LaTeX3 need to become familiar with the syntax of the new language. The next section shows where this language fits into a complete \TeX-based document processing system. We then describe the major features of the syntactic structure of command names, including the argument specification syntax used in function names. The practical ideas behind this argument syntax will be explained, together with the expansion control mechanism and the interface used to define variant forms of functions. As we shall demonstrate, the use of a structured naming scheme and of variant forms for functions greatly improves the readability of the code and hence also its reliability. Moreover, experience has shown that the longer command names which result from the new syntax do not make the process of writing code significantly harder. 2 Languages and interfaces It is possible to identify several distinct languages related to the various interfaces that are needed in a \TeX-based document processing system. This section looks at those we consider most important for the \LaTeX3 system. Document mark-up This comprises those commands (often called tags) that are to be embedded in the document (the .tex file). It is generally accepted that such mark-up should be essentially declarative. It may be traditional \TeX-based mark-up such as \LaTeX2ε, as described in [3] and [2], or a mark-up language defined via HTML or XML. One problem with more traditional \TeX coding conventions (as described in [1]) is that the names and syntax of \TeX's primitive formatting commands are ingeniously designed to be “natural” when used directly by the author as document mark-up or in macros. Ironically, the ubiquity (and widely recognised superiority) of logical mark-up has meant that such explicit formatting commands are almost never needed in documents or in author-defined macros.
Thus they are used almost exclusively by \TeX programmers to define higher-level commands, and their idiosyncratic syntax is not at all popular with this community. Moreover, many of them have names that could be very useful as document mark-up tags were they not pre-empted as primitives (e.g. \texttt{\box} or \texttt{\special}). Designer interface This relates a (human) typographic designer's specification for a document to a program that “formats the document”. It should ideally use a declarative language that facilitates expression of the relationship and spacing rules specified for the layout of the various document elements. This language is not embedded in document text and it will be very different in form to the document mark-up language. For \LaTeX, this level was almost completely missing from \LaTeX2.09; \LaTeX2ε made some improvements in this area but it is still the case that implementing a design specification in \LaTeX requires far more “low-level” coding than is acceptable. Programmer interface This language is the implementation language within which the basic typesetting functionality is implemented, building upon the primitives of \TeX (or a successor program). It may also be used to implement the previous two languages “within” \TeX, as in the current \LaTeX system. The last layer is covered by the conventions described in this document, which describes a system aimed at providing a suitable basis for coding \LaTeX3. Its main distinguishing features are summarised here:
- A consistent naming scheme for all commands, including \TeX primitives.
- The classification of commands as \LaTeX3 functions or \LaTeX3 parameters, and also their division into modules according to their functionality.
- A simple mechanism for controlling argument expansion.
- Provision of a set of core \LaTeX3 functions that is sufficient for handling programming constructs such as queues, sets, stacks, and property lists.
- A \TeX{} programming environment in which, for example, all white space is ignored.

3 The naming scheme

\LaTeX3 does not use @ as a “letter” for defining internal macros. Instead, the symbols \_ and : are used in internal macro names to provide structure. In contrast to the plain \TeX{} format and the \LaTeX2ε kernel, these extra letters are used only between parts of a macro name (no strange vowel replacement). While \TeX{} is actually a macro processor, by convention for the expl3 programming language we distinguish between \textit{functions} and \textit{variables}. Functions can have arguments and they are either expanded or executed. Variables can be assigned values and they are used in arguments to functions; they are not used directly but are manipulated by functions (including getting and setting functions). Functions and variables with a related functionality (for example accessing counters, or manipulating token lists, \textit{etc.}) are collected together into a \textit{module}.

3.1 Examples

Before giving the details of the naming scheme, here are a few typical examples to indicate the flavour of the scheme; first some variable names. \texttt{\l_tmpa_box} is a local variable (hence the \texttt{l_} prefix) corresponding to a box register. \texttt{\g_tmpa_int} is a global variable (hence the \texttt{g_} prefix) corresponding to an integer register (i.e. a \TeX{} count register). \texttt{\c_empty_tl} is the constant (\texttt{c_}) token list variable that is always empty. Now here is an example of a typical function name. \texttt{\seq_push:Nn} is the function which puts the token list specified by its second argument onto the stack specified by its first argument. The different natures of the two arguments are indicated by the \texttt{Nn} suffix. The first argument must be a single token which “names” the stack parameter: such single-token arguments are denoted \texttt{N}.
The second argument is a normal \TeX{} “undelimited argument”, which may either be a single token or a balanced, brace-delimited token list (which we shall here call a \textit{braced token list}): the \texttt{n} denotes such a “normal” argument form. The name of the function indicates that it belongs to the \texttt{seq} module.

3.2 Formal naming syntax

We shall now look in more detail at the syntax of these names. A function name in \LaTeX3 consists of three parts: \texttt{\langle module\rangle\_\langle description\rangle:\langle arg-spec\rangle} while a variable name has (up to) four distinct parts: \texttt{\langle scope\rangle\_\langle module\rangle\_\langle description\rangle\_\langle type\rangle} The syntax of all names contains \texttt{\langle module\rangle} and \texttt{\langle description\rangle}; these both give information about the command. A \textit{module} is a collection of closely related functions and variables. Typical module names include \texttt{int} for integer parameters and related functions, \texttt{seq} for sequences and \texttt{box} for boxes. Packages providing new programming functionality will add new modules as needed; the programmer can choose any unused name, consisting of letters only, for a module. In general, the module name and module prefix should be related: for example, the kernel module containing \texttt{box} functions is called \texttt{l3box}. Module names and programmers' contact details are listed in \texttt{l3prefixes.csv}. The \texttt{description} gives more detailed information about the function or parameter, and provides a unique name for it. It should consist of letters and, possibly, \texttt{_} characters. In general, the description should use \texttt{_} to divide up “words” or other easy to follow parts of the name. For example, the \LaTeX3 kernel provides \texttt{\if_cs_exist:N} which, as might be expected, tests if a command name exists.
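As a small illustration of the scheme, the following fragment exercises the \texttt{\seq_push:Nn} function discussed above; the variable names of the form \texttt{\l_my_...} are hypothetical examples, and the comments spell out how each name decomposes:
```latex
\ExplSyntaxOn
% module seq, description new,  arg-spec N  -> declare a sequence
\seq_new:N   \l_my_demo_seq
% module seq, description push, arg-spec Nn -> N: the stack variable,
%                                              n: a braced token list
\seq_push:Nn \l_my_demo_seq { some~tokens }
% pop the top item into the kernel scratch token list \l_tmpa_tl
\seq_pop:NN  \l_my_demo_seq \l_tmpa_tl
\ExplSyntaxOff
```
Note how the argument specification \texttt{Nn} is visible at every call site, so a reader can check at a glance that a single token is passed where \texttt{N} is required and a braced token list where \texttt{n} is required.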
Where functions for variable manipulation can perform assignments either locally or globally, the latter case is indicated by the inclusion of a \texttt{g} in the second part of the function name. Thus \texttt{\tl_set:Nn} is a local function but \texttt{\tl_gset:Nn} acts globally. Functions of this type are always documented together, and the scope of action may therefore be inferred from the presence or absence of a \texttt{g}. See the next subsection for more detail on variable scope. \subsection*{3.2.1 Separating private and public material} One of the issues with the \TeX{} language is that it doesn't support name spaces and encapsulation other than by convention. As a result nearly every internal command in the \LaTeX2ε kernel has eventually been used by extension packages as an entry point for modifications or extensions. The consequence of this is that nowadays it is next to impossible to change anything in the \LaTeX2ε kernel (even if it is clearly just an internal command) without breaking something. In expl3 we hope to improve this situation drastically by clearly separating public interfaces (that extension packages can use and rely on) and private functions and variables (that should not appear outside of their module). There is (nearly) no way to enforce this without severe computing overhead, so we implement it only through a naming convention and some support mechanisms. However, we think that this naming convention is easy to understand and to follow, so we are confident that it will be adopted and provide the desired results. Functions created by a module may either be “public” (documented with a defined interface) or “private” (to be used only within that module, and thus not formally documented). It is important that only documented interfaces are used; at the same time, it is necessary to show within the name of a function or variable whether it is public or private.
To allow clear separation of these two cases, the following convention is used. Private functions should be defined with \texttt{\_\_} added to the beginning of the module name. Thus
\begin{verbatim}
\module_foo:nnn
\end{verbatim}
is a public function which should be documented, while
\begin{verbatim}
\__module_foo:nnn
\end{verbatim}
is private to the module, and should not be used outside of that module. In the same way, private variables should use two \_ characters at the start of the module name, such that \l_module_foo_tl is a public variable and \l__module_foo_tl is private.

### 3.2.2 Using @@ and l3docstrip to mark private code

The formal syntax for internal functions allows clear separation of public and private code, but includes redundant information (every internal function or variable includes \_\_⟨module⟩). To aid programmers, the l3docstrip program introduces the syntax %<@@=⟨module⟩> which then allows @@ (and _@@ in the case of variables) to be used as a placeholder for \_\_⟨module⟩ in code. Thus for example
%<@@=foo>
%    \begin{macrocode}
\cs_new:Npn \@@_function:n #1 ...
\tl_new:N \l_@@_my_tl
%    \end{macrocode}
is converted by l3docstrip to
\cs_new:Npn \__foo_function:n #1 ...
\tl_new:N \l__foo_my_tl
on extraction. As you can see, both _@@ and @@ are mapped to \_\_⟨module⟩, because we think that this helps to distinguish variables from functions in the source when the @@ convention is used. Please note that you have to use the l3docstrip and not the docstrip program in your .ins files to make this work; the original \LaTeX{} docstrip doesn't understand the @@ and will just copy it into your code unmodified!

### 3.2.3 Variables: declaration

In well-formed expl3 code, variables should always be declared before assignment is attempted. This is true even for variable types where the underlying \TeX{} implementation will allow direct assignment. This applies both to setting directly (\tl_set:Nn, etc.) and to setting equal (\tl_set_eq:NN, etc.).
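The declare-before-assign rule can be sketched as follows (the variable names are made up for the example):
```latex
\ExplSyntaxOn
\tl_new:N     \l_my_label_tl                 % declaration comes first
\tl_set:Nn    \l_my_label_tl { Chapter }     % then direct assignment
\tl_new:N     \l_my_copy_tl
\tl_set_eq:NN \l_my_copy_tl \l_my_label_tl   % setting equal
\ExplSyntaxOff
```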
To help programmers adhere to this approach, the debugging option \texttt{check-declarations} may be given:
\begin{verbatim}
\debug_on:n { check-declarations }
\end{verbatim}
This issues an error whenever an assignment is made to a non-declared variable. There is a performance implication, so this option should only be used for testing.

\subsection*{3.2.4 Variables: scope and type}

The ⟨scope⟩ part of the name describes how the variable can be accessed. Variables are classified as local, global or constant. This scope appears as a code at the beginning of the name; the codes used are:

- \texttt{c} constants (global variables whose value should not be changed);
- \texttt{g} variables whose value should only be set globally;
- \texttt{l} variables whose value should only be set locally.

Separate functions are provided to assign data to local and global variables; for example, \texttt{\tl_set:Nn} and \texttt{\tl_gset:Nn} respectively set the value of a local or global “token list” variable. Note that it is poor \TeX{} practice to intermix local and global assignments to a variable, as you risk exhausting the save stack.\footnote{See \textit{The \TeX{}book}, p. 301, for further information.}

The ⟨type⟩ is drawn from the list of available \textit{data types};\footnote{Of course, if a totally new \textit{data type} is needed then this will not be the case. However, it is hoped that only the kernel team will need to create new \textit{data types}.} these include the primitive \TeX{} \textit{data types}, such as the various registers, but to these are added \textit{data types} built within the \LaTeX3 programming system.
The \textit{data types} in \LaTeX3 are:

- \texttt{bool} either true or false (the \LaTeX3 implementation does not use \texttt{\iftrue} or \texttt{\iffalse});
- \texttt{box} box register;
- \texttt{cctab} category code table;
- \texttt{clist} comma separated list;
- \texttt{coffin} a “box with handles”: a higher-level \textit{data type} for carrying out \textit{box} alignment operations;
- \texttt{dim} “rigid” lengths;
- \texttt{fp} floating-point values;
- \texttt{int} integer-valued count register;
- \texttt{ior} an input stream (for reading from a file);
- \texttt{iow} an output stream (for writing to a file);
- \texttt{muskip} math mode “rubber” lengths;
- \texttt{prop} property list;
- \texttt{seq} sequence: a \textit{data type} used to implement lists (with access at both ends) and stacks;
- \texttt{skip} “rubber” lengths;
- \texttt{str} \TeX{} strings: a special case of \texttt{tl} in which all characters have category “other” (catcode 12), other than spaces, which have category “space” (catcode 10);
- \texttt{tl} “token list variables”: placeholders for token lists.

When the ⟨type⟩ and ⟨module⟩ are identical (as often happens in the more basic modules) the ⟨module⟩ part is often omitted for aesthetic reasons.

The name “token list” may cause confusion, and so some background is useful. \TeX{} works with tokens and lists of tokens, rather than characters. It provides two ways to store these token lists: within macros and as token registers (\texttt{toks}). The implementation in \LaTeX3 means that \texttt{toks} are not required, and all operations for storing tokens can use the \texttt{tl} variable type.

Experienced \TeX{} programmers will notice that some of the variable types listed are native \TeX{} registers whilst others are not. In general, the underlying \TeX{} implementation for a data structure may vary, but the documented interface will be stable. For example, the \texttt{prop} data type was originally implemented as a \texttt{toks}, but is currently built on top of the \texttt{tl} data structure.
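A short sketch tying together the scope codes and a few of the data types above (all names are invented for the example; the functions are standard expl3):

```latex
% ⟨scope⟩_⟨module⟩_⟨description⟩_⟨type⟩ in action:
\clist_new:N \l_demo_colors_clist                 % local comma list
\clist_set:Nn \l_demo_colors_clist { red , green , blue }

\seq_new:N \g_demo_files_seq                      % global sequence
\seq_gput_right:Nn \g_demo_files_seq { chapter-one }

\tl_const:Nn \c_demo_greeting_tl { Hello }        % constant token list
```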
\subsection*{3.2.5 Variables: guidance}

Comma lists and sequences have similar characteristics: both use special delimiters to mark out one entry from the next, and both are accessible at both ends. In general, it is easier to create comma lists “by hand”, as they can be typed in directly. User input often takes the form of a comma separated list, and so there are many cases where this is the obvious data type to use. Sequences, on the other hand, use special internal tokens to separate entries. This means that they can contain material that comma lists cannot (such as items that may themselves contain commas!). In general, comma lists should be preferred for creating fixed lists inside programs and for handling user input where commas will not occur, while sequences should be used to store arbitrary lists of data.

expl3 implements stacks using the sequence data structure. Creating a stack thus involves first creating a sequence, and then using the sequence functions which work in a stack manner (\texttt{\seq_push:Nn}, etc.).

Due to the nature of the underlying \TeX{} implementation, it is possible to assign values to token list variables and comma lists without first declaring them. However, this is not supported behaviour: the \LaTeX3 coding convention is that all variables must be declared before use. The expl3 package can be loaded with the \texttt{check-declarations} option to verify that all variables are declared before use. This has a performance implication and is therefore intended for testing during development, not for use in production documents.

\subsection*{3.2.6 Functions: argument specifications}

Function names end with an ⟨arg-spec⟩ after a colon. This gives an indication of the types of argument that a function takes, and provides a convenient method of naming similar functions that differ only in their argument forms (see the next section for examples).
The ⟨arg-spec⟩ consists of a (possibly empty) list of letters, each denoting one argument of the function. The letter, including its case, conveys information about the type of argument required. All functions have a base form whose arguments use one of the following argument specifiers:
\begin{itemize}
\item \texttt{n} Unexpanded token or braced token list. This is a standard \TeX{} undelimited macro argument.
\item \texttt{N} Single token (unlike \texttt{n}, the argument must not be surrounded by braces). A typical example of a command taking an \texttt{N} argument is \texttt{\cs_set:Npn}, in which the command being defined must be unbraced.
\item \texttt{p} Primitive \TeX{} parameter specification. This can be something simple like \texttt{#1#2#3}, but may use arbitrary delimited argument syntax such as \texttt{#1,#2\q_stop#3}. This is used when defining functions.
\item \texttt{T,F} These are special cases of \texttt{n} arguments, used for the true and false code in conditional commands.
\end{itemize}
There are two other specifiers with more general meanings:
\begin{itemize}
\item \texttt{D} Stands for Do not use. This special case is used for \TeX{} primitives. These functions have no standardized syntax; they are engine dependent and their names can change without warning, so their use is strongly discouraged in package code: programmers should instead use the interfaces documented in interface3.pdf.\footnote{If a primitive offers a functionality not yet in the kernel, programmers and users are encouraged to write to the \LaTeX-L mailing list (\texttt{LATEX-L@listserv.uni-heidelberg.de}) describing their use case and intended behaviour, so that a possible interface can be discussed. Temporarily, while an interface is not provided, programmers may use the procedure described in l3styleguide.pdf.}
\item \texttt{w} This means that the argument syntax is “weird” in that it does not follow any standard rule. It is used for functions whose arguments take non-standard forms: examples are \TeX-level delimited arguments and the boolean tests needed after certain primitive \texttt{\if...} commands.
\end{itemize}
In the case of \texttt{n} arguments that consist of a single token, the surrounding braces can be omitted in nearly all situations; functions that force the use of braces even for single-token arguments are explicitly mentioned. However, programmers are encouraged to always use braces around \texttt{n} arguments, as this makes the relationship between function and argument clearer.
Further argument specifiers are available as part of the expansion control system. These are discussed in the next section.

\section*{4 Expansion control}

Let us take a look at some typical operations one might want to perform. Suppose we maintain a stack of open files and we use the stack \texttt{\g_ior_file_name_seq} to keep track of them (\texttt{ior} is the prefix used for the file reading module). The basic operation here is to push a name onto this stack, which could be done by the operation
\begin{verbatim}
\seq_gpush:Nn \g_ior_file_name_seq {#1}
\end{verbatim}
where \texttt{#1} is the filename. In other words, this operation pushes the file name as is onto the stack.

However, we might face a situation where the filename is stored in a variable of some sort, say \texttt{\l_ior_curr_file_tl}. In this case we want to retrieve the value of the variable. If we simply use
\begin{verbatim}
\seq_gpush:Nn \g_ior_file_name_seq \l_ior_curr_file_tl
\end{verbatim}
we do not get the value of the variable pushed onto the stack, only the variable name itself. Instead a suitable number of \texttt{\exp_after:wN}\footnote{\texttt{\exp_after:wN} is the \LaTeX3 name for the \TeX{} \texttt{\expandafter} primitive.} would be necessary (together with extra braces) to change the order of expansion, \textit{i.e.},
\begin{verbatim}
\exp_after:wN \seq_gpush:Nn
\exp_after:wN \g_ior_file_name_seq
\exp_after:wN { \l_ior_curr_file_tl }
\end{verbatim}
The above example is probably the simplest case, but it already shows how the code changes to something difficult to understand. Furthermore, there is an assumption in this: that the storage bin reveals its contents after exactly one expansion. Relying on this means that you cannot do proper checking, and you have to know exactly how a storage bin acts in order to get the correct number of expansions.
Therefore \LaTeX3 provides the programmer with a general scheme that keeps the code compact and easy to understand. To denote that some argument to a function needs special treatment, one simply uses different letters in the ⟨arg-spec⟩ part of the function name to mark the desired behavior. In the above example one would write
\begin{verbatim}
\seq_gpush:NV \g_ior_file_name_seq \l_ior_curr_file_tl
\end{verbatim}
to achieve the desired effect. Here the \texttt{V} (for the second argument) means “retrieve the value of the variable” before passing it to the base function.

The following letters can be used to denote special treatment of arguments before they are passed to the base function:

\textbf{c} Character string used as a command name. The argument (a token or braced token list) is \textit{fully expanded}; the result must be a sequence of characters which is then used to construct a command name (via \texttt{\csname ... \endcsname}). This command name is a single token that is passed to the function as the argument. Hence
\begin{verbatim}
\seq_gpush:cV { g_file_name_seq } \l_tmpa_tl
\end{verbatim}
is equivalent to
\begin{verbatim}
\seq_gpush:NV \g_file_name_seq \l_tmpa_tl
\end{verbatim}
Full expansion means that (a) the entire argument must be expandable and (b) any variables are converted to their content. So the preceding examples are also equivalent to
\begin{verbatim}
\tl_new:N \g_file_seq_name_tl
\tl_gset:Nn \g_file_seq_name_tl { g_file_name_seq }
\seq_gpush:cV { \tl_use:N \g_file_seq_name_tl } \l_tmpa_tl
\end{verbatim}
(Token list variables are expandable, and we could omit the accessor function \texttt{\tl_use:N}. Other variable types require the appropriate ⟨var⟩\texttt{_use:N} functions to be used in this context.)

\textbf{V} Value of a variable. This means that the contents of the register in question are used as the argument, be it an integer, a length-type register, a token list variable or similar. The value is passed to the function as a braced token list.
Can be applied to variables which have a ⟨var⟩\texttt{_use:N} function (other than floating points and boxes), and which therefore deliver a single “value”.

\textbf{v} Value of a register, constructed from a character string used as a command name. This is a combination of \texttt{c} and \texttt{V}, which first constructs a control sequence from the argument and then passes the value of the resulting register to the function. Can be applied to variables which have a ⟨var⟩\texttt{_use:N} function (other than floating points and boxes), and which therefore deliver a single “value”.

\textbf{x} Fully-expanded token or braced token list. This means that the argument is expanded as in the replacement text of an \texttt{\edef}, and the expansion is passed to the function as a braced token list. Expansion takes place until only unexpandable tokens are left. \texttt{x}-type arguments cannot be nested.

\textbf{e} Fully-expanded token or braced token list which does not require doubled \texttt{#} tokens. This expansion is very similar to \texttt{x}-type, but may be nested and does not require that \texttt{#} tokens are doubled.

\textbf{f} Expansion of the first token, applied recursively, in a braced token list. Almost the same as the \texttt{x} type, except that here the token list is expanded fully until the first unexpandable token is found and the rest is left unchanged. Note that if this function finds a space at the beginning of the argument, it gobbles it and does not expand the next token.

\textbf{o} One-level-expanded token or braced token list. This means that the argument is expanded one level, as by \texttt{\expandafter}, and the expansion is passed to the function as a braced token list. Note that if the original argument is a braced token list then only the first token in that list is expanded. In general, using \texttt{V} should be preferred to using \texttt{o} for simple variable retrieval.

\subsection*{4.1 Simpler means better}

Anyone who programs in \TeX{} is frustratingly familiar with the problem of arranging that arguments to functions are suitably expanded before the function is called.
To illustrate how expansion control can bring instant relief to this problem, we shall consider two examples copied from \texttt{latex.ltx}.
\begin{verbatim}
\global\expandafter\let
  \csname\cf@encoding \string#1\expandafter\endcsname
  \csname ?\string#1\endcsname
\end{verbatim}
This first piece of code is in essence simply a global \texttt{\let} whose two arguments first have to be constructed before \texttt{\let} is executed. The \texttt{#1} is a control sequence name such as \texttt{\textcurrency}. The token to be defined is obtained by concatenating the characters of the current font encoding stored in \texttt{\cf@encoding}, which has to be fully expanded, and the name of the symbol. The second token is the same except that it uses the default encoding \texttt{?}. The result is a mess of interwoven \texttt{\expandafter} and \texttt{\csname} beloved of all \TeX{} programmers, and the code is essentially unreadable.

Using the conventions and functionality outlined here, the task would be achieved with code such as this:
\begin{verbatim}
\cs_gset_eq:cc
  { \cf@encoding \token_to_str:N #1 } { ? \token_to_str:N #1 }
\end{verbatim}
The command \texttt{\cs_gset_eq:cc} is a global \texttt{\let} that generates command names out of both of its arguments before making the definition. This produces code that is far more readable and more likely to be correct first time. (\texttt{\token_to_str:N} is the \LaTeX3 name for \texttt{\string}.)

Here is the second example.
\begin{verbatim}
\expandafter\in@\csname sym#3\expandafter\endcsname\expandafter{\group@list}
\end{verbatim}
This piece of code is part of the definition of another function. It first produces two things: a token list, by expanding \texttt{\group@list} once; and a token whose name comes from “\texttt{sym#3}”. Then the function \texttt{\in@} is called, and this tests whether its first argument occurs in the token list of its second argument. Again we can improve enormously on the code.
First we shall rename the function \texttt{\in@}, which tests whether its first argument appears within its second argument, according to our conventions. Such a function takes two normal “\texttt{n}” arguments and operates on token lists: it might reasonably be named \texttt{\tl_test_in:nn}. Thus the variant function we need would be defined with the appropriate argument types, and its name would be \texttt{\tl_test_in:cV}. Now this code fragment would be simply
\begin{verbatim}
\tl_test_in:cV { sym #3 } \group@list
\end{verbatim}
This code could be improved further by using a sequence \texttt{\l_group_seq} rather than the bare token list \texttt{\group@list}. Note that, in addition to the lack of \texttt{\expandafter}, the space after the \texttt{\}} is silently ignored, since all white space is ignored in this programming environment.

\subsection*{4.2 New functions from old}

For many common functions the \LaTeX{} kernel provides variants with a range of argument forms, and similarly it is expected that extension packages providing new functions will make them available in all the commonly needed forms. However, there will be occasions where it is necessary to construct a new such variant form; therefore the expansion module provides a straightforward mechanism for the creation of functions with any required argument type, starting from a function that takes “normal” \TeX{} undelimited arguments.

To illustrate this, let us suppose you have a “base function” \texttt{\demo_cmd:Nnn} that takes three normal arguments, and that you need to construct the variant \texttt{\demo_cmd:cnx}, for which the first argument is used to construct the \textit{name} of a command, whilst the third argument must be fully expanded before being passed to \texttt{\demo_cmd:Nnn}.
To produce the variant form from the base form, simply use this:
\begin{verbatim}
\cs_generate_variant:Nn \demo_cmd:Nnn { cnx }
\end{verbatim}
This defines the variant form so that you can then write, for example,
\begin{verbatim}
\demo_cmd:cnx { abc } { pq } { \rst \xyz }
\end{verbatim}
rather than ... well, something like this!
\begin{verbatim}
\def \tempa {{pq}}%
\edef \tempb {\rst \xyz}%
\expandafter \demo_cmd:Nnn
  \csname abc%
  \expandafter \expandafter \expandafter \endcsname
  \expandafter \tempa
  \expandafter \tempb
\end{verbatim}
Another example: you may wish to declare a function \texttt{\demo_cmd_b:xcxcx}, a variant of an existing function \texttt{\demo_cmd_b:nnnnn}, that fully expands arguments 1, 3 and 5, and produces commands to pass as arguments 2 and 4 using \texttt{\csname}. The definition you need is simply
\begin{verbatim}
\cs_generate_variant:Nn \demo_cmd_b:nnnnn { xcxcx }
\end{verbatim}
This extension mechanism is written so that if the same new form of some existing command is implemented by two extension packages, then the two definitions are identical and thus no conflict occurs.

\section*{5 The distribution}

The expl3 modules are designed to be loaded on top of \LaTeXe{}. The core expl3 language is broadly stable, and thus the syntax conventions and functions provided are now ready for wider use. There may still be changes to some functions, but these will be minor when compared to the scope of expl3. A robust mechanism is in place for such deprecations.

The distribution of expl3 is split into three packages on CTAN: l3kernel, l3packages and l3experimental. For historical reasons, \texttt{\RequirePackage{expl3}} loads the code now distributed as l3kernel. This monolithic package contains all of the modules regarded by the team as stable, and any changes in this code are very limited. This material is therefore suitable for use in third-party packages without concern about changes in support. All of this code is documented in interface3.pdf.
With an up-to-date \LaTeXe{} kernel, this code is built into the format files and can therefore be used without any further steps. The material in l3packages is also stable, but it is not always at a programming level: most notably, xparse is stable and suitable for wider use. Finally, l3experimental contains modules that are ready for public use but not yet integrated into l3kernel. These modules have to be loaded explicitly. The team anticipates that all of these modules will move to stable status over time, but they may be more flexible in terms of interface and functionality detail. Feedback on these modules is extremely valuable.

\section*{6 Moving from \LaTeXe{} to expl3}

To help programmers use expl3 code in existing \LaTeXe{} packages, some short notes on making the change are probably desirable. Suggestions for inclusion here are welcome! Some of the following is concerned with code, and some with coding style.

- expl3 is mainly focused on programming. This means that some areas still require the use of \LaTeXe{} internal macros. For example, you may well need \texttt{\@ifpackageloaded}, as there is currently no native expl3 package loading module.
- User-level macros should be generated using the mechanism available in the xparse package, which is part of the l3packages bundle.
- At an internal level, most functions should be generated \texttt{\long} (using \texttt{\cs_new:Npn}) rather than “short” (using \texttt{\cs_new_nopar:Npn}).
- Where possible, declare all variables and functions (using \texttt{\cs_new:Npn}, \texttt{\tl_new:N}, etc.) before use.
- Prefer “higher-level” functions over “lower-level” ones where possible. For example, use \texttt{\cs_if_exist:NTF} and not \texttt{\if_cs_exist:N}.
- Use space to make code readable. In general, we recommend a layout such as
\begin{verbatim}
\cs_new:Npn \foo_bar:Nn #1#2
  {
    \cs_if_exist:NTF #1
      { \__foo_bar:n {#2} }
      { \__foo_bar:nn {#2} { literal } }
  }
\end{verbatim}
where spaces are used around \texttt{\{} and \texttt{\}} except for isolated \texttt{#1}, \texttt{#2}, etc.
- Put different code items on separate lines: readability is much more useful than compactness.
- Use long, descriptive names for functions and variables, and for auxiliary functions use the parent function name plus \texttt{aux}, \texttt{auxi}, \texttt{auxii} and so on.
- If in doubt, ask the team via the LaTeX-L list: someone will soon get back to you!

\section*{7 Load-time options for expl3}

To support code authors, the expl3 package for \LaTeXe{} includes a small number of load-time options. These all work in a key–value sense, recognising the \texttt{true} and \texttt{false} values. Giving the option name alone is equivalent to using the option with the \texttt{true} value.

\texttt{check-declarations} All variables used in expl3 code should be declared. This is enforced by \TeX{} for variable types based on \TeX{} registers, but not for those which are constructed using macros as the underlying storage system. The \texttt{check-declarations} option enables checking for all variable assignments, issuing an error if any variables are assigned without being initialised. See also \texttt{\debug_on:n {check-declarations}} in l3candidates for finer control.

\texttt{log-functions} The \texttt{log-functions} option enables recording of every new function name in the .log file. This is useful for debugging purposes, as it means that there is a complete list of all functions created by each module loaded (with the exception of a very small number required by the bootstrap code). See also \texttt{\debug_on:n {log-functions}} in l3candidates for finer control.

\texttt{enable-debug} To allow more localized checking and logging than provided by \texttt{check-declarations} and \texttt{log-functions}, expl3 provides a few \texttt{\debug_...} functions (described elsewhere) that turn on the corresponding checks within a group. These functions can only be used if expl3 is loaded with the \texttt{enable-debug} option.
\texttt{backend (env.)} Selects the backend to be used for color, graphics and related operations that are backend-dependent. Options available are \begin{itemize} \item \texttt{dvips} Use the dvips driver. \item \texttt{dvipdfmx} Use the dvipdfmx driver. \item \texttt{dvisvgm} Use the dvisvgm driver. \item \texttt{luatex} Use the direct PDF output mode of Lua\TeX \item \texttt{pdftex} Use the direct PDF output mode of pdf\TeX \item \texttt{xetex} Use the \texttt{Xe\TeX} version of the dvipdfmx driver. \end{itemize} For historical reasons, there is also \texttt{pdfmode} as an equivalent of \texttt{luatex} or \texttt{pdftex}, and \texttt{xdvipdfmx} as an equivalent to \texttt{xetex}, but these are deprecated \texttt{suppress-backend-headers (env.)} The \texttt{suppress-backend-headers} option suppresses loading of backend-specific header files; currently this only affects \texttt{dvips}. This option is available to support DVI-based routes that do not support the \texttt{header} line used by \texttt{dvips}. The debugging options may also be given using \texttt{\keys_set:nn \{ sys \} \{ \ldots \}}; the \texttt{backend} option can be given in this way only if a backend has not already been loaded. This method of setting options is useful where expl3 is pre-loaded by the \LaTeX\textit{2ε} format. 8 Using expl3 with formats other than \LaTeX\textit{2ε} As well as the \LaTeX\textit{2ε} package expl3, there is also a “generic” loader for the code, \texttt{expl3-generic.tex}. This may be loaded using the plain \TeX syntax \begin{verbatim} \input expl3-generic % \end{verbatim} This enables the programming layer to work with the other formats. As no options are available loading in this way, the “native” drivers are automatically used. If this “generic” loader is used with \LaTeX\textit{2ε} the code automatically switches to the appropriate package route. 
After loading the programming layer using the generic interface, the commands \texttt{\ExplSyntaxOn} and \texttt{\ExplSyntaxOff} and the code-level functions and variables detailed in interface3 are available. Note that other \LaTeXe{} packages using expl3 are not loadable in this way: package loading is dependent on the \LaTeXe{} package-management mechanism.

\section*{9 Engine/primitive requirements}

To use expl3 and the higher-level packages provided by the team, the minimal set of primitive requirements is currently:

- All of those from \TeX90.
- All of those from ε-\TeX{}, excluding \texttt{\TeXXeTstate}, \texttt{\beginL}, \texttt{\beginR}, \texttt{\endL} and \texttt{\endR} (\textit{i.e.} excluding \TeX--XeT).
- Functionality equivalent to the pdf\TeX{} primitive \texttt{\pdfstrcmp}.

Any engine which defines \texttt{\pdfoutput} (\textit{i.e.} allows direct production of a PDF file without a DVI intermediate) must also provide \texttt{\pdfcolorstack}, \texttt{\pdfliteral}, \texttt{\pdfmatrix}, \texttt{\pdfrestore} and \texttt{\pdfsave}, or equivalent functionality. Fully Unicode engines must provide a method for producing character tokens in an expandable manner. Practically, these requirements are met by the engines:

- pdf\TeX{} v1.40 or later;
- Xe\TeX{} v0.99992 or later;
- Lua\TeX{} v0.95 or later;
- e-(u)p\TeX{} from mid-2012 or later.

Additional modules beyond the core of expl3 may require additional primitives. In particular, third-party authors may significantly extend the primitive coverage requirements.

\section*{10 The \LaTeX{} Project}

Development of \LaTeX3 is carried out by The \LaTeX{} Project: \url{https://www.latex-project.org/latex3/}.
Trends in Data Locality Abstractions for HPC Systems

To cite this version: Didem Unat, Anshu Dubey, Torsten Hoefler, John Shalf, Mark Abraham, et al. Trends in Data Locality Abstractions for HPC Systems. IEEE Transactions on Parallel and Distributed Systems, 2017, 28 (10), pp. 3007-3020. 10.1109/TPDS.2017.2703149. HAL Id: hal-01621371, https://inria.hal.science/hal-01621371, submitted on 24 Oct 2017.

Abstract—The cost of data movement has always been an important concern in high performance computing (HPC) systems. It has now become the dominant factor in terms of both energy consumption and performance. Support for expression of data locality has been explored in the past, but those efforts have had only modest success in being adopted in HPC applications for various reasons. However, with the increasing complexity of the memory hierarchy and higher parallelism in emerging HPC systems, locality management has acquired a new urgency. Developers can no longer limit themselves to low-level solutions and ignore the potential for productivity and performance portability obtained by using locality abstractions. Fortunately, the trend emerging in recent literature on the topic alleviates many of the concerns that got in the way of their adoption by application developers.
Data locality abstractions are available in the forms of libraries, data structures, languages and runtime systems; a common theme is increasing productivity without sacrificing performance. This paper examines these trends and identifies commonalities that can combine various locality concepts to develop a comprehensive approach to expressing and managing data locality on future large-scale high-performance computing systems.

Index Terms—Data locality, programming abstractions, high-performance computing, data layout, locality-aware runtimes

1 INTRODUCTION

The computing industry has entered a period of technology transition as we strive for the next 1,000x performance improvement over the previous generation of petaflops-scale computing platforms. Over the past 30 years, we have come to expect a 1,000x increase in HPC system performance via technology scaling. With the end of conventional technology improvements (Dennard scaling) in approximately 2004, single processing core performance has ceased to improve with each generation. The industry has adopted a new approach to performance scaling by packing more cores into each processor chip. This multi-core approach continues to drive up the theoretical peak performance of the processing chips, and the computing industry is on track to have chips with thousands of cores by 2020 [43]. The other consequence of the new technology scaling trend is that the energy efficiency of transistors is improving as their sizes shrink, but the energy efficiency of wires is not. Therefore, the relative cost of computation to data movement has become further skewed in favor of computation. By 2018, further improvements to compute efficiency will be undercut by the energy required to move data to the computational cores on a chip [1]; this is manifested in substantial bandwidth tapering at every level of the memory and communication hierarchy.
Bandwidth tapering has been a challenge since the dawn of cache hierarchies, and the remedies (loop blocking, strip-mining, tiling, domain decomposition, and communication optimizations/topology mapping) have been studied for decades. Although the research community has developed extensive compiler and library solutions, only a fraction of these are available in general-purpose systems. Furthermore, with the increase in parallelism and in memory hierarchy depth going from the system to the node to the compute unit level, the already difficult task of managing parallelism has become much more complex. The solutions for bandwidth tapering challenges now need their counterparts at the intra-node, inter-node, and global system levels of communication. Moreover, the dual tension of increasing levels of parallelism and core heterogeneity creates an intractable explosion in complexity. There is an urgent need for a higher level of abstraction in order to shield application developers from this complexity and reduce the effort needed to port codes to different computing platforms. One critical abstraction needed for future tractability of the application space is data locality: a way of expressing computations so that information about the proximity of the data to be used can be communicated to the optimizing software stack. The impact of data locality optimization has moved from being a tuning option to a central feature of code writing to get any performance improvement at all. There needs to be a formalization of commonly used approaches to make the implementations reusable and parametrizable, so that common data abstractions can be used portably and flexibly across multiple architectures without manually re-tuning for each new system. The need for performance portability is on the rise in direct correlation with the rise in platform heterogeneity.
Application developers have begun to realize the enormity of the challenge facing them and have started a dialogue with researchers in programming abstractions to look for effective solutions. This development has opened up a real opportunity for higher-level abstractions to gain traction in the applications communities, especially when the application developers are kept in the loop. We conducted a series of workshops on the topic of programming abstractions for data locality for high performance computing (HPC) that gathered practitioners and researchers from all applicable areas, including computational scientists from multiple science domains [55], [56], [67]. This survey paper distills the outcomes of the series thus far. The objective of this effort is to facilitate the development of this critical research area by (1) defining a common terminology to facilitate future exchange of ideas in the field, (2) describing the current trends in various research domains that directly influence data locality, and (3) recommending directions for future research. We do not claim to have solved or covered every aspect of this enormous challenge; however, the interdisciplinary exchanges between domain scientists and computer scientists at the workshops, and the dissemination of the gathered knowledge, play an essential role in maintaining forward progress in this area. Locality can be expressed and managed at various levels in the computational ecosystem. The bulk of the paper is divided into sections corresponding to research areas that are actively engaged in exploring the issues of locality. Section 2 defines common terminology used to describe the state of the art in the concerned research areas. We examine data locality in the context of data structures and library support in Section 3, language and compiler support in Section 4, runtime approaches in Section 5, and systems level support in Section 6.
All of these research areas have one goal in common: to help applications effectively use the machine for computational science and engineering. Section 7 serves two purposes: it describes the challenges and expectations of application developers, and in doing so it provides perspective on, and cohesiveness to, the research areas discussed in earlier sections. We summarize our findings in Section 8.

2 TERMINOLOGY

We begin by defining commonly used terminology in describing efforts aimed at addressing data locality. Data locality is indicative of how close data is to where it needs to be processed; a shorter distance implies better data locality. A data structure is the organization of a data type onto some particular memory architecture. The memory subsystem is composed of several memory arrays, which can be defined as memory spaces. Not all memory spaces can be managed directly by the programmer (e.g., caches). However, new architectures tend to have multiple user-manageable memory spaces with varying performance characteristics and usage restrictions (e.g., constant memory of GPUs). Application performance is constrained by both the time and energy costs of moving data in service of the computation, which is directly affected by the data access pattern. The data access pattern is a composition of data layout, data decomposition, data placement, task placement, and how the parallel tasks traverse the data structure. Figure 1 illustrates these concepts. Given a data type and memory space (e.g., an array of memory cells), we define data layout as an injective mapping from the elements of the data type to the cells of the single memory space. By extension, we define a distributed layout as the mapping of the elements to multiple memory spaces. Usually a layout can be considered a parameter of a data structure. Layout affects data access patterns, and hence performance; therefore, selecting an appropriate mapping for data structures is important.
Data decomposition is the way that data is partitioned into smaller chunks that can be assigned to different memory spaces for introducing data parallelism or improving data locality. Data placement is the mapping of the chunks of data from a domain-decomposed data structure to memory spaces. Task placement is the assignment of threads of execution to a particular physical processor resource and its related set of memory spaces. Many contemporary programming environments do not offer an automatic method to directly relate the task placement to the data placement, aside from loose policies such as first-touch memory affinity. Index space defines the index domain for data. Iteration space refers to the set of points in a multi-dimensional loop nest irrespective of traversal order. The dimensionality of the iteration space is typically defined in terms of the number of loop nests (e.g., an N-nested loop defines an N-dimensional iteration space). Traversal order indicates the order in which the loop nest visits these indices. A tiling layout is often used to exploit the locality of hierarchical memories. This layout can be viewed as adding additional dimensions to the iteration space in order to identify the tiles and the elements within the tiles. (Figure 1 is inspired by Fuchs and Fuerlinger [21].)

3 DATA STRUCTURES AND LIBRARY SUPPORT

3.1 Key Points

We identified two design principles as important and desired by application programmers: algorithmic execution dependencies and separation of concerns. Note that, in this Section, we use the term application for the user of a library, which in a layered software architecture may be another, higher-level, library or domain-specific language.

3.1.1 Algorithmic Execution Dependence

In general, determining what layout an algorithm should use is difficult. The implementation of an algorithm is written by accessing data elements through some interface, for instance using a tuple of indices to access a multidimensional array.
An implementation can leverage temporal locality by accessing the same data elements multiple times, and spatial locality by accessing nearby data elements, where nearby is a logical concept related to the abstract data type, and not to the implementation of the data structure. We refer to this locality as algorithmic locality. The optimal data locality of the implementation is reached when the data structure layout, in memory, lets the algorithm find the corresponding elements in the closest possible location (relative to the processing elements used by the threads). Different implementations have different algorithmic localities and therefore require different data structure layouts. Typically, the algorithmic locality also depends upon the input, so the layout can be chosen only for the likelihood of locality. For certain applications such as linear algebra or finite-difference stencils, the access trace can be determined by a few simple parameters such as array sizes, which can be used to determine the best layout effectively. These cases are well represented in HPC applications, and several solutions have been implemented to exploit locality, especially within computing nodes. Multi-node implementations require integration with the system-scale locality management discussed in Section 6. Note that we highlight the usefulness of picking the best layout by analyzing the abstract algorithm instead of restructuring existing code; the latter would usually lead to convoluted, non-portable, and unmaintainable code. A library of algorithms and data structures should enable an efficient coupling of algorithm and data structure mappings. Libraries differ in how this coupling can be specified or found, and in how concerns are separated between application programmers and library programmers.

3.1.2 Separation of Concerns

Separation of concerns is a fundamental motivation for developing libraries of algorithms and data structures to be used within applications.
A well-defined separation of concerns clearly identifies who (library or application) is responsible for what. Our focus is on managing data locality, so we limit this "what" to the mapping of data structures to memory spaces and of algorithms to execution spaces. A parallel-enabling library should provide concurrent threads. Different solutions differ on the guarantees they provide for safety, progress, and placement of threads. Low-level libraries like pthreads leave all these concerns to the application; others offer different levels of guarantees depending on the applications (and programmers) they are targeting. An emerging trend in HPC is to delegate responsibilities to libraries, compilers, and runtime systems. **Separating by Parallel Patterns** A common separation is between a parallel pattern and the code body that is executed within that pattern. This can also be explained as decoupling the loop iteration space from the loop body itself. For example, a loop over a range of integers (e.g., FORTRAN do-loop, C/C++ for-loop) is a serial loop pattern that executes a loop body (codelet). Depending on the inter-iteration actions performed by the loop body, this serial loop pattern can often be simply translated to the `foreach`, `reduce`, or `scan` (i.e., prefix sum) data-parallel pattern. Other patterns are possible, such as stencil-like iterations on multidimensional arrays. In this strategy the application is responsible for identifying the parallel pattern and providing a codelet that is (thread) safe to execute within that pattern. The library is then responsible for mapping execution of that codelet onto the execution space according to the pattern and for managing the pattern's inter-thread interactions. For example, a parallel reduce requires thread-local temporary values and an inter-thread reduction of those temporary values.
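The pattern/codelet separation can be made concrete with a small sketch (a deliberately serial C++ illustration with hypothetical names, not any particular library's API): the application supplies only the loop body, while the pattern owns the traversal and the combination of results; a real parallel-enabling library would map the same interface onto threads and manage the thread-local temporaries itself.

```cpp
#include <vector>

// Hypothetical reduce pattern: the "library" controls iteration and
// accumulation; the application contributes only the codelet `body`.
template <typename Codelet>
long reduce_pattern(const std::vector<long>& data, Codelet body) {
    long acc = 0;                        // would be thread-local in a parallel version
    for (long x : data) acc += body(x);  // the pattern owns the traversal
    return acc;
}
```

For example, `reduce_pattern(data, [](long x) { return x * x; })` computes a sum of squares without the application ever spelling out the traversal or the combination step.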
**Separating Location Policies** The mapping of application code bodies via parallel patterns has spatial and temporal scheduling considerations: for example, on which core or at what time the execution of the code body will occur, and whether the mapping is static or dynamic. We label the set of parameters that govern the answers to these questions as a location policy. The number and extensibility of such parameters that a parallel-enabling library has and exposes define the flexibility of that library. **Separating Data Structure Layout** Within a memory space, a computer language hard-codes the layout of its data structures, for example FORTRAN arrays, C/C++ arrays, or C/C++ classes. A library can define data types that abstract the layout specification from the data type mapping. The parameter(s) of this specification may be static (defined at compile time) or dynamic (defined at runtime) and affect the compiler's ability to optimize code accordingly. A library can also define data types with distributed layouts that span multiple memory spaces and can define operations for moving data between memory spaces. The flexibility of this strategy is limited by the layout capabilities and their extensibility. **Integrating Separations** These separation-of-concerns strategies can provide significant flexibility through high-level abstractions (spaces, patterns, policies, layouts). However, the data locality, and thus the performance, of a parallel algorithm is determined by the mappings (data and execution) to hardware that are implemented by these abstractions. Thus, the integrated set of parameters for these abstractions must be chosen appropriately for the algorithm and underlying hardware in order to achieve locality and thus performance. A well-designed parallel-enabling library will provide and expose these parameters such that changing the underlying hardware requires no changes to the application codelets and only trivial changes to the abstractions' parameters.
Such parameter changes could even be chosen automatically based on the target hardware architecture.

### 3.2 State of the Art

Within the confines of existing language standards, one is constrained to leveraging the market breadth of the supporting tool chain (e.g., compilers, debuggers, profilers). Wherever profitable, research can redefine existing languages by amending or extending them (e.g., by changing the specifications or by introducing new APIs). Examples include Kokkos [19], TiDA [66], GridTools [7], hStreams [35], and DASH [22]. The Kokkos library supports expressing multidimensional arrays in C++, in which the polymorphic layout can be decided at compile time. An algorithm written with Kokkos uses the abstract machine of C++ with the data specification and access provided by the interface of Kokkos arrays. Locality is managed explicitly by matching the data layout with the algorithmic locality. TiDA allows the programmer to express data locality and layout at array construction. Under TiDA, each array is extended with metadata that describes its layout, tiling policy, and topological affinity for an efficient mapping onto cores. Like Kokkos, the metadata describing the layout of each array is carried throughout the program and into libraries, thereby offering a pathway to better library composability. TiDA is currently packaged as Fortran and C++ libraries and adopted by the BoxLib AMR framework [75]. GridTools provides a set of libraries for expressing distributed memory implementations of regular grid applications, such as stencils on regular and icosahedral grids. It is not meant to be universal, in the sense that non-regular-grid applications should not be expressed using GridTools libraries. Since the constructs provided by GridTools are high level and semi-functional, locality issues are taken into account at the level of performance tuners and not by application programmers [28]. It expects the application to use its patterns.
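The idea of a polymorphic layout decided at compile time can be sketched in plain C++ (the types below are invented for illustration and are not the real Kokkos API): the algorithm writes `v(i, j)` while a layout policy, fixed as a template parameter, decides the mapping to memory.

```cpp
#include <array>
#include <cstddef>

// Illustrative layout policies (hypothetical names, not the Kokkos API).
struct LayoutRight {   // row-major
    static std::size_t map(std::size_t i, std::size_t j,
                           std::size_t /*rows*/, std::size_t cols) { return i * cols + j; }
};
struct LayoutLeft {    // column-major
    static std::size_t map(std::size_t i, std::size_t j,
                           std::size_t rows, std::size_t /*cols*/) { return j * rows + i; }
};

// A tiny fixed-size 2-D view whose memory mapping is chosen at compile time.
template <typename Layout, std::size_t R, std::size_t C>
struct View2D {
    std::array<double, R * C> cells{};
    double& operator()(std::size_t i, std::size_t j) {
        return cells[Layout::map(i, j, R, C)];
    }
};
```

The algorithm body is unchanged when the layout template argument changes, which is exactly the decoupling of data specification from memory mapping that the libraries above aim for.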
The hStreams library provides mechanisms for expressing and implementing data decomposition, distribution, data binding, data layout, data reference characteristics, and location policy on heterogeneous platforms. DASH is built on a one-sided communication substrate and provides a PGAS (Partitioned Global Address Space) abstraction in C++ using operator overloading. The DASH abstract machine is basically a distributed parallel machine with the concept of hierarchical locality. It is a very general library designed to address scaling of applications at system scale, while leaving the management of threads within a node to the application. Table 1 offers a quick comparison between the libraries presented in this Section. This is intended to be a simple sketch and should not be treated as a comprehensive comparison of these quite complex and rich libraries. Clearly, no single way of treating locality concerns exists, nor is there consensus on which one is the best. Each of these approaches is appealing in different scenarios that depend on the scope of the particular application domain. The opportunity arises for naturally building higher-level interfaces by using lower-level ones. For instance, TiDA or DASH multidimensional arrays could be implemented using Kokkos arrays, or GridTools parallel algorithms could use the DASH library and Kokkos arrays for storage. This is a potential benefit of interoperability that arises from using a common language provided with generic programming capabilities. One outcome of the survey is to initiate efforts to explicitly define the requirements for a common runtime infrastructure that could be used interoperably across these library solutions.

4 LANGUAGE AND COMPILER SUPPORT FOR DATA LOCALITY

While significant advances have been seen in libraries for existing programming languages, especially C++, in facilities that allow for data-locality optimizations, significant limitations remain in what can be accomplished with libraries alone.
C/C++ and Fortran, which dominate the high-performance computing landscape, offer limited facilities for compile-time introspection. By contrast, custom languages are designed to present language-intrinsic abstractions that allow the programmer to explicitly expose parallelism and locality. Such abstractions in turn significantly simplify compiler analysis and optimization and also assist locality management at both the runtime and system levels discussed in Sections 5 and 6.

4.1 Key Points

The following features are key to understanding and designing for data locality from the language and compiler perspective.

4.1.1 Object Visibility

One of the most significant axes in the relevant design space is the choice between local-by-default and global-by-default object visibility. Local-by-default visibility (or local-only visibility) is familiar to any user of MPI, and message-passing is still often an effective way to optimize for data locality. MPI, however, is not the only common example; most GPU-targeted programming models (OpenCL, CUDA, etc.) explicitly represent local memory domains and force the programmer to arrange any necessary transfers. The disadvantage of local-by-default visibility, however, is that it tends to be cumbersome to use. Furthermore, programmer productivity can be low because data locality must be managed in every part of the code, even where performance is not critical or the necessary management logic is boilerplate. Two commonplace language-design techniques improve upon this local-by-default situation. The first, exemplified by Loci [45], provides a programming environment in which declarative annotations and other functional programming techniques can be employed to drive the automated generation of the communication-management and task-scheduling logic.
Declarative solutions tend to have much greater semantic freedom than those embedded in imperative programming languages, allowing more invasive transformations between the input and the resulting implementation. The disadvantage of such systems tends to be a lack of generality; they tend to be domain specific. The second commonplace technique to improve upon the local-by-default situation is to move toward a global-by-default model, at least for certain classes of objects. PGAS models, now widely available in Fortran Co-Arrays [47], Chapel [14], Julia [38], and many other languages, provide some differentiation between local and global objects but allow global access without explicit regard for locality considerations. The compiler and/or runtime system might optimize layout and placement of objects based on their global access pattern, but the degree to which this optimization can be usefully done is still an open question. On the far end of the spectrum are solutions that do not expose any data-locality information to the user directly but depend solely on compilers and runtime libraries to perform any desirable data-locality optimizations. OpenMP falls into this camp, and current experience suggests that the more advanced data locality optimizations sought might prove indefinitely out of reach for its trivial user-facing locality model. One might argue that such optimizations are more important for tools not restricted to the shared-memory part of the hierarchy; but experience suggests that, between NUMA and the proliferation of cores per node, data-locality optimizations are important both on-node and over distributed-memory systems.

4.1.2 Requirements

Effective abstractions for data locality need to have low overhead and high-level semantic information, including information about data dependencies needed by the compiler's optimizer and runtime library.
Dealing with side-effects is key to dependence analysis, and this is an area in which declarative solutions and novel languages often hold distinct advantages because traditional languages make conservative assumptions about the behavior of external function calls. The abstractions need to cover data movement, be it automatic (via caching or similar) or explicit; different levels of control are desirable for different use cases. Profiling, auto-tuning, and user feedback are important additions to purely static determinations, and while user-provided hints will remain important, only tools using these more automated measurement-driven techniques are likely to scale to large codes. Finally, the abstractions have to be composable, as no convergence exists yet on what the most productive paradigms for portable high-performance codes are. While the hierarchical nature of modern hardware is well established, the extent and semantics of its exposure to users are not yet settled, and the optimal answer may be domain specific. Some solutions may be specific to parts of the hierarchy, and an overall solution may require separate tools for different parts of the solution, making composability a key requirement. A generic goal, at a programmatic level, is to encourage programmers to expose all available parallelism in their source code and let the compiler and/or runtime system choose how to best use that freedom on a particular hardware architecture. In practice, this means that the parallelism often needs to be coarsened into larger task units. For example, even if all discretized grid points are independent, having one dispatched task per grid point is likely impractical. The space of potential coarsenings often grows quickly, so some combination of profile-driven feedback and auto-tuning, user-provided grouping preferences, and heuristics is necessary in practical tools.
We also note that even within a particular coarsening scheme, task-execution ordering is important to preserve locality, and ordering considerations must be part of the relevant cost model.

### 4.1.3 Adoption

Regardless of the flavor of the solution, widespread adoption can be supported only if the implementations are treated as proper software engineering projects. It is critical to have invested stakeholders because these projects often involve long time horizons and considerable infrastructure work. They also need a coherent support model and quick bug fixes. Adoption is also greatly enhanced for tools with a small initial learning curve and those that enable incremental transitioning from existing codebases to new ones.

### 4.2 State of the Art

Advances are being made in both C++ and FORTRAN. In C++, memory-aliasing attributes and parallel-algorithm abstractions are being designed, while in FORTRAN, PGAS-style Co-Arrays [51] are now part of the standard. New languages, both general-purpose languages such as Chapel and Julia and domain-specific languages such as Loci, have production-quality implementations and growing user communities. Custom languages have also benefited from strong community compiler infrastructures, which enable functionality reuse. Higher-level tools need standardized, or at least well-supported, lower-level interfaces upon which to build. We also note that the line between language and library is fuzzy in terms of capability and responsibility, and successful programming models often combine a targeted set of language capabilities with strong libraries built on top of those facilities. Chapel [14] is an emerging language that uses a first-class language-level feature, the locale, to represent regions of locality in the target architecture. Programmers can reason about the placement of data and tasks on the target architecture using Chapel's semantic model or runtime queries.
Chapel follows the PGAS philosophy, supporting direct access to variables stored on remote locales based on traditional lexical scoping rules. Chapel also follows the multiresolution philosophy by supporting low-level mechanisms for placing data or tasks on specific locales, as well as high-level mechanisms for mapping global-view data structures or parallel loops to the locales. Advanced users may implement these data distributions and loop decompositions within Chapel itself and can even define the model used to describe a machine’s architecture in terms of locales. X10 [60] is another PGAS language that uses places as analogues to Chapel’s locales. In X10, execution must be colocated with data. Operating on remote data requires spawning a task at the place that owns the data. The user can specify that the new task run asynchronously, in which case it can be explicitly synchronized later and any return value accessed through a future. Thus, X10 makes communication explicit in the form of remote tasks. Hierarchical Place Trees [72] extend X10’s model of places to arbitrary hierarchies, allowing places to describe every location in a hierarchical machine. Unified Parallel C (UPC), Co-Array Fortran (CAF), and Titanium [73] are three of the founding PGAS languages. UPC supports global-view data structures and syntactically invisible communication while CAF has local-view data structures and syntactically evident communication. Titanium has a local-view data model built around ZPL-style multidimensional arrays [15]. Its type system distinguishes between data guaranteed to be local and data that may be remote, using annotations on variable declarations. On the other hand, access to local and remote data is provided by the same syntax. Thus, Titanium strikes a balance between the HPF and ZPL approaches, making communication explicit in declarations but allowing the same code fragments to operate on local and remote data. 
Recent work in Titanium has replaced the flat SPMD model with the more hierarchical Recursive Single-Program, Multiple-Data (RSPMD) model [40]. This model groups together data and execution contexts into teams that are arranged in hierarchical structures, which match the structure of recursive and compositional algorithms and emerging hierarchical architectures. While the total set of threads is fixed at startup as in SPMD, hierarchical teams can be created dynamically, and threads can enter and exit teams as necessary. Titanium provides a mechanism for querying the machine structure at runtime, allowing the same program to target different platforms by building the appropriate team structure during execution. Other work has been done to address the limitations of the flat SPMD model in the context of Phalanx [23] and UPC++ [76], both active libraries for C++. The Phalanx library uses the Hierarchical Single-Program, Multiple-Data (HSPMD) model, which is a hybrid of SPMD and dynamic tasking. The HSPMD model retains the cooperative nature of SPMD by allowing thread teams, as in RSPMD, but it allows new teams of threads to be spawned dynamically. Unlike SPMD and RSPMD, the total set of executing threads is not fixed at startup. Both RSPMD and HSPMD allow expression of locality and concurrency at multiple levels, although through slightly different mechanisms, allowing the user to take advantage of hierarchical architectures. The UPC++ library uses RSPMD as its basic execution model but additionally allows X10-style asynchronous tasks to be spawned at remote locations. This allows execution to be moved dynamically to where data are located and adds a further degree of adaptability to the basic bulk-synchronous model. Compilation of both local-by-default and global-by-default languages can be facilitated by recent developments in polyhedral analysis, which allows the compiler to model the iteration space and all data dependencies for so-called affine code regions.
An affine region is a block of code where all loop iteration variables and array accesses can be modeled by affine functions in Presburger arithmetic [41]. The polyhedral program representation can be used to automatically parallelize programs [9] and, more recently, to automatically map them to complex accelerator memory hierarchies [27], [69].

5 Task-based Runtime Approaches for Data Locality

Traditionally, task-based runtime systems have been used to enable a problem-centric description of an application's parallelism while hiding the details of task scheduling on complex architectures from the programmer. This separation of concerns is probably the most important reason for the success of runtime systems for task models. It enables developers to taskify their applications while focusing on the scientific algorithms they are most familiar with. This paradigm breaks the standard bulk-synchronous programming model inherent to runtimes supporting many state-of-the-art languages (e.g., PGAS), as previously mentioned in Section 3. Programmers delegate all responsibilities related to efficient execution to the task-scheduling runtime, thereby achieving higher productivity and portability across architectures. In light of the growing importance of locality management, runtime systems will need to move past only considering task-centric attributes (load balance, etc.) to also taking into account data-centric attributes (data movement, memory bandwidth, etc.).

5.1 Key Points

At a very basic level, a locality-aware runtime is responsible for mapping the abstract expression of tasks and data at the application level to hardware resources, both compute and memory. The important question, however, is where one draws the line between the programmer (or higher-level abstraction), the runtime, and the hardware.
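One concrete way to draw that line is for the program to declare its task/data associations explicitly and leave placement decisions to the runtime. The following is a minimal hypothetical sketch; all names are invented for illustration and do not correspond to any real runtime's API.

```python
# Hypothetical sketch of a runtime that is told which data chunks each
# task touches, so it can later reason about placement and granularity.
class Runtime:
    def __init__(self):
        self.chunks = {}          # chunk name -> payload
        self.tasks = []           # (fn, chunks it touches)

    def register_chunk(self, name, payload):
        self.chunks[name] = payload

    def submit(self, fn, touches):
        self.tasks.append((fn, tuple(touches)))

    def affinity(self, a, b):
        # Data-centric information the scheduler can exploit:
        # how many submitted tasks touch both chunks?
        return sum(1 for _, t in self.tasks if {a, b} <= set(t))

rt = Runtime()
rt.register_chunk("tile0", [1.0] * 4)
rt.register_chunk("tile1", [2.0] * 4)
rt.submit(lambda: None, touches=["tile0", "tile1"])
rt.submit(lambda: None, touches=["tile0"])
assert rt.affinity("tile0", "tile1") == 1   # one task couples the tiles
```

A scheduler with this information can co-locate coupled chunks, or fuse and split chunks to tune granularity, without the application changing its algorithm.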
Traditionally, hardware has managed a lot of the locality (through the use of cache), but this is shifting as, necessarily, hardware can implement only a limited number of schemes that may not be adapted to all application patterns. Although the exact borders of a locality-aware runtime remain the subject of healthy research, researchers agree that with exascale systems, locality-aware runtimes will need greater cooperation between software and hardware.

5.1.1 Runtime involvement in data locality

Data locality is relevant at three levels: the expression of parallelism in the application, the association of this expressed parallelism with the data, and the mapping of the tasks and data to computing and memory resources. Parallelism can be expressed in either a data-centric or a task-centric view. In the former case, parallelism is expressed mostly through the chunking of data into subsets that can be operated on independently, whereas in the latter case, parallelism is obtained through the chunking of the computation into independent subsets. The expression of parallelism is usually done outside the runtime, either directly by the programmer or with the help of higher-level toolchains. Whether the chunking is task-centric or data-centric, the association between specific data or tasks and their respective chunks must be made known to the runtime. The question of task and data granularity also comes up at this level, but additional information is available to answer it: at this stage, tasks and data are associated, so the runtime has more information to determine the optimal granularity level, taking into account both computing resources and memory resources.
For example, the presence of vector units and GPU warps may push for the coarsening of tasks to be able to fully occupy these units; this will be, in turn, limited by the amount of memory (scratchpads, for example) that is close by to feed the computation units. Considerations such as over-provisioning, which provides parallel slack and helps ensure progress, will also factor into granularity decisions at this level. The third level involving data locality is scheduling. Previous efforts in resource allocation have frequently focused on improving the performance of an application for a particular machine, for example, by optimizing for cache size, memory hierarchy, number of cores, or network interconnect topology and routing. This is most efficiently done with static scheduling. Task-based runtimes that schedule statically may still make use of specific knowledge of the machine to perform their scheduling decisions. Static scheduling of tasks has several advantages over dynamic scheduling, provided a precise enough model of the underlying computing and networking resources is available. The static approach will become more difficult with increasing machine variability. Therefore, while static or deterministic scheduling enables offline data locality optimizations, the lack of a dependable machine model may make the benefits of dynamic scheduling, namely, adaptability and load balancing, more desirable. Of course, in the presence of a machine model, dynamic scheduling may also take advantage of the machine hardware specifications by further refining its runtime decisions. This holistic data-locality strategy at the system level is further explained in Section 6.

5.1.2 Abstractions for Locality

Task-based programming models are notoriously difficult to reason about and debug, given that the parameters specified by the programmer to constrain execution (dependencies) purposefully allow for a wide range of execution options.
Certain task-based runtime systems, which allow the dynamic construction of the task graph (such as OCR), only exacerbate these problems. Tools allowing the programmer to understand the execution of a task-based program need to be developed. This is particularly true when the decisions taken by these runtimes will be more and more impacted by data locality considerations that may be obscure to the end user. These tools will need to cover two broad areas: 1) information on the execution flow of the application in terms of the tasks and data elements defined by the user and, more importantly, 2) information about the mapping of those tasks and data elements to the computing and memory resources. For instance, OmpSs [4] and its dynamic runtime Nanos++ come with substantial support for performance analysis in the form of instrumentation tools for tracing and profiling of task executions. In particular, the core instrumentation package Extrae [2] and the flexible data browser Paraver [3] provide useful insights on task scheduling and hardware usage in order to help the application developer identify potential performance bottlenecks. Nested or recursive algorithmic formulation, as in cache-oblivious algorithms [20], is a well-known technique to increase data reuse at the high levels of the memory hierarchy and, therefore, to reduce memory latency overheads. This often requires slight changes in the original algorithm. Nested parallelism can also enable a smart runtime to determine an optimal level of granularity based on the hardware available. This does require, however, that the runtime be made aware of the hierarchical nature of the tasks and data so that it may properly co-schedule iterations that share the same data. This approach is well suited for applications that have well-defined data domains that can be easily divided (spatial decomposition, for example). For algorithms that do not expose as much structure, nested parallelism may not be suited.
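The recursive formulation just mentioned can be sketched with a classic cache-oblivious example, an out-of-place matrix transpose: the recursion subdivides the problem until blocks are small enough to fit in whatever cache level exists, with no cache parameters appearing in the code. Plain Python lists stand in for real arrays here; the block-size threshold of 4 is arbitrary for illustration.

```python
# Cache-oblivious (recursive) out-of-place transpose sketch: always split
# the longer dimension, so blocks become roughly square and eventually
# fit in every level of the cache hierarchy, whatever its sizes are.
def transpose(src, dst, r0, r1, c0, c1):
    if (r1 - r0) * (c1 - c0) <= 4:           # base case: tiny block
        for i in range(r0, r1):
            for j in range(c0, c1):
                dst[j][i] = src[i][j]
    elif r1 - r0 >= c1 - c0:                 # split the longer dimension
        m = (r0 + r1) // 2
        transpose(src, dst, r0, m, c0, c1)
        transpose(src, dst, m, r1, c0, c1)
    else:
        m = (c0 + c1) // 2
        transpose(src, dst, r0, r1, c0, m)
        transpose(src, dst, r0, r1, m, c1)

A = [[1, 2, 3], [4, 5, 6]]
T = [[0, 0], [0, 0], [0, 0]]
transpose(A, T, 0, 2, 0, 3)
assert T == [[1, 4], [2, 5], [3, 6]]
```

In a task-based setting the two recursive calls in each branch are independent and could be spawned as child tasks, which is exactly the hierarchical task/data structure the text argues the runtime should be told about.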
A more generalized notion of closeness is needed: programmers need to be able to express a certain commonality between tasks in terms of their data. The reason is that reducing data movement needs to happen at all levels of the system architecture in order to be effective: from the single CPU socket within a multi-socket shared-memory node up to multiple distributed-memory nodes linked through the high-performance network interconnect. This bottom-up approach highlights the need for programmers to expose various levels of closeness so that a runtime can map the application's abstract structure to the concrete hardware instance it is executing on, as detailed by the machine model. Many numerical algorithms are built on top of optimized basic blocks. For instance, dense eigensolvers require three computational stages: matrix reduction to condensed form, an iterative solver to extract the eigenvalues, and back transformation to get the associated eigenvectors. Each stage corresponds to an aggregation of several computational kernels, which may already be optimized independently for data locality. However, the ability to express locality constraints across the various stages is important; in other words, it must be possible to express how locality composes.

5.2 State of the Art

Standardization is widely considered desirable, but there is disagreement as to the level at which this standardization should happen. One option is to standardize the APIs at the runtime level, in a way similar to the Open Community Runtime (OCR) [36]. Another option is to standardize the interface of the programming model, as OpenMP or OmpSs [4] do. Currently there is no clear reason to favor a particular scheme, so both approaches are being actively researched. A locality-aware runtime needs to know about associations of data and tasks in order to simultaneously enable scheduling tasks and placing data.
This association can be explicitly specified by the user (for example, in OCR), discovered by an automated tool (for example, with the R-Stream compiler [46]), extracted from a higher-level specification from the user (Legion [5], HTA [8], RAJA [33], OpenMP, etc.), or derived from application metadata, as in Perilla [50]. Another big challenge is how to communicate the hierarchical data properties of an application to the runtime so that they can be exploited to generate efficient schedules. Classical random work stealers (e.g., Cilk) do not exploit this. Socket-aware policies exist (e.g., Qthreads [52]) that perform hierarchical work stealing: first among cores in a socket and then among sockets. Some programming models expose an API that allows programmers to specify on which NUMA node/socket a collection of tasks should be executed (e.g., OmpSs [4]). Configurable work stealers that can be customized with scheduling hints have also been developed [71]. A more extreme option is to allow the application programmer to attach a custom work-stealing function to the application [48].

6 System-Scale Data Locality Management

The highest level in the stack is the whole system, which usually comprises a complex topology ranging from on-chip networks to datacenter-wide interconnection topologies. Optimizing for locality during program execution at this level is as important as at all other levels.

6.1 Key Points

System-scale locality management consists of optimizing application execution, taking into account both the data access of the application and the topology of the machine to reduce node-level data movement. Therefore, in order to enable such optimization, two kinds of models are required: an application model and an architecture model. At system scale one must describe the whole ecosystem. Among the relevant elements of the ecosystem are: the cache hierarchy; the memory system and its different operating modes, such as slow and large vs.
fast and small, or persistent vs. volatile; the operating system; the network, with concerns such as protocol, topology, addressing, and performance; the storage, with its connected devices and strategy; and the batch scheduler, which has knowledge of available resources and of other running applications that may interfere with execution. Applications need abstractions allowing them to express their behavior and requirements in terms of data access, locality, and communication at runtime. For these, we need to define metrics to capture the notions of data access, affinity, and network traffic. Defining metrics that describe application behavior in a concise and precise manner is still a research topic. Often, an affinity graph describing how the different parts of the application interact is useful in managing locality. However, such affinity can be positive (components need to be mapped close together due to shared data) or negative (components need to be spread across the system because of potential contention when accessing shared resources: memory, storage, network, etc.). A good model of affinity is not yet available in the literature. A hardware model is also needed to control locality. Models of future large-scale parallel machines will have to describe the memory system better and provide a view integrated with the nodes and the network. The models will also need to capture qualitative knowledge and provide ways to express the multiscale properties of the machine.

6.1.1 Trends and Requirements

We can see different trends affecting the way locality is managed at system scale. Concerning node and topology modeling, we note that even if NUMA systems are mostly hierarchical, this is no longer true when we consider the network. Moreover, manycore architectures such as the Intel Knights Landing do not feature a strictly hierarchical memory. This means that process placement algorithms need to be able to address arbitrary topologies.
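A placement algorithm for arbitrary topologies typically works against exactly these two models: an application communication (affinity) matrix and a hardware distance matrix. The toy sketch below shows the objective such an algorithm optimizes; the names and the tiny two-node machine are illustrative only, not any real tool's API.

```python
# Topology-aware placement evaluation sketch: given a communication
# matrix (traffic between process pairs) and per-pair hop distances in
# the machine, the cost of a mapping is sum(traffic * distance); a
# placement algorithm tries to minimize this objective.
def mapping_cost(comm, mapping, distance):
    return sum(vol * distance[mapping[i]][mapping[j]]
               for (i, j), vol in comm.items())

# Two nodes of two cores each: distance 1 within a node, 3 across nodes.
dist = [[0, 1, 3, 3],
        [1, 0, 3, 3],
        [3, 3, 0, 1],
        [3, 3, 1, 0]]
comm = {(0, 1): 10, (2, 3): 10, (0, 2): 1}   # heavy pairs: (0,1), (2,3)
good = {0: 0, 1: 1, 2: 2, 3: 3}   # heavy pairs stay inside a node
bad = {0: 0, 1: 2, 2: 1, 3: 3}    # heavy pairs split across nodes
assert mapping_cost(comm, good, dist) < mapping_cost(comm, bad, dist)
```

A dynamic model, as discussed above, would re-instantiate `comm` at runtime and re-evaluate (or incrementally repair) the mapping as application behavior changes.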
Moreover, even large networks often have low diameter (e.g., diameter-3 Dragonfly [42] or diameter-2 Slim Fly [6] topologies). Therefore, topology mapping could become less important in some cases, as placement may have a smaller impact on performance or simple random strategies may provide close-to-optimal performance. Yet, this is not generally true for low-diameter topologies [59]. Precise models of application behavior and the underlying platform are needed in order to understand how placement and data layout impact performance. In the case of very large machines such as top-end supercomputers featuring millions of cores, the algorithmic cost of process placement becomes very high. Being able to design hierarchical algorithms is required in that setting. Another important consideration is the ability to deal with dynamic behavior. Topology-aware dynamic load balancing is a hot topic [37], [58], which concerns itself with managing change in application behavior and coping with affinity dependence on the input dataset. This requires modification of the affinity modeling from a static model (e.g., the same communication matrix for the whole application execution) to a dynamic model (e.g., instantiating the communication matrix at runtime). At system scale it is important to manage affinity for the whole application ecosystem. Currently, locality is managed independently for the volatile memory, the NVRAM, the storage, the network, etc. It is crucial to account for these different resources at the same time to perform global locality optimizations. For instance, optimizing storage access and memory access simultaneously yields good performance gains, as early results show [64]. Additionally, research into the layer above the parallel file system is beginning to uncover methods of orchestrating I/O between applications [16].
This type of high-level coordination can assist in managing shared resources such as network links and I/O gateways and is complementary to an understanding of the storage data layout itself. It can also enable optimization of locality management for several applications at the same time.

6.2 State of the Art

No strict standard way exists to describe and enforce process and thread mapping. For example, techniques for thread binding depend on the underlying operating system, the runtime system (MPI, PGAS, etc.), and even the implementation (e.g., OpenMPI vs. MPICH). Arguably some progress has been made; for example, MPI-3 provides an interface that allows one to detect which processes are in a shared-memory domain (i.e., on the same node) [32]. Other interfaces, for example, thread binding at startup, are not standardized, but MPI allows them to be implemented at the mpiexec level. Modeling the data-movement requirements of an application in terms of network traffic and I/O can be supported through performance-analysis tools such as Scalasca [24] for distributed memory or performance counter analysis for shared-memory systems. It can also be done by tracing data exchange at the runtime level with a system such as OVIS [53], [63], by monitoring the messages transferred between MPI processes, for instance. Hardware locality (hwloc) [25], [34] is a library and a set of tools for discovering and exposing the hardware topology of machines, including processors, cores, threads, shared caches, NUMA memory nodes, and I/O devices. Netloc [26], [49] is a network model extension of hwloc to account for locality requirements of the network, including the fabric topology. For instance, the network bandwidth and the way contention is managed may change the way the distance within the network is expressed or measured. The problem is even more important if we consider the way applications are allocated to resources and how they access storage.
This requires optimizations between applications. Currently, resource managers or job schedulers such as SLURM [74], OAR [11], LSF [77], or PBS [30] allocate nodes to processes. However, none of them can match the application requirements in terms of communication with the topology of the machine and the constraints incurred by already mapped applications. Similarly, parallel file systems such as Lustre [10], GPFS [61], PVFS [12], and PanFS [70] and I/O libraries such as ROMIO [65], HDF5 [29], and Parallel netCDF [44] are responsible for organizing data on external storage (e.g., disks) and moving data between application memory and external storage over system networks.

7 Applications’ Expectations from Abstractions

An application developer is concerned with end-to-end parallelization and may be faced with different parallelization needs in different parts of the application [62]. Data locality for applications is often a direct map from their modeling and discretization methods. We can loosely map the applications along two dimensions: spatial connectivity and functional connectivity. In this map, the lower end of the spatial-connectivity axis would have applications that are embarrassingly parallel, and the top end would have dynamic connectivity such as adaptive meshing. The functional-connectivity axis would have single-physics applications at the lower end, whereas at the high end would be applications whose components are swapped in and out of active state. Being placed higher along an axis implies greater challenges in achieving locality. HPC applications typically fall into the fourth quadrant, where both spatial and functional connectivities are high [67]. Applications communities have well-known and valid concerns about wisely utilizing developers’ time and protecting the investment already made in the mature production codes of today [13], [31].
An important consideration for the applications community, therefore, is the time scale of change in paradigms in the platform architecture and major rewrites of their codes. Even with those constraints, however, many possibilities exist in application infrastructure design to expose the potential for data locality, and therefore performance, if appropriate abstractions can be made available. A stable programming paradigm with a lifecycle that is several times the development cycle of the code must emerge for sustainable science. It can take any of the forms under consideration, such as embedded domain-specific languages, abstraction libraries, or full languages, or some combination of these, as long as long-term support and commitment are provided, as well as a way to make an incremental transition to the new paradigm.

7.1 Overview of Concerns

Abstractions often apply easily to simple problems; but where the computation deviates from the simple pattern, the effectiveness of the abstraction decreases. A useful abstraction would also allow itself to be ignored or turned off as needed. In the context of data locality, that might mean an ability to express the inherent hierarchical parallelism in the application in a declarative instead of an imperative way, leaving the code translators (compilers or autotuners) to carry out the actual mapping. Other less considered but possibly equally critical concerns relate to expressibility. Application developers may have a clear notion of their data model yet no way of expressing that model effectively in the available data structures and language constructs. There is no theoretical basis for the analysis of data movement within local or remote memory. Because of this lack of formalism to inform application developers about the implications of their choices, the data structures get locked into the implementation before the algorithm design is fully fleshed out.
The typical development cycle of a numerical algorithm focuses on correctness and stability first, and then performance. By the time performance analysis tools are applied, it can be too late for anything but incremental corrective measures, which usually reduce the readability and maintainability of the code. A better approach would be to model the expected performance of a given data model before completing the implementation and to let the design be informed by the expected performance model throughout the process. Such a modeling tool would need to be highly configurable, so that its conclusions might be portable across a range of compilers and hardware and valid into the future, in much the same way that numerical simulations often use ensembles of input-parameter space in order to obtain conclusions with reduced bias. Below we discuss application developers’ concerns that tie into the data locality abstractions discussed in earlier sections.

7.2 Data Structures

Data layout and movement have a direct impact on the implementation complexity and performance of an application. Since these are determined by the data structures used in the implementation, this is an important concern for the application. Any effort that moves in the direction of allowing the application to describe the working set through a library or an abstraction is likely to prove useful. Most languages provide standard containers and data structures that are easy to use in high-level code; yet few languages or libraries provide interfaces for the application developer to inform the compiler about expectations of data locality, data layout, or memory alignment. For example, a common concern for PDE solvers is the data structure containing multiple field components that have identical spatial layout.
Should it be an array with an added dimension for the field components, or a structure? And within the array or structure, what should be the order of storage in memory for performance [17], [18]? There is no one best layout for every platform. State-of-the-art abstractions and tools described in Section 3 are working towards making that a programming-abstraction concern instead of an application concern. Other abstractions that could be helpful for performance include allowing persistence of data between two successive code modules.

7.3 Languages and Compilers

The state of the art in parallel programming models currently used in applications is a hybrid model such as MPI+OpenMP or MPI+CUDA/OpenCL. The former is both local-by-default (MPI) and global-by-default (OpenMP), while the latter is local-by-default only (object visibility as defined in Section 4). Since the two models target different classes of platforms, they do not really overlap. PGAS models have much less penetration in the field than do the above two models. In general a global-by-default model is easier to adopt, but it is much harder to make it performant. The real difficulty in designing for parallelism lies in finding the best hierarchical decomposition inherent to the application. That is basically the hierarchical version of the local-by-default approach. Abstractions such as tiling can be helpful in expressing hierarchical parallelism. Because it is explicitly aware of locality, a local-by-default design can be more easily mapped to a performant global design. The transition to a new programming language, although likely to be optimal eventually, is not a realistic solution in the near term. In addition to the usual challenge of sustainability (it might go away), the need for verification dictates incremental adoption for existing codes. Therefore, either embedded DSLs or new languages with strong interoperability with the existing languages are likely to have a better chance of being adopted.
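Returning to the layout question raised in Section 7.2: the same fields can be stored array-of-structs (AoS) or struct-of-arrays (SoA), and which is faster depends on access pattern and platform, which is why the text argues layout should be an abstraction concern rather than hard-wired into the application. A toy sketch, with Python dicts and lists standing in for real arrays:

```python
# AoS vs. SoA sketch for two fields (density rho, velocity vx) per cell.
# Both layouts answer the same queries; they differ only in memory layout
# (interleaved fields vs. one contiguous array per field).
def make_aos(n):
    return [{"rho": float(i), "vx": 2.0 * i} for i in range(n)]

def make_soa(n):
    return {"rho": [float(i) for i in range(n)],
            "vx": [2.0 * i for i in range(n)]}

def sum_rho_aos(cells):
    # Field-wise sweep over AoS: strided access, skipping the vx values.
    return sum(c["rho"] for c in cells)

def sum_rho_soa(cells):
    # Field-wise sweep over SoA: contiguous access over a single array.
    return sum(cells["rho"])

n = 100
assert sum_rho_aos(make_aos(n)) == sum_rho_soa(make_soa(n)) == sum(range(n))
```

A layout abstraction, as in the Section 3 tools, lets the application write the sweep once while the toolchain picks AoS or SoA per platform.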
The amount of effort required by the applications to transition to the new programming model will be another factor in its success. Irrespective of which solution emerges, it must provide a robust and clean way of handling threads for interoperability among the various components of the application. Also, just-in-time compilation will be helpful to many applications with highly variable runtime characteristics.

7.4 Runtime

The vast majority of applications in computational science and engineering continue to operate in largely bulk-synchronous mode, with a few notable exceptions such as Uintah [54] and applications built upon Charm++ [39] such as NAMD [57]. As the applications see it, this approach has two major benefits: many applications have a built-in regularity and therefore map well to the bulk-synchronous mode, and it takes care of dependencies within the application trivially. Evidence indicates, however, that this state of affairs may not remain attractive or even feasible, because heterogeneity in hardware is unfavorable to regimented lock-step execution. Additionally, capability additions in applications make them more heterogeneous. However, the jury is still out on whether the overheads of asynchronicity will be outweighed by the benefits of pipelining and overlapping permitted by a task-based runtime. A good API that allows articulating the hierarchical decomposition and dependencies easily is likely to help applications think about runtime locality and reason about their code functionally without implementing it in a functional language. Such an approach is needed to ease their way away from bulk synchronism.

7.5 System-Scale

System-wide scalability is an important cross-cutting issue since the targets are very large-scale, high-performance computers. On the one hand, application scalability will depend mostly on the way data is accessed and locality is managed.
On the other hand, the proposed solutions and mechanisms have to run at the same scale as the application, which limits their inner decision time. That, in turn, makes it important to tackle the problem for the whole system: taking into account the whole ecosystem of the application (e.g., storage, resource manager) and the whole architecture (i.e., from cores to network). Novel approaches are needed to control data locality system-wide, by integrating cross-layer I/O stack mechanisms with cross-node topology-aware mechanisms. Another challenge is that often each layer of the software stack is optimized independently to address locality concerns, with the result that outcomes sometimes conflict. It is therefore important to observe the interaction of different approaches and propose integrated solutions that provide a global optimization across different layers. An example of such an approach is mapping independent application data accesses to a set of storage resources in a balanced manner. This approach requires an ability to interrogate the system regarding what resources are available, some distance metric in terms of application processes, and coordination across those processes (perhaps supported by a system service) to perform an appropriate mapping. Ultimately, validating the models and solutions to these concerns and challenges will be a key challenge.

8 Summary

The objective of the series of workshops on Programming Abstractions for Data Locality (PADAL) is to form a community of researchers around the notion that data locality comes first as the primary organizing principle for computation. This paradigm shift from compute-centric towards data-centric specification of algorithms has upended assumptions that underpin our current programming environments. Parallelism is inextricably linked to data locality, yet current programming abstractions are centered on abstractions for compute (threads, processes, parallel do-loops).
The time has arrived to embrace data locality as the anchor for computation. PADAL has identified a community that is actively exploring a wide-open field of new approaches to describing computation and parallelism in a way that conserves data movement. A number of these projects have produced working technologies that are rapidly approaching maturity. During this early phase of development, it is crucial to establish research collaborations that leverage commonalities and opportunities for interoperation between these emerging technologies. Much research in this area (as with all emerging fields of research) has focused on rapidly producing implementations to demonstrate the value of data-centric programming paradigms. In order to get to the next level of impact, there is a benefit to formalizing the abstractions for representing data layout patterns and the mapping of computation to the data where it resides. It is our desire to create standards that promote interoperability between related programming systems and cooperation to ensure all technology implementations offer the most complete set of features possible for a fully functional programming environment. The only way to achieve these goals is for this community to organize, consider our impact on the design of the software stack at all levels, and work together towards the goal of creating interoperable solutions that contribute to a comprehensive environment.

Acknowledgments

The authors would like to thank the other PADAL14 and PADAL15 workshop participants: Maciej Besta, Jed Brown, Cy Chan, Sung-Eun Choi, Jack Choquette, Brice Goglin, Jesus Labarta, Leonidas Linardakis, Edward Luke, Satoshi Matsuoka, Peter Messmer, Lawrence Mitchell, Kathryn O’Brien, David Padua, Robert B. Ross, Marie-Christine Sawley, Robert Schreiber, Thomas Schulthess, James Sexton, Suzanne Michelle Shontz, Adrian Tate, Tobias Gysi, Toshio Endo, Mohamed Wahib, Chih-Chieh Yang.
This work was partially supported by the German Research Foundation under contract TE 163/17-1 and by Grant 659965 from the European Commission.

IEEE TRANSACTIONS ON PARALLEL AND DISTRIBUTED SYSTEMS

Didem Unat is an Assistant Professor of Computer Science and Engineering at Koç University, Istanbul, Turkey. Previously she was at Lawrence Berkeley National Laboratory, where she received the Luis Alvarez Fellowship in 2012. Her research interests lie primarily in the areas of high-performance computing, parallel programming models, compiler analysis, and performance modeling. Visit her group webpage, parcorelab.ku.edu.tr, for more information.

Anshu Dubey is a Computer Scientist in the Mathematics and Computer Science Division at Argonne National Laboratory and a Senior Fellow at the Computation Institute. From 2013 to 2015 she was at Lawrence Berkeley National Laboratory, where she served as work lead and computer systems engineer. Prior to that she was associate director and computer science/applications group leader in the Flash Center for Computational Science at the University of Chicago. She received her Ph.D. in Computer Science from Old Dominion University in 1993 and a B.Tech in Electrical Engineering from the Indian Institute of Technology, New Delhi.

Torsten Hoefler is an Assistant Professor of Computer Science at ETH Zürich, Switzerland. He is active in the Message Passing Interface (MPI) Forum, where he chairs the Collective Operations and Topologies working group. His research interests revolve around the central topic of “Performance-centric Software Development” and include scalable networks, parallel programming techniques, and performance modeling. Additional information about Torsten can be found on his homepage at htor.inf.ethz.ch.

John Shalf is CTO of the National Energy Research Scientific Computing Center and head of the Computer Science Department at Lawrence Berkeley National Laboratory.
His research interests include parallel computing software and high-performance computing technology. Shalf received a MS in electrical and computer engineering from and Virginia Tech. He is a member of the American Association for the Advancement of Science, IEEE, and the Optical Society of America, and coauthor of the whitepaper The Landscape of Parallel Computing Research: A View from Berkeley (UC Berkeley, 2006). Contact him at jshalf@lbl.gov. Mark Abraham is a research scientist at the KTH Royal Technical University in Stockholm, Sweden. He manages the worldwide development of the high-performance molecular simulation package GROMACS. Mauro Bianco is currently a Computer Scientist at Swiss National Supercomputing Centre. His focus is on the design and development of domain specific C++ libraries for parallel and portable scientific simulations. Bradford L. Chamberlain is a Principal Engineer at Cray Inc. where he serves as the technical lead for the Chapel parallel programming language project. He earned his PhD from the Department of Computer Science and Engineering at the University of Washington and remains associated with the department as an Affiliate Professor. Romain Cledat is currently a leading developer on the Open Community Runtime (OCR) as part of DoE’s XStack project which aims to develop the software infrastructure for Exascale computing. Romain graduated in 2011 from the Georgia Institute of Technology with a PhD in Computer Science. He also holds a MS in Electrical and Computer Engineering from Georgia Tech and a Masters in Engineering from the Ecole Centrale de Lyon (France). H. Carter Edwards is a Principal Member of Technical Staff at Sandia National Laboratories in Albuquerque, New Mexico, where he leads research in distributed memory resource management. He has won R&D 100, US Patent, and Federal-Laboratory-Consortium Excellence-in-Technology-Transfer Awards for work in this area. 
He is a Senior Member of the ACM and has been a Member of Technical Staff at Bell Laboratories in Holmdel, New Jersey and a Regents Dissertation Fellow at the University of California. Naoya Maruyama is a Team Leader at RIKEN Advanced Institute for Computational Science, where he leads the HPC Programming Framework Research Team. His team focuses on high-level parallel frameworks for computational science applications. Hatem Ltaief is a Senior Research Scientist in the Extreme Computing Research Center at KAUST. His research interests include parallel numerical algorithms, fault tolerant algorithms, parallel programming models, and performance optimizations for multicore architectures and hardware accelerators. His current research collaborators include Aramco, Total, Observatoire de Paris, NVIDIA, and Intel. Chris J Newborn serves as an HPC architect, focused on Intel’s Xeon Phi product family. He has contributed to a combination of hardware and software technologies over the last twenty years. He has a passion for making the richness of heterogeneous platforms easier to use. He has over 80 patents. He wrote a binary-optimizing, multi-grained parallelizing compiler as part of his Ph.D. at Carnegie Mellon University. He’s delighted to have worked on volume products that his Mom uses. Frank Hannig (M’01–SM’12) leads the Architecture and Compiler Design Group in the CS Department at Friedrich-Alexander University Erlangen-Nürnberg (FAU), Germany. His main research interests are the design of massively parallel architectures, ranging from dedicated hardware to multi-core architectures, domain-specific computing, and architecture/compiler co-design. Emmanuel Jeannot is a senior research scientist at Inria, in Bordeaux, France. He leads the Tadaam (Topology-Aware System-Scale Data Management for High-Performance Computing) project team and works on runtime system and process placement.
Resilient Distributed Datasets: A Fault-Tolerant Abstraction for In-Memory Cluster Computing Matei Zaharia, Mosharaf Chowdhury, Tathagata Das, Ankur Dave, Justin Ma, Murphy McCauley, Michael J. Franklin, Scott Shenker, Ion Stoica University of California, Berkeley Abstract We present Resilient Distributed Datasets (RDDs), a distributed memory abstraction that lets programmers perform in-memory computations on large clusters in a fault-tolerant manner. RDDs are motivated by two types of applications that current computing frameworks handle inefficiently: iterative algorithms and interactive data mining tools. In both cases, keeping data in memory can improve performance by an order of magnitude. To achieve fault tolerance efficiently, RDDs provide a restricted form of shared memory, based on coarse-grained transformations rather than fine-grained updates to shared state. However, we show that RDDs are expressive enough to capture a wide class of computations, including recent specialized programming models for iterative jobs, such as Pregel, and new applications that these models do not capture. We have implemented RDDs in a system called Spark, which we evaluate through a variety of user applications and benchmarks. 1 Introduction Cluster computing frameworks like MapReduce [10] and Dryad [19] have been widely adopted for large-scale data analytics. These systems let users write parallel computations using a set of high-level operators, without having to worry about work distribution and fault tolerance. Although current frameworks provide numerous abstractions for accessing a cluster’s computational resources, they lack abstractions for leveraging distributed memory. This makes them inefficient for an important class of emerging applications: those that require data reuse. In most current frameworks, the only way to reuse data between computations is to write it to an external stable storage system. This incurs substantial overheads due to data replication, disk I/O, and serialization, which can dominate application execution times. Recognizing this problem, researchers have developed specialized frameworks for some applications that require data reuse. For example, Pregel [22] is a system for iterative graph computations that keeps intermediate data in memory, while HaLoop [7] offers an iterative MapReduce interface.
However, these frameworks only support specific computation patterns (e.g., looping a series of MapReduce steps), and perform data sharing implicitly for these patterns. They do not provide abstractions for more general reuse, e.g., to let a user load several datasets into memory and run ad-hoc queries across them. In this paper, we propose a new abstraction called resilient distributed datasets (RDDs) that enables efficient data reuse in a broad range of applications. RDDs are fault-tolerant, parallel data structures that let users explicitly persist intermediate results in memory, control their partitioning to optimize data placement, and manipulate them using a rich set of operators. The main challenge in designing RDDs is defining a programming interface that can provide fault tolerance efficiently. Existing abstractions for in-memory storage on clusters, such as distributed shared memory [24], key-value stores [25], databases, and Piccolo [27], offer an interface based on fine-grained updates to mutable state (e.g., cells in a table). With this interface, the only ways to provide fault tolerance are to replicate the data across machines or to log updates across machines. Both approaches are expensive for data-intensive workloads, as they require copying large amounts of data over the cluster network, whose bandwidth is far lower than that of RAM, and they incur substantial storage overhead. In contrast to these systems, RDDs provide an interface based on coarse-grained transformations (e.g., map, filter and join) that apply the same operation to many data items. This allows them to efficiently provide fault tolerance by logging the transformations used to build a dataset (its lineage) rather than the actual data.
If a partition of an RDD is lost, the RDD has enough information about how it was derived from other RDDs to recompute just that partition. Thus, lost data can be recovered, often quite quickly, without requiring costly replication. Although an interface based on coarse-grained transformations may at first seem limited, RDDs are a good fit for many parallel applications, because these applications naturally apply the same operation to multiple data items. Indeed, we show that RDDs can efficiently express many cluster programming models that have so far been proposed as separate systems, including MapReduce, DryadLINQ, SQL, Pregel and HaLoop, as well as new applications that these systems do not capture, like interactive data mining. The ability of RDDs to accommodate computing needs that were previously met only by introducing new frameworks is, we believe, the most credible evidence of the power of the RDD abstraction. We have implemented RDDs in a system called Spark, which is being used for research and production applications at UC Berkeley and several companies. Spark provides a convenient language-integrated programming interface similar to DryadLINQ [31] in the Scala programming language [2]. In addition, Spark can be used interactively to query big datasets from the Scala interpreter. We believe that Spark is the first system that allows a general-purpose programming language to be used at interactive speeds for in-memory data mining on clusters. We evaluate RDDs and Spark through both microbenchmarks and measurements of user applications. We show that Spark is up to $20\times$ faster than Hadoop for iterative applications, speeds up a real-world data analytics report by $40\times$, and can be used interactively to scan a 1 TB dataset with 5–7s latency.
More fundamentally, to illustrate the generality of RDDs, we have implemented the Pregel and HaLoop programming models on top of Spark, including the placement optimizations they employ, as relatively small libraries (200 lines of code each). This paper begins with an overview of RDDs (§2) and Spark (§3). We then discuss the internal representation of RDDs (§4), our implementation (§5), and experimental results (§6). Finally, we discuss how RDDs capture several existing cluster programming models (§7), survey related work (§8), and conclude. 2 Resilient Distributed Datasets (RDDs) This section provides an overview of RDDs. We first define RDDs (§2.1) and introduce their programming interface in Spark (§2.2). We then compare RDDs with finer-grained shared memory abstractions (§2.3). Finally, we discuss limitations of the RDD model (§2.4). 2.1 RDD Abstraction Formally, an RDD is a read-only, partitioned collection of records. RDDs can only be created through deterministic operations on either (1) data in stable storage or (2) other RDDs. We call these operations transformations to differentiate them from other operations on RDDs. Examples of transformations include map, filter, and join. RDDs do not need to be materialized at all times. Instead, an RDD has enough information about how it was derived from other datasets (its lineage) to compute its partitions from data in stable storage. This is a powerful property: in essence, a program cannot reference an RDD that it cannot reconstruct after a failure. Finally, users can control two other aspects of RDDs: persistence and partitioning. Users can indicate which RDDs they will reuse and choose a storage strategy for them (e.g., in-memory storage). They can also ask that an RDD’s elements be partitioned across machines based on a key in each record. This is useful for placement optimizations, such as ensuring that two datasets that will be joined together are hash-partitioned in the same way.
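To make lineage-based recovery concrete, here is a minimal sketch in plain Python (not Spark's API; `ToyRDD`, its fields, and the two-partition dataset are invented for illustration) of rebuilding one lost partition from its parent using a recorded transformation:

```python
# Toy model of lineage-based recovery (illustrative only, not Spark's code).
class ToyRDD:
    def __init__(self, partitions, parent=None, fn=None):
        # partitions: list of lists; parent and fn record this RDD's lineage.
        self.partitions = partitions
        self.parent = parent
        self.fn = fn

    def map(self, fn):
        # A coarse-grained transformation: the same function applied to every record.
        return ToyRDD([[fn(x) for x in p] for p in self.partitions], self, fn)

    def recompute(self, i):
        # Recover a lost partition by re-applying the recorded function to the
        # parent's partition, instead of restoring it from a replica.
        self.partitions[i] = [self.fn(x) for x in self.parent.partitions[i]]

base = ToyRDD([[1, 2], [3, 4]])
doubled = base.map(lambda x: 2 * x)
doubled.partitions[1] = None          # simulate losing one partition
doubled.recompute(1)                  # rebuild just that partition
print(doubled.partitions)             # [[2, 4], [6, 8]]
```

Only the lost partition is recomputed; the surviving partition is untouched, mirroring the paper's point that recovery does not require rolling back or re-replicating the whole dataset.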
2.2 Spark Programming Interface Spark exposes RDDs through a language-integrated API similar to DryadLINQ [31] and FlumeJava [8], where each dataset is represented as an object and transformations are invoked using methods on these objects. Programmers start by defining one or more RDDs through transformations on data in stable storage (e.g., map and filter). They can then use these RDDs in actions, which are operations that return a value to the application or export data to a storage system. Examples of actions include count (which returns the number of elements in the dataset), collect (which returns the elements themselves), and save (which outputs the dataset to a storage system). Like DryadLINQ, Spark computes RDDs lazily the first time they are used in an action, so that it can pipeline transformations. In addition, programmers can call a persist method to indicate which RDDs they want to reuse in future operations. Spark keeps persistent RDDs in memory by default, but it can spill them to disk if there is not enough RAM. Users can also request other persistence strategies, such as storing the RDD only on disk or replicating it across machines, through flags to persist. Finally, users can set a persistence priority on each RDD to specify which in-memory data should spill to disk first. 2.2.1 Example: Console Log Mining Suppose that a web service is experiencing errors and an operator wants to search terabytes of logs in the Hadoop filesystem (HDFS) to find the cause. Using Spark, the operator can load just the error messages from the logs into RAM across a set of nodes and query them interactively. She would first type the following Scala code:

```scala
lines = spark.textFile("hdfs://...")
errors = lines.filter(_.startsWith("ERROR"))
errors.persist()
```

Line 1 defines an RDD backed by an HDFS file (as a collection of lines of text), while line 2 derives a filtered RDD from it.
Line 3 then asks for errors to persist in memory so that it can be shared across queries. Note that the argument to filter is Scala syntax for a closure. At this point, no work has been performed on the cluster. However, the user can now use the RDD in actions, e.g., to count the number of messages:

```scala
errors.count()
```

The user can also perform further transformations on the RDD and use their results, as in the following lines:

```scala
// Count errors mentioning MySQL:
errors.filter(_.contains("MySQL")).count()

// Return the time fields of errors mentioning
// HDFS as an array (assuming time is field
// number 3 in a tab-separated format):
errors.filter(_.contains("HDFS"))
      .map(_.split("\t")(3))
      .collect()
```

After the first action involving errors runs, Spark will store the partitions of errors in memory, greatly speeding up subsequent computations on it. Note that the base RDD, lines, is not loaded into RAM. This is desirable because the error messages might only be a small fraction of the data (small enough to fit into memory). Finally, to illustrate how our model achieves fault tolerance, we show the lineage graph for the RDDs in our third query in Figure 1. In this query, we started with errors, the result of a filter on lines, and applied a further filter and map before running a collect. The Spark scheduler will pipeline the latter two transformations and send a set of tasks to compute them to the nodes holding the cached partitions of errors. In addition, if a partition of errors is lost, Spark rebuilds it by applying a filter on only the corresponding partition of lines. 2.3 Advantages of the RDD Model To understand the benefits of RDDs as a distributed memory abstraction, we compare them against distributed shared memory (DSM) in Table 1. In DSM systems, applications read and write to arbitrary locations in a global address space.
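The lazy, pipelined evaluation just described can be sketched in plain Python (illustrative only; `LazyDataset` and its methods are hypothetical stand-ins, not Spark's implementation). Transformations merely record operations; the action streams each record through the whole chain in one pass:

```python
# Toy sketch of lazy, pipelined transformations (not Spark's API).
class LazyDataset:
    def __init__(self, source, ops=()):
        self.source = source      # base records (stands in for the HDFS file)
        self.ops = ops            # recorded transformations, not yet executed

    def filter(self, pred):
        return LazyDataset(self.source, self.ops + (("filter", pred),))

    def map(self, fn):
        return LazyDataset(self.source, self.ops + (("map", fn),))

    def collect(self):
        # The action: pipeline every recorded op over each record in one pass,
        # instead of materializing an intermediate dataset per transformation.
        out = []
        for rec in self.source:
            keep = True
            for kind, f in self.ops:
                if kind == "filter":
                    if not f(rec):
                        keep = False
                        break
                else:
                    rec = f(rec)
            if keep:
                out.append(rec)
        return out

lines = LazyDataset(["ERROR\tdisk\t1", "INFO\tok\t2", "ERROR\tnet\t3"])
errors = lines.filter(lambda l: l.startswith("ERROR"))    # nothing runs yet
times = errors.map(lambda l: l.split("\t")[2]).collect()  # work happens here
print(times)  # ['1', '3']
```

Until `collect` is called, no records are touched, which is what lets a scheduler fuse a filter and a map into a single set of tasks.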
Note that under this definition, we include not only traditional shared memory systems [24], but also other systems where applications make fine-grained writes to shared state, including Piccolo [27], which provides a shared DHT, and distributed databases. DSM is a very general abstraction, but this generality makes it harder to implement in an efficient and fault-tolerant manner on commodity clusters. The main difference between RDDs and DSM is that RDDs can only be created (“written”) through coarse-grained transformations, while DSM allows reads and writes to each memory location. This restricts RDDs to applications that perform bulk writes, but allows for more efficient fault tolerance. In particular, RDDs do not need to incur the overhead of checkpointing, as they can be recovered using lineage. Furthermore, only the lost partitions of an RDD need to be recomputed upon failure, and they can be recomputed in parallel on different nodes, without having to roll back the whole program. A second benefit of RDDs is that their immutable nature lets a system mitigate slow nodes (stragglers) by running backup copies of slow tasks as in MapReduce [10]. Backup tasks would be hard to implement with DSM, as the two copies of a task would access the same memory locations and interfere with each other’s updates. Finally, RDDs provide two other benefits over DSM. First, in bulk operations on RDDs, a runtime can schedule tasks based on data locality to improve performance. Second, RDDs degrade gracefully when there is not enough memory to store them, as long as they are only being used in scan-based operations: partitions that do not fit in RAM can be stored on disk and will provide similar performance to current data-parallel systems.

<table>
<thead>
<tr> <th>Aspect</th> <th>RDDs</th> <th>Dist. Shared Mem.</th> </tr>
</thead>
<tbody>
<tr> <td>Reads</td> <td>Coarse- or fine-grained</td> <td>Fine-grained</td> </tr>
<tr> <td>Writes</td> <td>Coarse-grained</td> <td>Fine-grained</td> </tr>
<tr> <td>Consistency</td> <td>Trivial (immutable)</td> <td>Up to app / runtime</td> </tr>
<tr> <td>Fault recovery</td> <td>Fine-grained and low-overhead using lineage</td> <td>Requires checkpoints and program rollback</td> </tr>
<tr> <td>Straggler mitigation</td> <td>Possible using backup tasks</td> <td>Difficult</td> </tr>
<tr> <td>Work placement</td> <td>Automatic based on data locality</td> <td>Up to app (runtimes aim for transparency)</td> </tr>
<tr> <td>Behavior if not enough RAM</td> <td>Similar to existing data flow systems</td> <td>Poor performance (swapping?)</td> </tr>
</tbody>
</table>

Table 1: Comparison of RDDs with distributed shared memory. Note that reads on RDDs can still be fine-grained. For example, an application can treat an RDD as a large read-only lookup table. In some applications, it can still help to checkpoint RDDs with long lineage chains, as we discuss in Section 5.4. However, this can be done in the background because RDDs are immutable, and there is no need to take a snapshot of the whole application as in DSM. 2.4 Applications Not Suitable for RDDs As discussed in the Introduction, RDDs are best suited for batch applications that apply the same operation to all elements of a dataset. In these cases, RDDs can efficiently remember each transformation as one step in a lineage graph and can recover lost partitions without having to log large amounts of data. RDDs would be less suitable for applications that make asynchronous fine-grained updates to shared state, such as a storage system for a web application or an incremental web crawler. For these applications, it is more efficient to use systems that perform traditional update logging and data checkpointing, such as databases, RAMCloud [25], Percolator [26] and Piccolo [27].
Our goal is to provide an efficient programming model for batch analytics and leave these asynchronous applications to specialized systems. 3 Spark Programming Interface Spark provides the RDD abstraction through a language-integrated API similar to DryadLINQ [31] in Scala [2], a statically typed functional programming language for the Java VM. We chose Scala due to its combination of conciseness (which is convenient for interactive use) and efficiency (due to static typing). However, nothing about the RDD abstraction requires a functional language. To use Spark, developers write a driver program that connects to a cluster of workers, as shown in Figure 2. The driver defines one or more RDDs and invokes actions on them. Spark code on the driver also tracks the RDDs’ lineage. The workers are long-lived processes that can store RDD partitions in RAM across operations. As we showed in the log mining example in Section 2.2.1, users provide arguments to RDD operations like map by passing closures (function literals). Scala represents each closure as a Java object, and these objects can be serialized and loaded on another node to pass the closure across the network. Scala also saves any variables bound in the closure as fields in the Java object. For example, one can write code like var x = 5; rdd.map(_ + x) to add 5 to each element of an RDD. RDDs themselves are statically typed objects parametrized by an element type. For example, RDD[Int] is an RDD of integers. However, most of our examples omit types since Scala supports type inference. Although our method of exposing RDDs in Scala is conceptually simple, we had to work around issues with Scala’s closure objects using reflection [33]. We also needed more work to make Spark usable from the Scala interpreter, as we shall discuss in Section 5.2. Nonetheless, we did not have to modify the Scala compiler. 3.1 RDD Operations in Spark Table 2 lists the main RDD transformations and actions available in Spark.
We give the signature of each operation, showing type parameters in square brackets. Recall that transformations are lazy operations that define a new RDD, while actions launch a computation to return a value to the program or write data to external storage. As an example, the following program implements logistic regression [14], a common classification algorithm that searches for a hyperplane \( w \) that best separates two sets of points (e.g., spam and non-spam emails). (Note that Spark saves each closure at the time it is created, so that the map in the earlier example will always add 5 even if x changes.) The algorithm uses gradient descent: it starts \( w \) at a random value, and on each iteration, it sums a function of \( w \) over the data to move \( w \) in a direction that improves it.

```scala
val points = spark.textFile(...)
                  .map(parsePoint).persist()
var w = // random initial vector
for (i <- 1 to ITERATIONS) {
  val gradient = points.map( p =>
    p.x * (1/(1+exp(-p.y*(w dot p.x)))-1)*p.y
  ).reduce((a,b) => a+b)
  w -= gradient
}
```

We start by defining a persistent RDD called `points` as the result of a `map` transformation on a text file that parses each line of text into a `Point` object. We then repeatedly run `map` and `reduce` on `points` to compute the gradient at each step by summing a function of the current \( w \). Keeping points in memory across iterations can yield a 20× speedup, as we show in Section 6.1. 3.2.2 PageRank A more complex pattern of data sharing occurs in PageRank [6]. The algorithm iteratively updates a rank for each document by adding up contributions from documents that link to it. On each iteration, each document sends a contribution of \( r/n \) to each of its neighbors, where \( r \) is its rank and \( n \) is its number of neighbors. It then updates its rank to \( \alpha/N + (1-\alpha) \sum c_i \), where the sum is over the contributions \( c_i \) it received and \( N \) is the total number of documents.
We can write PageRank in Spark as follows:

```scala
// Load graph as an RDD of (URL, outlinks) pairs
val links = spark.textFile(...).map(...).persist()
var ranks = // RDD of (URL, rank) pairs
for (i <- 1 to ITERATIONS) {
  // Build an RDD of (targetURL, float) pairs
  // with the contributions sent by each page
  val contribs = links.join(ranks).flatMap {
    (url, (links, rank)) =>
      links.map(dest => (dest, rank/links.size))
  }
  // Sum contributions by URL and get new ranks
  ranks = contribs.reduceByKey((x,y) => x+y)
                  .mapValues(sum => a/N + (1-a)*sum)
}
```

This program leads to the RDD lineage graph in Figure 3. On each iteration, we create a new `ranks` dataset based on the `contribs` and `ranks` from the previous iteration and the static `links` dataset. One interesting feature of this graph is that it grows longer with the number of iterations. (Note that although RDDs are immutable, the variables `ranks` and `contribs` in the program point to different RDDs on each iteration.) Thus, in a job with many iterations, it may be necessary to reliably replicate some of the versions of ranks to reduce fault recovery times [20]. The user can call persist with a RELIABLE flag to do this. However, note that the `links` dataset does not need to be replicated, because partitions of it can be rebuilt efficiently by rerunning a map on blocks of the input file. This dataset will typically be much larger than ranks, because each document has many links but only one number as its rank, so recovering it using lineage saves time over systems that checkpoint a program’s entire in-memory state. Finally, we can optimize communication in PageRank by controlling the partitioning of the RDDs. If we specify a partitioning for `links` (e.g., hash-partition the link lists by URL across nodes), we can partition ranks in the same way and ensure that the `join` operation between `links` and ranks requires no communication (as each URL’s rank will be on the same machine as its link list).
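As a sanity check on the update rule, the same computation can be written in plain Python over a hypothetical three-page graph (no Spark involved; the graph and the choice a = 0.15 are made-up inputs for illustration):

```python
# Plain-Python PageRank following the rule in the text: each page sends
# rank/len(outlinks) to its neighbors, then
#   new_rank = a/N + (1 - a) * sum(received contributions).
links = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}  # toy (URL, outlinks) pairs
N = len(links)
a = 0.15
ranks = {url: 1.0 / N for url in links}

for _ in range(20):
    contribs = {url: 0.0 for url in links}
    for url, outs in links.items():
        for dest in outs:
            contribs[dest] += ranks[url] / len(outs)
    ranks = {url: a / N + (1 - a) * contribs[url] for url in links}

# Because every page here has outlinks, the total rank stays normalized at 1.
print(ranks)
```

One useful property to observe: each iteration redistributes all rank and then mixes in a/N per page, so the ranks remain a probability distribution.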
We can also write a custom Partitioner class to group pages that link to each other together (e.g., partition the URLs by domain name). Both optimizations can be expressed by calling `partitionBy` when we define `links`:

```scala
links = spark.textFile(...).map(...)
             .partitionBy(myPartFunc).persist()
```

After this initial call, the `join` operation between `links` and ranks will automatically aggregate the contributions for each URL to the machine that its link list is on, calculate its new rank there, and join it with its links. This type of consistent partitioning across iterations is one of the main optimizations in specialized frameworks like Pregel. RDDs let the user express this goal directly. 4 Representing RDDs One of the challenges in providing RDDs as an abstraction is choosing a representation for them that can track lineage across a wide range of transformations. Ideally, a system implementing RDDs should provide as rich a set of transformation operators as possible (e.g., the ones in Table 2), and let users compose them in arbitrary ways. We propose a simple graph-based representation for RDDs that facilitates these goals. We have used this representation in Spark to support a wide range of transformations without adding special logic to the scheduler for each one, which greatly simplified the system design. In a nutshell, we propose representing each RDD through a common interface that exposes five pieces of information: a set of `partitions`, which are atomic pieces of the dataset; a set of `dependencies` on parent RDDs; a function for computing the dataset based on its parents; and metadata about its partitioning scheme and data placement. For example, an RDD representing an HDFS file has a partition for each block of the file and knows which machines each block is on. Meanwhile, the result of a `map` on this RDD has the same partitions, but applies the map function to the parent’s data when computing its elements.
We summarize this interface in Table 3.

<table> <thead> <tr> <th>Operation</th> <th>Meaning</th> </tr> </thead> <tbody> <tr> <td>partitions()</td> <td>Return a list of Partition objects</td> </tr> <tr> <td>preferredLocations(p)</td> <td>List nodes where partition p can be accessed faster due to data locality</td> </tr> <tr> <td>dependencies()</td> <td>Return a list of dependencies</td> </tr> <tr> <td>iterator(p, parentIterators)</td> <td>Compute the elements of partition p given iterators for its parent partitions</td> </tr> <tr> <td>partitioner()</td> <td>Return metadata specifying whether the RDD is hash/range partitioned</td> </tr> </tbody> </table>

Table 3: Interface used to represent RDDs in Spark.

The most interesting question in designing this interface is how to represent dependencies between RDDs. We found it both sufficient and useful to classify dependencies into two types: narrow dependencies, where each partition of the parent RDD is used by at most one partition of the child RDD, and wide dependencies, where multiple child partitions may depend on a single partition of the parent. For example, `map` leads to a narrow dependency, while `join` leads to wide dependencies (unless the parents are hash-partitioned). Figure 4 shows other examples.

This distinction is useful for two reasons. First, narrow dependencies allow for pipelined execution on one cluster node, which can compute all the parent partitions. For example, one can apply a `map` followed by a `filter` on an element-by-element basis. In contrast, wide dependencies require data from all parent partitions to be available and to be shuffled across the nodes using a MapReduce-like operation. Second, recovery after a node failure is more efficient with a narrow dependency, as only the lost parent partitions need to be recomputed, and they can be recomputed in parallel on different nodes.
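The narrow/wide definition, and the pipelining that narrow dependencies enable, can be made concrete with a small Python sketch (illustrative only; the helper names are invented):

```python
# A dependency is narrow when each parent partition feeds at most one
# child partition.
def is_narrow(children_of_parent):
    """children_of_parent maps each parent partition index to the set of
    child partitions that read it."""
    return all(len(children) <= 1 for children in children_of_parent.values())

# map: parent partition i feeds only child partition i -> narrow
assert is_narrow({0: {0}, 1: {1}, 2: {2}})
# unpartitioned join: every parent partition may feed every child -> wide
assert not is_narrow({0: {0, 1}, 1: {0, 1}})

# Narrow dependencies can be pipelined on one node: apply map then filter
# to a partition element-by-element, never materializing the intermediate.
def pipelined(partition, f, pred):
    return [y for y in (f(x) for x in partition) if pred(y)]

assert pipelined([1, 2, 3, 4], lambda x: x * 10, lambda y: y > 15) == [20, 30, 40]
```

A wide dependency offers no such per-partition shortcut: every parent partition must be materialized and shuffled before any child partition can be produced.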
In contrast, in a lineage graph with wide dependencies, a single failed node might cause the loss of some partition from all the ancestors of an RDD, requiring a complete re-execution.

This common interface for RDDs made it possible to implement most transformations in Spark in less than 20 lines of code. Indeed, even new Spark users have implemented new transformations (e.g., sampling and various types of joins) without knowing the details of the scheduler. We sketch some RDD implementations below.

HDFS files: The input RDDs in our samples have been files in HDFS. For these RDDs, `partitions` returns one partition for each block of the file (with the block's offset stored in each Partition object), `preferredLocations` gives the nodes the block is on, and `iterator` reads the block.

`map`: Calling `map` on any RDD returns a MappedRDD object. This object has the same partitions and preferred locations as its parent, but applies the function passed to `map` to the parent's records in its `iterator` method.

`union`: Calling `union` on two RDDs returns an RDD whose partitions are the union of those of the parents. Each child partition is computed through a narrow dependency on the corresponding parent.

`sample`: Sampling is similar to mapping, except that the RDD stores a random number generator seed for each partition to deterministically sample parent records.

`join`: Joining two RDDs may lead to either two narrow dependencies (if they are both hash/range partitioned with the same partitioner), two wide dependencies, or a mix (if one parent has a partitioner and one does not). In each case, the output RDD has a partitioner (either one inherited from the parents or a default hash partitioner).

5 Implementation

We have implemented Spark in about 14,000 lines of Scala. The system runs over the Mesos cluster manager [17], allowing it to share resources with Hadoop, MPI and other applications.
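The per-partition seed trick behind `sample` above deserves a concrete illustration, since it is what makes a lost sampled partition recoverable through lineage rather than through replication. A minimal Python sketch (illustrative, not Spark's implementation):

```python
import random

def sample_partition(parent_records, seed, fraction):
    """Sample a partition with a stored per-partition seed: re-running this
    function on the same parent data always yields the same sample."""
    rng = random.Random(seed)   # seeded generator => deterministic draws
    return [r for r in parent_records if rng.random() < fraction]

parent = list(range(100))
seed = 42                       # the seed the RDD would store for this split

first = sample_partition(parent, seed, 0.3)
recovered = sample_partition(parent, seed, 0.3)  # recomputed after a "failure"
assert first == recovered       # recomputation yields the exact same sample
```

Without the stored seed, recomputing a lost partition would produce a different sample and silently change the job's results.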
Each Spark program runs as a separate Mesos application, with its own driver (master) and workers, and resource sharing between these applications is handled by Mesos. Spark can read data from any Hadoop input source (e.g., HDFS or HBase) using Hadoop's existing input plugin APIs, and runs on an unmodified version of Scala.

We now sketch several of the technically interesting parts of the system: our job scheduler (§5.1), our Spark interpreter allowing interactive use (§5.2), memory management (§5.3), and support for checkpointing (§5.4).

5.1 Job Scheduling

Spark's scheduler uses our representation of RDDs, described in Section 4. Overall, our scheduler is similar to Dryad's [19], but it additionally takes into account which partitions of persistent RDDs are available in memory. Whenever a user runs an action (e.g., `count` or `save`) on an RDD, the scheduler examines that RDD's lineage graph to build a DAG of stages to execute, as illustrated in Figure 5. Each stage contains as many pipelined transformations with narrow dependencies as possible. The boundaries of the stages are the shuffle operations required for wide dependencies, or any already computed partitions that can short-circuit the computation of a parent RDD. The scheduler then launches tasks to compute missing partitions from each stage until it has computed the target RDD.

Our scheduler assigns tasks to machines based on data locality using delay scheduling [32]. If a task needs to process a partition that is available in memory on a node, we send it to that node. Otherwise, if a task processes a partition for which the containing RDD provides preferred locations (e.g., an HDFS file), we send it to those.

For wide dependencies (i.e., shuffle dependencies), we currently materialize intermediate records on the nodes holding parent partitions to simplify fault recovery, much like MapReduce materializes map outputs.
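The stage-construction rule described above — keep narrow-dependency chains in one stage, cut a new stage at every wide (shuffle) dependency — can be sketched as a small recursive walk over a lineage graph (illustrative Python; the graph encoding is invented for this example):

```python
def build_stages(target, deps):
    """deps maps each RDD name to a list of (parent, kind) pairs, with
    kind in {"narrow", "wide"}. Returns stages in execution order; each
    stage is the list of RDDs it pipelines together."""
    stage = [target]
    parent_stages = []
    for parent, kind in deps.get(target, []):
        if kind == "narrow":
            sub = build_stages(parent, deps)
            stage += sub[-1]          # merge the parent's stage into ours
            parent_stages = sub[:-1] + parent_stages
        else:                         # wide dep: parent stage runs first
            parent_stages = build_stages(parent, deps) + parent_stages
    return parent_stages + [stage]

# A PageRank-like lineage:
#   ranks <-narrow- contribs <-wide- joined <-narrow- links
deps = {"ranks":    [("contribs", "narrow")],
        "contribs": [("joined", "wide")],
        "joined":   [("links", "narrow")]}

stages = build_stages("ranks", deps)
assert stages == [["joined", "links"], ["ranks", "contribs"]]
```

The shuffle between `joined` and `contribs` becomes the stage boundary; everything on either side of it is pipelined. (Spark's real scheduler additionally short-circuits stages whose output partitions are already cached.)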
If a task fails, we re-run it on another node as long as its stage's parents are still available. If some stages have become unavailable (e.g., because an output from the "map side" of a shuffle was lost), we resubmit tasks to compute the missing partitions in parallel. We do not yet tolerate scheduler failures, though replicating the RDD lineage graph would be straightforward.

Finally, although all computations in Spark currently run in response to actions called in the driver program, we are also experimenting with letting tasks on the cluster (e.g., maps) call the `lookup` operation, which provides random access to elements of hash-partitioned RDDs by key. In this case, tasks would need to tell the scheduler to compute the required partition if it is missing.

5.2 Interpreter Integration

Scala includes an interactive shell similar to those of Ruby and Python. Given the low latencies attained with in-memory data, we wanted to let users run Spark interactively from the interpreter to query big datasets.

The Scala interpreter normally operates by compiling a class for each line typed by the user, loading it into the JVM, and invoking a function on it. This class includes a singleton object that contains the variables or functions on that line and runs the line's code in an initialize method. For example, if the user types `var x = 5` followed by `println(x)`, the interpreter defines a class called `Line1` containing `x` and causes the second line to compile to `println(Line1.getInstance().x)`.

We made two changes to the interpreter in Spark:

1. Class shipping: To let the worker nodes fetch the bytecode for the classes created on each line, we made the interpreter serve these classes over HTTP.

2. Modified code generation: Normally, the singleton object created for each line of code is accessed through a static method on its corresponding class.
This means that when we serialize a closure referencing a variable defined on a previous line, such as `Line1.x` in the example above, Java will not trace through the object graph to ship the `Line1` instance wrapping around `x`. Therefore, the worker nodes will not receive `x`. We modified the code generation logic to reference the instance of each line object directly. Figure 6 shows how the interpreter translates a set of lines typed by the user into Java objects.

5.3 Memory Management

Spark provides three options for storage of persistent RDDs: in-memory storage as deserialized Java objects, in-memory storage as serialized data, and on-disk storage. The first option provides the fastest performance, because the Java VM can access each RDD element natively. The second option lets users choose a more memory-efficient representation than Java object graphs when space is limited, at the cost of lower performance. The third option is useful for RDDs that are too large to keep in RAM but costly to recompute on each use.

To manage the limited memory available, we use an LRU eviction policy at the level of RDDs. When a new RDD partition is computed but there is not enough space to store it, we evict a partition from the least recently accessed RDD, unless this is the same RDD as the one with the new partition. In that case, we keep the old partition in memory to prevent cycling partitions from the same RDD in and out. This is important because most operations will run tasks over an entire RDD, so it is quite likely that the partition already in memory will be needed in the future. We found this default policy to work well in all our applications so far, but we also give users further control via a "persistence priority" for each RDD.

Finally, each instance of Spark on a cluster currently has its own separate memory space. In future work, we plan to investigate sharing RDDs across instances of Spark through a unified memory manager.
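The eviction rule above — LRU at the granularity of whole RDDs, with an exception for the RDD currently being written — can be sketched as follows (illustrative Python, not Spark's memory manager):

```python
class RDDCache:
    """Toy partition cache: evict from the least recently accessed RDD,
    but never from the RDD whose partition we are currently inserting."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.parts = {}        # (rdd_id, partition_index) -> data
        self.lru = []          # rdd ids, least recently accessed first

    def touch(self, rdd):
        if rdd in self.lru:
            self.lru.remove(rdd)
        self.lru.append(rdd)   # most recently accessed at the end

    def put(self, rdd, part, data):
        self.touch(rdd)
        if len(self.parts) >= self.capacity:
            for victim_rdd in self.lru:
                if victim_rdd == rdd:
                    continue   # the same-RDD exception: skip ourselves
                victim = next((k for k in self.parts if k[0] == victim_rdd), None)
                if victim is not None:
                    del self.parts[victim]
                    break
            else:
                return         # only this RDD is cached: store nothing new
        self.parts[(rdd, part)] = data

cache = RDDCache(capacity=2)
cache.put("A", 0, "partA0")
cache.put("A", 1, "partA1")
cache.put("B", 0, "partB0")    # evicts a partition of A, never of B
assert ("B", 0) in cache.parts
assert sum(1 for k in cache.parts if k[0] == "A") == 1
```

The exception matters: without it, computing partition *i* of a large RDD could evict partition *i−1* of the same RDD, which the same job is about to read again.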
5.4 Support for Checkpointing

Although lineage can always be used to recover RDDs after a failure, such recovery may be time-consuming for RDDs with long lineage chains. Thus, it can be helpful to checkpoint some RDDs to stable storage.

In general, checkpointing is useful for RDDs with long lineage graphs containing wide dependencies, such as the rank datasets in our PageRank example (§3.2.2). In these cases, a node failure in the cluster may result in the loss of some slice of data from each parent RDD, requiring a full recomputation [20]. In contrast, for RDDs with narrow dependencies on data in stable storage, such as the points in our logistic regression example (§3.2.1) and the edge lists in PageRank, checkpointing may never be worthwhile. If a node fails, lost partitions from these RDDs can be recomputed in parallel on other nodes, at a fraction of the cost of replicating the whole RDD.

Spark currently provides an API for checkpointing (a `REPLICATE` flag to `persist`), but leaves the decision of which data to checkpoint to the user. However, we are also investigating how to perform automatic checkpointing. Because our scheduler knows the size of each dataset as well as the time it took to first compute it, it should be able to select an optimal set of RDDs to checkpoint to minimize system recovery time [30].

Finally, note that the read-only nature of RDDs makes them simpler to checkpoint than general shared memory. Because consistency is not a concern, RDDs can be written out in the background without requiring program pauses or distributed snapshot schemes.

---

*The cost depends on how much computation the application does per byte of data, but can be up to 2× for lightweight processing.*

6 Evaluation

We evaluated Spark and RDDs through a series of experiments on Amazon EC2, as well as benchmarks of user applications. Overall, our results show the following:

- Spark outperforms Hadoop by up to 20× in iterative machine learning and graph applications.
The speedup comes from avoiding I/O and deserialization costs by storing data in memory as Java objects.

- Applications written by our users perform and scale well. In particular, we used Spark to speed up an analytics report that was running on Hadoop by 40×.
- When nodes fail, Spark can recover quickly by rebuilding only the lost RDD partitions.
- Spark can be used to query a 1 TB dataset interactively with latencies of 5–7 seconds.

We start by presenting benchmarks for iterative machine learning applications (§6.1) and PageRank (§6.2) against Hadoop. We then evaluate fault recovery in Spark (§6.3) and behavior when a dataset does not fit in memory (§6.4). Finally, we discuss results for user applications (§6.5) and interactive data mining (§6.6).

Unless otherwise noted, our tests used m1.xlarge EC2 nodes with 4 cores and 15 GB of RAM. We used HDFS for storage, with 256 MB blocks. Before each test, we cleared OS buffer caches to measure I/O costs accurately.

6.1 Iterative Machine Learning Applications

We implemented two iterative machine learning applications, logistic regression and k-means, to compare the performance of the following systems:

- **Hadoop**: The Hadoop 0.20.2 stable release.
- **HadoopBinMem**: A Hadoop deployment that converts the input data into a low-overhead binary format in the first iteration to eliminate text parsing in later ones, and stores it in an in-memory HDFS instance.
- **Spark**: Our implementation of RDDs.

We ran both algorithms for 10 iterations on 100 GB datasets using 25–100 machines. The key difference between the two applications is the amount of computation they perform per byte of data. The iteration time of k-means is dominated by computation, while logistic regression is less compute-intensive and thus more sensitive to time spent in deserialization and I/O. Since typical learning algorithms need tens of iterations to converge, we report times for the first iteration and subsequent iterations separately.
We find that sharing data via RDDs greatly speeds up future iterations.

![Figure 7: Duration of the first and later iterations in Hadoop, HadoopBinMem and Spark for logistic regression and k-means using 100 GB of data on a 100-node cluster.](image)

**First Iterations** All three systems read text input from HDFS in their first iterations. As shown in the light bars in Figure 7, Spark was moderately faster than Hadoop across experiments. This difference was due to signaling overheads in Hadoop's heartbeat protocol between its master and workers. HadoopBinMem was the slowest because it ran an extra MapReduce job to convert the data to binary, and it had to write this data across the network to a replicated in-memory HDFS instance.

**Subsequent Iterations** Figure 7 also shows the average running times for subsequent iterations, while Figure 8 shows how these scaled with cluster size. For logistic regression, Spark was 25.3× and 20.7× faster than Hadoop and HadoopBinMem respectively on 100 machines. For the more compute-intensive k-means application, Spark still achieved a speedup of 1.9× to 3.2×.

**Understanding the Speedup** We were surprised to find that Spark outperformed even Hadoop with in-memory storage of binary data (HadoopBinMem) by a 20× margin. In HadoopBinMem, we had used Hadoop's standard binary format (SequenceFile) and a large block size of 256 MB, and we had forced HDFS's data directory to be on an in-memory file system. However, Hadoop still ran slower due to several factors:

1. Minimum overhead of the Hadoop software stack,
2. Overhead of HDFS while serving data, and
3. Deserialization cost to convert binary records to usable in-memory Java objects.

![Figure 8: Running times for iterations after the first in Hadoop, HadoopBinMem, and Spark. The jobs all processed 100 GB.](image)

We investigated each of these factors in turn.
To measure (1), we ran no-op Hadoop jobs, and saw that these incurred at least 25s of overhead to complete the minimal requirements of job setup, starting tasks, and cleaning up. Regarding (2), we found that HDFS performed multiple memory copies and a checksum to serve each block.

Finally, to measure (3), we ran microbenchmarks on a single machine to run the logistic regression computation on 256 MB inputs in various formats. In particular, we compared the time to process text and binary inputs from both HDFS (where overheads in the HDFS stack will manifest) and an in-memory local file (where the kernel can very efficiently pass data to the program). We show the results of these tests in Figure 9. The differences between in-memory HDFS and local file show that reading through HDFS introduced a 2-second overhead, even when data was in memory on the local machine. The differences between the text and binary input indicate the parsing overhead was 7 seconds. Finally, even when reading from an in-memory file, converting the pre-parsed binary data into Java objects took 3 seconds, which is still almost as expensive as the logistic regression itself. By storing RDD elements directly as Java objects in memory, Spark avoids all these overheads.

6.2 PageRank

We compared the performance of Spark with Hadoop for PageRank using a 54 GB Wikipedia dump. We ran 10 iterations of the PageRank algorithm to process a link graph of approximately 4 million articles. Figure 10 demonstrates that in-memory storage alone provided Spark with a 2.4× speedup over Hadoop on 30 nodes. In addition, controlling the partitioning of the RDDs to make it consistent across iterations, as discussed in Section 3.2.2, improved the speedup to 7.4×. The results also scaled nearly linearly to 60 nodes.

We also evaluated a version of PageRank written using our implementation of Pregel over Spark, which we describe in Section 7.1.
The iteration times were similar to the ones in Figure 10, but longer by about 4 seconds because Pregel runs an extra operation on each iteration to let the vertices "vote" whether to finish the job.

6.3 Fault Recovery

We evaluated the cost of reconstructing RDD partitions using lineage after a node failure in the k-means application. Figure 11 compares the running times for 10 iterations of k-means on a 75-node cluster under normal operating conditions with those in a scenario where a node fails at the start of the 6th iteration. Without any failure, each iteration consisted of 400 tasks working on 100 GB of data.

Until the end of the 5th iteration, the iteration times were about 58 seconds. In the 6th iteration, one of the machines was killed, resulting in the loss of the tasks running on that machine and the RDD partitions stored there. Spark re-ran these tasks in parallel on other machines, where they re-read corresponding input data and reconstructed RDDs via lineage, which increased the iteration time to 80s. Once the lost RDD partitions were reconstructed, the iteration time went back down to 58s.

Note that with a checkpoint-based fault recovery mechanism, recovery would likely require rerunning at least several iterations, depending on the frequency of checkpoints. Furthermore, the system would need to replicate the application's 100 GB working set (the text input data converted into binary) across the network, and would either consume twice the memory of Spark to replicate it in RAM, or would have to wait to write 100 GB to disk. In contrast, the lineage graphs for the RDDs in our examples were all less than 10 KB in size.

6.4 Behavior with Insufficient Memory

So far, we ensured that every machine in the cluster had enough memory to store all the RDDs across iterations. A natural question is how Spark runs if there is not enough memory to store a job's data.
In this experiment, we configured Spark not to use more than a certain percentage of memory to store RDDs on each machine. We present results for various amounts of storage space for logistic regression in Figure 12. We see that performance degrades gracefully with less space.

6.5 User Applications Built with Spark

**In-Memory Analytics** Conviva Inc., a video distribution company, used Spark to accelerate a number of data analytics reports that previously ran over Hadoop. For example, one report ran as a series of Hive [1] queries that computed various statistics for a customer. These queries all worked on the same subset of the data (records matching a customer-provided filter), but performed aggregations (averages, percentiles, and COUNT DISTINCT) over different grouping fields, requiring separate MapReduce jobs. By implementing the queries in Spark and loading the subset of data shared across them once into an RDD, the company was able to speed up the report by 40×. A report on 200 GB of compressed data that took 20 hours on a Hadoop cluster now runs in 30 minutes using only two Spark machines. Furthermore, the Spark program only required 96 GB of RAM, because it only stored the rows and columns matching the customer's filter in an RDD, not the whole decompressed file.

**Traffic Modeling** Researchers in the Mobile Millennium project at Berkeley [18] parallelized a learning algorithm for inferring road traffic congestion from sporadic automobile GPS measurements. The source data were a 10,000-link road network for a metropolitan area, as well as 600,000 samples of point-to-point trip times for GPS-equipped automobiles (travel times for each path may include multiple road links). Using a traffic model, the system can estimate the time it takes to travel across individual road links. The researchers trained this model using an expectation maximization (EM) algorithm that repeats two map and reduceByKey steps iteratively.
The application scales nearly linearly from 20 to 80 nodes with 4 cores each, as shown in Figure 13(a).

**Twitter Spam Classification** The Monarch project at Berkeley [29] used Spark to identify link spam in Twitter messages. They implemented a logistic regression classifier on top of Spark similar to the example in Section 6.1, but they used a distributed reduceByKey to sum the gradient vectors in parallel. In Figure 13(b) we show the scaling results for training a classifier over a 50 GB subset of the data: 250,000 URLs and 10^7 features/dimensions related to the network and content properties of the pages at each URL. The scaling is not as close to linear due to a higher fixed communication cost per iteration.

6.6 Interactive Data Mining

To demonstrate Spark's ability to interactively query big datasets, we used it to analyze 1 TB of Wikipedia page view logs (2 years of data). For this experiment, we used 100 m2.4xlarge EC2 instances with 8 cores and 68 GB of RAM each. We ran queries to find total views of (1) all pages, (2) pages with titles exactly matching a given word, and (3) pages with titles partially matching a word. Each query scanned the entire input data.

Figure 14 shows the response times of the queries on the full dataset and half and one-tenth of the data. Even at 1 TB of data, queries on Spark took 5–7 seconds. This was more than an order of magnitude faster than working with on-disk data; for example, querying the 1 TB file from disk took 170s. This illustrates that RDDs make Spark a powerful tool for interactive data mining.

7 Discussion

Although RDDs seem to offer a limited programming interface due to their immutable nature and coarse-grained transformations, we have found them suitable for a wide class of applications.
In particular, RDDs can express a surprising number of cluster programming models that have so far been proposed as separate frameworks, allowing users to compose these models in one program (e.g., run a MapReduce operation to build a graph, then run Pregel on it) and share data between them. In this section, we discuss which programming models RDDs can express and why they are so widely applicable (§7.1). In addition, we discuss another benefit of the lineage information in RDDs that we are pursuing, which is to facilitate debugging across these models (§7.2).

7.1 Expressing Existing Programming Models

RDDs can efficiently express a number of cluster programming models that have so far been proposed independently. By "efficiently," we mean that not only can RDDs be used to produce the same output as programs written in these models, but that RDDs can also capture the optimizations that these frameworks perform, such as keeping specific data in memory, partitioning it to minimize communication, and recovering from failures efficiently. The models expressible using RDDs include:

**MapReduce:** This model can be expressed using the flatMap and groupByKey operations in Spark, or reduceByKey if there is a combiner.

**DryadLINQ:** The DryadLINQ system provides a wider range of operators than MapReduce over the more general Dryad runtime, but these are all bulk operators that correspond directly to RDD transformations available in Spark (map, groupByKey, join, etc).

**SQL:** Like DryadLINQ expressions, SQL queries perform data-parallel operations on sets of records.

**Pregel:** Google's Pregel [22] is a specialized model for iterative graph applications that at first looks quite different from the set-oriented programming models in other systems.
In Pregel, a program runs as a series of coordinated "supersteps." On each superstep, each vertex in the graph runs a user function that can update state associated with the vertex, change the graph topology, and send messages to other vertices for use in the next superstep. This model can express many graph algorithms, including shortest paths, bipartite matching, and PageRank.

The key observation that lets us implement this model with RDDs is that Pregel applies the same user function to all the vertices on each iteration. Thus, we can store the vertex states for each iteration in an RDD and perform a bulk transformation (flatMap) to apply this function and generate an RDD of messages. We can then join this RDD with the vertex states to perform the message exchange. Equally importantly, RDDs allow us to keep vertex states in memory like Pregel does, to minimize communication by controlling their partitioning, and to support partial recovery on failures. We have implemented Pregel as a 200-line library on top of Spark and refer the reader to [33] for more details.

**Iterative MapReduce:** Several recently proposed systems, including HaLoop [7] and Twister [11], provide an iterative MapReduce model where the user gives the system a series of MapReduce jobs to loop. The systems keep data partitioned consistently across iterations, and Twister can also keep it in memory. Both optimizations are simple to express with RDDs, and we were able to implement HaLoop as a 200-line library using Spark.

**Batched Stream Processing:** Researchers have recently proposed several incremental processing systems for applications that periodically update a result with new data [21, 15, 4]. For example, an application updating statistics about ad clicks every 15 minutes should be able to combine intermediate state from the previous 15-minute window with data from new logs. These systems perform bulk operations similar to Dryad, but store application state in distributed filesystems.
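The Pregel-on-RDDs observation above — a superstep is just a bulk transformation over vertex states followed by a join to deliver messages — can be illustrated with a self-contained Python sketch of single-source shortest paths (a toy stand-in for the 200-line library; the graph and names are invented):

```python
INF = float("inf")

# Tiny directed graph with edge weights, and distances from source "a".
edges = {"a": [("b", 1), ("c", 4)], "b": [("c", 1)], "c": []}
states = {"a": 0, "b": INF, "c": INF}

def superstep(states):
    # "flatMap" step: every vertex emits a message to each neighbor.
    messages = {}
    for v, dist in states.items():
        for dest, w in edges[v]:
            messages[dest] = min(messages.get(dest, INF), dist + w)
    # "join" step: combine each vertex's state with its incoming messages.
    return {v: min(dist, messages.get(v, INF)) for v, dist in states.items()}

for _ in range(len(states)):   # enough supersteps to converge on this graph
    states = superstep(states)
assert states == {"a": 0, "b": 1, "c": 2}
```

Each pass rebuilds the whole `states` mapping, mirroring how the RDD-based Pregel creates a new vertex-state RDD per superstep rather than mutating the old one.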
Placing the intermediate state in RDDs would speed up their processing.

**Explaining the Expressivity of RDDs** Why are RDDs able to express these diverse programming models? The reason is that the restrictions on RDDs have little impact in many parallel applications. In particular, although RDDs can only be created through bulk transformations, many parallel programs naturally apply the same operation to many records, making them easy to express. Similarly, the immutability of RDDs is not an obstacle because one can create multiple RDDs to represent versions of the same dataset. Indeed, many of today's MapReduce applications run over filesystems that do not allow updates to files, such as HDFS.

One final question is why previous frameworks have not offered the same level of generality. We believe that this is because these systems explored specific problems that MapReduce and Dryad do not handle well, such as iteration, without observing that the common cause of these problems was a lack of data sharing abstractions.

7.2 Leveraging RDDs for Debugging

While we initially designed RDDs to be deterministically recomputable for fault tolerance, this property also facilitates debugging. In particular, by logging the lineage of RDDs created during a job, one can (1) reconstruct these RDDs later and let the user query them interactively and (2) re-run any task from the job in a single-process debugger, by recomputing the RDD partitions it depends on. Unlike traditional replay debuggers for general distributed systems, which must capture or infer the order of events across multiple nodes, this approach adds virtually zero recording overhead because only the RDD lineage graph needs to be logged.

8 Related Work

**Cluster Programming Models:** Related work in cluster programming models falls into several classes. First, data flow models such as MapReduce [10], Dryad [19] and Ciel [23] support a rich set of operators for processing data but share it through stable storage systems. RDDs represent a more efficient data sharing abstraction than stable storage because they avoid the cost of data replication, I/O and serialization.
Second, several high-level programming interfaces for data flow systems, including DryadLINQ [31] and FlumeJava [8], provide language-integrated APIs where the user manipulates "parallel collections" through operators like map and join. However, in these systems, the parallel collections represent either files on disk or ephemeral datasets used to express a query plan. Although the systems will pipeline data across operators in the same query (e.g., a map followed by another map), they cannot share data efficiently across queries. We based Spark's API on the parallel collection model due to its convenience, and do not claim novelty for the language-integrated interface, but by providing RDDs as the storage abstraction behind this interface, we allow it to support a far broader class of applications.

A third class of systems provide high-level interfaces for specific classes of applications requiring data sharing. For example, Pregel [22] supports iterative graph applications, while Twister [11] and HaLoop [7] are iterative MapReduce runtimes. However, these frameworks perform data sharing implicitly for the pattern of computation they support, and do not provide a general abstraction that the user can employ to share data of her choice among operations of her choice. For example, a user cannot use Pregel or Twister to load a dataset into memory and then decide what query to run on it. RDDs provide a distributed storage abstraction explicitly and can thus support applications that these specialized systems do not capture, such as interactive data mining.

Finally, some systems expose shared mutable state to allow the user to perform in-memory computation. For example, Piccolo [27] lets users run parallel functions that read and update cells in a distributed hash table. Distributed shared memory (DSM) systems [24] and key-value stores like RAMCloud [25] offer a similar model. RDDs differ from these systems in two ways.
First, RDDs provide a higher-level programming interface based on operators such as map, sort and join, whereas the interface in Piccolo and DSM is just reads and updates to table cells. Second, Piccolo and DSM systems implement recovery through checkpoints and rollback, which is more expensive than the lineage-based strategy of RDDs in many applications. Finally, as discussed in Section 2.3, RDDs also provide other advantages over DSM, such as straggler mitigation.

**Caching Systems:** Nectar [12] can reuse intermediate results across DryadLINQ jobs by identifying common subexpressions with program analysis [16]. This capability would be compelling to add to an RDD-based system. However, Nectar does not provide in-memory caching (it places the data in a distributed file system), nor does it let users explicitly control which datasets to persist and how to partition them. Ciel [23] and FlumeJava [8] can likewise cache task results but do not provide in-memory caching or explicit control over which data is cached.

Ananthanarayanan et al. have proposed adding an in-memory cache to distributed file systems to exploit the temporal and spatial locality of data access [3]. While this solution provides faster access to data that is already in the file system, it is not as efficient a means of sharing intermediate results within an application as RDDs, because it would still require applications to write these results to the file system between stages.

**Lineage:** Capturing lineage or provenance information for data has long been a research topic in scientific computing and databases, for applications such as explaining results, allowing them to be reproduced by others, and recomputing data if a bug is found in a workflow or if a dataset is lost. We refer the reader to [5] and [9] for surveys of this work. RDDs provide a parallel programming model where fine-grained lineage is inexpensive to capture, so that it can be used for failure recovery.
Our lineage-based recovery mechanism is also similar to the recovery mechanism used within a computation (job) in MapReduce and Dryad, which track dependencies among a DAG of tasks. However, in these systems, the lineage information is lost after a job ends, requiring the use of a replicated storage system to share data across computations. In contrast, RDDs apply lineage to persist in-memory data efficiently across computations, without the cost of replication and disk I/O.

Relational Databases: RDDs are conceptually similar to views in a database, and persistent RDDs resemble materialized views [28]. However, like DSM systems, databases typically allow fine-grained read-write access to all records, requiring logging of operations and data for fault tolerance and additional overhead to maintain consistency. These overheads are not required with the coarse-grained transformation model of RDDs.

9 Conclusion

We have presented resilient distributed datasets (RDDs), an efficient, general-purpose and fault-tolerant abstraction for sharing data in cluster applications. RDDs can express a wide range of parallel applications, including many specialized programming models that have been proposed for iterative computation, and new applications that these models do not capture. Unlike existing storage abstractions for clusters, which require data replication for fault tolerance, RDDs offer an API based on coarse-grained transformations that lets them recover data efficiently using lineage. We have implemented RDDs in a system called Spark that outperforms Hadoop by up to 20× in iterative applications and can be used interactively to query hundreds of gigabytes of data. We have open sourced Spark at spark-project.org as a vehicle for scalable data analysis and systems research.
Acknowledgements

We thank the first Spark users, including Tim Hunter, Lester Mackey, Dilip Joseph, and Jibin Zhan, for trying out our system in their real applications, providing many good suggestions, and identifying a few research challenges along the way. We also thank our shepherd, Ed Nightingale, and our reviewers for their feedback. This research was supported in part by Berkeley AMP Lab sponsors Google, SAP, Amazon Web Services, Cloudera, Huawei, IBM, Intel, Microsoft, NEC, NetApp and VMWare, by DARPA (contract #FA8650-11-C-7136), by a Google PhD Fellowship, and by the Natural Sciences and Engineering Research Council of Canada.

References
Homura and Net-Homura: The Creation and Web-based Deployment of Cross-Platform 3D Games

Chris Carter, Abdennour El Rhalibi, Madjid Merabti
School of Computing & Mathematical Sciences, Liverpool John Moores University, Liverpool, UK
{C.J.Carter@2007.} {A.Elrhalibi@}{M.Merabti@} ljm.ac.uk

Marc Price
BBC Research and Development, Tadworth, UK
Marc.Price@bbc.co.uk

Abstract— Digital distribution is becoming an increasingly important method within the games industry. The leading consoles each possess their own bespoke platform to digitally deploy games applications to their users via the Internet, whilst the Windows PC gaming market is catered for by systems such as Valve's Steam platform. However, these digital content services are often machine-specific and proprietary, utilising custom web frameworks and a rigid publication system. In this paper, we present Homura and Net-Homura: two interconnected frameworks which facilitate the development and deployment of cross-platform, hardware-accelerated 3D games applications using standard web browsers and web technologies, using a combination of Java and PHP.

Keywords: Java, Java Plugin, Homura, Net-Homura, jME, Java Web Start, Applets, Deployment, Digital Distribution, Game Engine, Web Games.

I. INTRODUCTION

The introduction of the “next-generation” console systems has seen an increasing focus on the digital distribution of software. Each console has its own platform-specific digital content distribution mechanism – the Nintendo Wii has the Wii Shop Channel [1], the Sony PS3 has the Playstation Store [2] and Microsoft has the Xbox 360 Live Marketplace [3]. In each case, store front-end applications are embedded into their console’s Operating System as bespoke platform-dependent applications and utilise the internet connectivity which each machine possesses to allow the users to easily access the online distribution stores.
Each platform provides similar functionality: A browsable catalogue of downloadable content; A mechanism to download and install the content locally onto the games console; and a variety of content types including full games, retro game emulations, game add-ons, game demos etc. The deployment of games software is not limited to the console platforms and has become a growing market for PC and Mobile games. Valve’s Steam distribution platform [4] supplies over 600 PC titles and has over 20 million registered account holders, whilst Apple’s AppStore [5] system for the iPhone has gained the support and the release of titles from major developers and publishers such as EA (Sims 3), PopCap Games (Peggle). However, a major problem is that each of these digital distribution systems is platform-specific and proprietary. They are all also reliant on bespoke client applications (e.g. iTunes, the Steam client). Therefore, this paper presents an open-source platform for the development and deployment (digital distribution) of modern games applications, which can be both distributed and executed in a consistent cross-platform, cross-browser manner: the Homura project [6]. Section 2 of this paper provides a discussion of the work related to the production of the Homura framework. Section 3 provides a detailed technical overview of the two main constituents of the Homura project and the prototypes developed to test the concepts and interoperability of both systems. Section 4 analyses the proposed deployment solution. Finally, in Section 5 we conclude the paper and discuss future work. II. RELATED WORK In order to design the Homura framework, a detailed look into three related aspects was required: Existing engines and frameworks related to development and deployment of games via the Internet. 
We also appraised various programming languages and technologies to determine their suitability for both web-based and games development, resulting in the choice of Java as our development language; subsequently, in this section, we provide an overview of the deployment techniques available for Java. Finally, we needed to determine features which are necessary for an open games development platform. A. Existing Technologies There are two primary frameworks which support both the development and digital deployment of games applications: Unity [7] and Microsoft’s XNA Game Studio [8]. Both of these are proprietary solutions with closed-source APIs. Unity supports both Windows and Mac OS X, through its custom browser plug-in, the Unity Web Player. It requires a different plug-in for each browser supported (such as the ActiveX control for Windows Internet Explorer). Unity applications are primarily developed graphically using its custom development environment and scripted using Mono (an open-source .NET implementation), which supports C#, Boo and JavaScript as the development languages. Unity has many different license models, from independent to professional level. XNA is a games development framework which provides a managed run-time environment for the development of computer games using the .NET 2.0 framework. XNA is mainly used with C# as the development language, and is available as an integration for the Visual Studio development environment (professional and express editions). XNA supports development for both Windows PC and Xbox 360. There is currently no distribution platform for PC versions of XNA games, but games can be distributed on the Xbox 360 using the Microsoft community games portal of the Xbox Live Marketplace.
To distribute games on this platform a peer review system must be passed and a yearly development subscription of $99 is required, with developers receiving 70% of the revenue of their creations [8]. B. Java-Based Deployment The Java language, its execution environment, and the suite of core APIs and classes together provide facilities that recommend it for use in implementing systems distribution. Its ability for the same code to be run on different platforms, its safety features and its support, through the class loader system, for dynamically incorporating new code makes it particularly suitable for systems whose behaviour and configuration are expected to change over time. Nevertheless, until recently there was little exploitation of these qualities beyond the relatively trivial use of “applets” to spice up web page content. This is due partly to the relatively short time since the language was introduced in a stable form and partly to the lack of infrastructure components to support more complex applications distribution. This deficiency is being addressed by technology vendors who are developing architectures for systems distribution and deployment, based on or incorporating Java, and providing components within those architectures. Since update 10 of Java Standard Environment (SE) 6, there are two mechanisms for the deployment of Java applications: Java Web Start (JWS) [9] and next-generation applets; both are components of the new Java Plugin 2, which is distributed as a part of the Java Runtime Environment (JRE). The Java Plugin is freely available for all major operating systems and browser environments, making the technology ubiquitous amongst desktop PC users. Next generation applets are a major upgrade to the original Java applet technology and have been modified to have architectural similarities to JWS. 
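Both mechanisms are configured through JNLP descriptor files. As a rough illustration only — the codebase URL, jar names and main class below are invented, and the exact elements should be checked against the JNLP specification — a Web Start descriptor for a hypothetical game might look like:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Hypothetical JNLP descriptor; all names and URLs are illustrative. -->
<jnlp spec="1.0+" codebase="http://example.org/games" href="mygame.jnlp">
  <information>
    <title>My Homura Game</title>
    <vendor>Example Studio</vendor>
  </information>
  <security>
    <!-- typically required when the application loads native libraries -->
    <all-permissions/>
  </security>
  <resources>
    <j2se version="1.6+"/>
    <jar href="mygame.jar" main="true"/>
  </resources>
  <resources os="Windows">
    <!-- OS-specific native libraries, selected per client platform -->
    <nativelib href="natives-windows.jar"/>
  </resources>
  <application-desc main-class="org.example.MyGame"/>
</jnlp>
```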
Applets are now executed outside the browser as a separate process which is controlled by a lightweight, headless virtual machine (JVM) which sits inside the browser [10]. Both the Applet and Web Start technologies utilise the Java Network Launching Protocol (JNLP) [11] to configure exactly how an application is deployed from a server location to the clients’ machines. The protocol uses a simple, standardised XML schema to define several key aspects of the deployment process, such as skinning the deployment transfer window presented to the client, the libraries which comprise the application, Operating System and architecture specific libraries, and access and security permissions. There are sections within the schema which also define properties specific to either Web Start or Applets, such as the Web Start application entry point and the Applet class to invoke. Java Plugin 2 applications can be easily integrated into HTML pages as a link to the JNLP description file (in the case of JWS) or as an <applet> tag with a reference to the JNLP file. When the JNLP files are executed by a supported browser, the JRE Plugin is invoked and subsequently handles the execution of the application. As a result, Plugin 2 based Java applications exhibit excellent cross-platform, cross-browser support. The JNLP protocol also allows for greater freedom in JVM choice, offering the ability to provide automatic upgrades to the user’s JRE, whilst supporting side-by-side installation of multiple JVM versions. With the considerable improvements made by Java Plugin 2 to applets, Homura has been designed to support distribution using both next-generation applets and Web Starts. Java is already being used as a web distribution platform for simple 2D game applications via the browser, such as EA’s casual games platform, Pogo [12]. C.
Design Criteria and Rationale There are several key issues that need to be addressed, and desirable features which need to be implemented when undertaking the development of an open, web-based deployment platform for games applications: - **Security:** The applications must be able to be delivered in a secure manner, allowing the integration of authentication and authorisation mechanisms, as well as validation method to ensure the authenticity of the application to the user. - **Integration with existing web technologies:** In order to maximise the accessibility of the platform, it should easily allow the integration of the games application with a variety of common web application frameworks. - **Cross Platform Consistency:** To maximise the user base and minimise the development work required to execute / port the games across multiple hardware configurations and operating systems. - **Cross Browser Consistency:** The games should be interoperable with the most common browsers available on a given platform (IE 6/7/8, Firefox 3, Google Chrome, Safari etc) in a standards compliant, consistent manner, which does not require hacks in order to correctly support each program. - **Application Updates:** An easy mechanism for updating the versions of the application must be made, so that consistency amongst clients can be ensured and patches can be made to eradicate bugs etc. - **Download Size:** The download size of the application must be as small as possible to minimise the bandwidth usage and perform adequately on slower connections. Support for techniques such as compression and caching should be provided. - **Application Performance:** The games should be able to make use of the processing power and hardware capabilities that a modern desktop PC possesses. Utilisation of hardware acceleration and modern features such as programmable pipeline rendering should be handled by the framework. 
- **Open-Source:** The platform should be open in order to support a community dedicated to improving both the games engine and deployment platform. III. System Overview The Homura project [6] is comprised of three distinct sub-projects, designed to interoperate with each other to provide a consistent platform for the development of web-deployable games applications: Homura, an application framework for the development of hardware-accelerated 3D games using Java and OpenGL; Homura provides a vast array of functions and solutions to common game development techniques, detailed in Section 3A. Net-Homura, a web-based framework for the development of websites / web applications, building pages in a scene-graph-like fashion using Object-Oriented PHP / HTML / CSS / JavaScript; it easily allows the integration of Homura games into web pages, detailed in Section 3B. Homura IDE, detailed in [13], which aims to provide a game-oriented development environment built on top of the Eclipse IDE for the creation of Homura games. The IDE features a full Java programming editor framework, and a combination of visual editors to graphically construct and position objects within the game. The IDE project is outside the scope of this paper, and will not be covered in further detail. A. Homura – The Games Framework The Homura framework is an Application Programming Interface (API) which aims to provide an open-source platform to make it easy to develop hardware accelerated 3D games in Java. This section describes the application architecture, core feature set and key information regarding the implementation of the core classes which comprise the API.
1) Application Architecture Modern game applications are becoming increasingly complex and are typically comprised of several interoperable sub-systems, each handling an aspect of the game such as two-dimensional and three-dimensional rendering, physics simulation, particle effect systems, audio, input-device control, Artificial Intelligence etc. The Homura games framework utilises many open-source libraries to build a powerful Java-based API to allow developers to easily construct their game applications by unifying these sub-systems into a single library. Figure 1 illustrates the typical architecture of a game application built using Homura. ![Figure 1: Homura Application Architecture](image) The bottom layer of the architectural stack is the System layer. Homura is a cross-platform framework and will run on Windows, Linux and Mac OS X, but requires an OpenGL-compatible graphics card. The second layer is the Native Layer. Homura is written in Java but utilises native platform-specific libraries for the key sub-systems, as this provides the best combination of performance and feature support, by allowing hardware-accelerated rendering and audio to be utilised. Homura relies on the native versions of OpenGL for rendering support, the Open Dynamics Engine (ODE) for physics simulation, OpenAL for audio support and Ogg Vorbis for open-source audio format support. Java interfaces with these libraries using the Java Native Interface (JNI). The Homura Framework comprises the topmost layer of the API and is programmed exclusively in Java. All libraries directly referenced by Homura are also Java based, with these libraries handling the calls to the Native libraries. This approach was chosen because these existing libraries are already established and have been optimised to handle the native calls in the most efficient way, whereas Homura is primarily concerned with the high-level architecture of a games application.
Homura utilises the Java Monkey Engine (jME) to provide rendering and input handling functionality. Programmed entirely in Java, jME uses the Lightweight Java Game Library (LWJGL) as its low-level OpenGL-based rendering sub-system. The primary function of LWJGL is to act as a Java binding to OpenGL by mirroring the interface of the C OpenGL library with a Java version of each function. For example, OpenGL’s glBegin() is adapted as GL11.glBegin() in LWJGL. The LWJGL function will then utilise Java’s JNI system to call the native version of glBegin(), and uses Java’s NIO system to pass information between OpenGL and LWJGL as ByteBuffers. jME provides a high-performance scene-graph based graphics API. The scene-graph allows the organisation of 3D geometry into a tree-like structure, where a parent node can contain any number of children nodes, but a child node must have only a single parent. The nodes are organised spatially so that whole branches of the graph can be culled. This allows complex scenes to be rendered quickly, as typically, most of the scene is not visible at any one time. The scene-graph’s leaf nodes consist of the geometry that will be rendered to the display. jME is an open-source technology which, over the last five years, has matured into a feature-rich system which is one of the most performant graphical implementations in Java for 3D applications. Homura also integrates jME’s 3D audio support. The audio sub-system again relies on LWJGL to provide the native bridge to the OpenAL audio library, whilst using the open-source Ogg Vorbis system as the media format for audio files. Homura also utilises a jME sub-project, jME Physics 2, to provide the physics simulation functionality of the framework. jME Physics integrates tightly with the jME scene-graph by virtue of its Physics object classes inheriting from the jME scene-graph Node class.
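The parent/child organisation and whole-branch culling described above can be illustrated with a small self-contained sketch. This is a conceptual illustration only — the class and method names are invented and do not reflect jME's actual API:

```java
// Minimal sketch of a scene graph with branch culling (illustrative only).
import java.util.ArrayList;
import java.util.List;

public class SceneGraphSketch {
    static class Node {
        final String name;
        final List<Node> children = new ArrayList<>();
        boolean visible = true; // outcome of a bounding-volume test, hard-coded here

        Node(String name) { this.name = name; }

        Node attachChild(Node child) { children.add(child); return child; }

        // Render traversal: an invisible node culls its entire branch.
        int render(List<String> drawn) {
            if (!visible) return 0;
            drawn.add(name);
            int count = 1;
            for (Node c : children) count += c.render(drawn);
            return count;
        }
    }

    public static int demo() {
        Node root = new Node("root");
        Node city = root.attachChild(new Node("city"));
        city.attachChild(new Node("building"));
        Node offscreen = root.attachChild(new Node("offscreenTerrain"));
        offscreen.attachChild(new Node("tree")); // culled along with its parent
        offscreen.visible = false;               // failed the view-frustum test
        List<String> drawn = new ArrayList<>();
        return root.render(drawn);               // only root, city, building drawn
    }

    public static void main(String[] args) {
        System.out.println("nodes drawn: " + demo());
    }
}
```

Because the invisible branch is skipped in a single test at its root, large hidden regions of a scene cost nothing to traverse — the property that makes spatial organisation of the graph worthwhile.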
jME Physics uses the concept of Static and Dynamic node types: static nodes are not affected by physics, but other objects can still react physically to them (e.g. a wall); dynamic nodes can be affected by forces and mass, such as gravity and collisions with other physics objects (e.g. modelling a bouncing ball colliding with the static wall). JNI is used to bridge jME Physics with ODE to provide the low-level physics functionality. The final library utilised by Homura is the Java Open Particle System (JOPS), a framework which allows the creation of advanced particle effects (smoke plumes, explosions, fireworks etc.) designed for LWJGL. This has been integrated into Homura by incorporating the JOPS file type into the Homura asset management system and encapsulating the particle generators as a specialised scene-graph node called a JOPSNode, to allow them to be easily added into a scene or attached to a game entity node (e.g. the exhaust of a car). The framework composites a large set of disparate components into a single system, allowing a game to be easily built on top of the Homura system through linkage with the project’s binary Java Archive (JAR) file. Consequently, the final architectural layer is the User-Creation layer, which comprises the developed game. A game inherits from the Homura base classes (as described in 3.1.4) to provide the skeleton game - complete with all the aforementioned sub-systems. These classes are then implemented with the required game logic and the user-developed content (models, textures, particle effects, music, sound effects, backgrounds, etc.) which are stored as a Homura asset collection and loaded within the classes using the Homura asset loader to construct the virtual environment which embodies the game. Whilst the core of a Homura-based game is developed in Java, non-performance-critical sections of the game (e.g. some parts of the game logic) can be implemented as scripts.
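Scripting hooks of this kind typically build on Java's standard JSR-223 API (javax.script). A minimal sketch of evaluating a script snippet follows — the engine name and snippet are arbitrary, and since the bundled JavaScript engine is absent from recent JDKs, the code degrades gracefully when no engine is installed:

```java
// Sketch of JSR-223 script evaluation (illustrative, not Homura's code).
import javax.script.ScriptEngine;
import javax.script.ScriptEngineManager;
import javax.script.ScriptException;

public class ScriptingSketch {
    // Evaluate a snippet with the named engine, or report its absence.
    public static String run(String engineName, String snippet) {
        ScriptEngine engine = new ScriptEngineManager().getEngineByName(engineName);
        if (engine == null) return "engine unavailable: " + engineName;
        try {
            return String.valueOf(engine.eval(snippet));
        } catch (ScriptException e) {
            return "script error: " + e.getMessage();
        }
    }

    public static void main(String[] args) {
        // e.g. a game-logic tweak kept outside the compiled core:
        System.out.println(run("javascript", "1 + 2"));
    }
}
```

Any JSR-223-compliant engine (Jython, JRuby, a Scala bridge, etc.) can be substituted by name, which is what makes the mechanism attractive for keeping non-performance-critical game logic out of the compiled core.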
Homura supports a variety of languages such as Scala, Jython, JRuby and JavaScript (or any JSR223 compatible Scripting engine), scripts can easily be written to control any portion of the scene-graph (the whole scene to a single node) and can be used for a variety of purposes such as AI, cinematics, animation control, event triggers etc. 2) Features of the Framework Homura provides an extensive set of features required of a modern games engine and utilises the benefits of Java such as its reflection system, platform-independence and large library of base packages. Some of the key features of Homura are: - **Platform and Application Agnostic** – Games developed using Homura will run as Applets, Java Web Starts or standalone Java applications and will run on Windows, Mac OS X and Linux distributions that meet the minimum system specifications without requiring code modifications. - **Platform Introspection** – Allows system-level properties to be queried to determine whether client can run a particular feature or perform a particular operation (GLSL Shader support, OpenGL 2/3 extension support, Anti-Aliasing and Anisotropic support, Nvidia/ATI extensions, memory usage, free space, OS information etc.) - **Integrated Run-Time Debugging System** – Comprehensive debugging system with real-time statistical reporting (frame rate, vertex and poly counts), visual aids (normals, tangents, bounding boxes, physics forces, physics bounds, wireframe node etc), run-time, reflection-based scene graph introspection allowing dynamic traversal and modification of the graph node, Reflection-based console system to execute scripts, run commands, alter Java objects etc. - **Multi-Format Asset Support** – Model loaders for key formats (COLLADA, 3DS, OBJ, MD5 etc) and Texture / Image loaders for main image formats (DDS,TGA,PNG,JPEG,GIF etc) with sprite-based and 3D animation support. 
- **Game State System** – State system for the implementation of game-screens (GUI, 2D / 3D Scenes, Pause Menus, Loading screens etc). Handles garbage collection and object de-allocation for efficient memory usage. - **Effects System** – GLSL Shader Support and Particle System support (JOPS/jME); integrated effect systems for water, texture-splatting, depth of field, bloom, cartoon-shading, Normal Mapping, Parallax Mapping etc. - **Physics and Collision System** – Physics system which handles rigid-body dynamics, ragdolls and material-based interactions. Physics and non-physics based collision systems and support for a variety of bounding geometries (AABB/OBB etc). - **Game-Specific Optimisations and Common Techniques Library** – Fast math approximations, Level of Detail, Lighting, Terrain Paging, 3D Audio support, Environment Mapping. - **Scripting Support** – Supports JSR-223 compatible scripting languages, allowing control over user-defined entities or scene graph objects. 3) Application Partitioning The previous sections detailed the functionality provided by Homura and the architecture underpinning the framework. This section aims to provide an overview of how Homura can be used to create games applications which can be deployed in a cross-platform, cross-browser manner. In order to achieve this, a partitioning had to be made to separate the game from its underlying display context (browser, application window, full-screen mode), abstracting the display system. As a result of this partitioning, there are three roles that need to be fulfilled by any Homura-based game: Executor, Instantiator, and Controller. Figure 2 illustrates this partitioning. ![Figure 2: Game Partitioning](image) **Executor:** The role of the Executor is to define the game’s update/render loop and application flow, independent of the application type.
The executor handles hardware and general Java exceptions, incorporates the logging system, polls the input devices and generates input events as an event queue. The Executor also defines an interface with key methods (initialise, pre-update, update, post-update, render, cleanup) which each game must implement. The Executor has a start() method which is used to start the game execution loop. **Instantiator:** The role of the Instantiator is to create and configure the Display System and Renderer, assemble Homura’s asset management system and concretely implement the Executor interface. The Instantiator creates an instance of the Controller and binds it to each of the key methods of the Executor’s interface. Each application type has a concrete version of an Instantiator which creates the application-specific graphics context (e.g. HomuraBaseApplet creates an AWT canvas to embed inside a webpage, whilst HomuraBaseGame creates either a windowed or full-screen application). **Controller:** The Controller is the backbone of the application. It provides the access point to the Display System, irrespective of application type, and provides contextual information regarding the current execution environment (graphics card capability, memory usage, OS version, screen resolution, colour depth, anti-aliasing etc) and provides core helper operations for the rendering system (create a camera, change the view frustum, create Rays, convert between world and screen co-ordinates etc.). All game components utilise the controller in order to access the renderer and viewport, meaning the game is abstracted from its rendering target. The base controller HomuraBaseStateManager allows developers to build their game state system, or utilise Homura’s own stack-based state system to separate the game, and control transitions between game modes. 
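The three roles can be sketched as a minimal skeleton. The names follow the roles described above, but the interfaces, methods and the Controller's logging are invented for illustration and are not Homura's real classes:

```java
// Illustrative skeleton of the Executor / Instantiator / Controller split.
import java.util.ArrayList;
import java.util.List;

public class PartitioningSketch {
    // Executor: the application-type-independent game-loop contract.
    interface Executor {
        void initialise();
        void update();
        void render();
        void cleanup();
    }

    // Controller: backbone giving access to the (abstracted) display system.
    static class Controller {
        final List<String> log = new ArrayList<>();
        void draw(String what) { log.add("draw:" + what); }
    }

    // Instantiator: creates the display context for one application type
    // (applet, window, full-screen) and binds a Controller to the Executor
    // methods, so game code never touches the concrete rendering target.
    static class WindowedInstantiator implements Executor {
        final Controller controller = new Controller();
        public void initialise() { controller.log.add("init:window"); }
        public void update()     { controller.log.add("update"); }
        public void render()     { controller.draw("scene"); }
        public void cleanup()    { controller.log.add("cleanup"); }
    }

    public static int demo() {
        WindowedInstantiator game = new WindowedInstantiator();
        game.initialise();
        for (int frame = 0; frame < 2; frame++) { game.update(); game.render(); }
        game.cleanup();
        return game.controller.log.size(); // init + 2*(update+draw) + cleanup
    }

    public static void main(String[] args) {
        System.out.println("events: " + demo());
    }
}
```

Swapping WindowedInstantiator for an applet-backed equivalent changes only the display context; the Executor contract and the Controller access point stay identical, which is the point of the partitioning.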
4) Game State System Homura’s game state system [14] allows the game to be logically organised into individual game screens, based on their functionality or purpose within the game system. This allows transitions between different sections of the game (e.g. GUI to Loading to Level 1) to be handled in an easy manner, and partitions the game into easier-to-handle sub-sections. Homura provides an abstract base class, *HomuraBaseGameState*, which all game states inherit from. This provides the developer with a blank state to build on top of. Each game state maintains three separate scene graphs - one for 3D objects (the Root Node), one for 2D objects (the Orthographic Node) and one for transitional effects (the Fade Node). This allows the developer to easily overlay 2D graphics on top of a 3D scene (e.g. HUD elements). This separation also limits the number of state changes OpenGL has to perform, switching between perspective and orthographic projection, during the rendering phase. The base class provides a core set of pre-initialised objects: a camera viewpoint from which the 3D scene is rendered, a base lighting system to illuminate the scene and a z-buffer to sort the 3D objects from the viewpoint. The base class also provides several abstract methods to implement, each corresponding to a key part of the game loop: `initialise()`, where assets should be loaded, objects set their base state, and the initial scenegraph constructed; `backgroundupdate()` and `update()`, where game logic, AI and physics should be updated and any changes to the scenegraph should occur; `render()`, where any graphs additional to the defaults should be passed for rendering. The base class automatically updates and renders the aforementioned main scenegraphs. Game screens are developed through extension of this base class.
Homura provides several pre-implemented sub-classes, designed for commonly required game screen functionality, such as *HomuraPhysicsGameState*, which adds physics node support with a standard gravity and friction setup; *HomuraBaseMenuState*, which adds an extensible 2D/3D menu system; and *HomuraDebugGameState*, which adds runtime debugging support as mentioned in section 3A. This class can then be substituted with the base class when building the final product to remove debug support, making production builds a simple process. Game states are managed and controlled by a concrete implementation of the *HomuraBaseStateManager*. Game states utilise the manager to access the display system, get timing information and access the functionality provided by the Controller role. Homura provides *HomuraStackedStateManager*, which stores the game states in a stack. Figure 3 illustrates some state manager scenarios. ![Figure 3: Homura State Management System](image) Game states are added to a Homura game by pushing a new instance of a game state onto the stack, which also binds the state manager to the game. The manager’s update loop iterates over all the game states in the stack from bottom to top. All game states have their `backgroundupdate()` method called, but only the topmost state has its `update()` method called, as it is in focus. The manager’s render loop also iterates over each of the game states in the same order as the update loop, calling their `render()` method. This guarantees the order of rendering so that the 3D root node is rendered first, then the HUD node, then the fade node. This means that 3D objects placed in a state higher in the stack are drawn after the 2D objects of the previous state, allowing layering.
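The stack discipline can be sketched as follows: every state gets `backgroundUpdate()`, only the topmost gets `update()`, and rendering runs bottom-to-top so later states layer over earlier ones. This is a conceptual Java sketch with hypothetical class names, not *HomuraStackedStateManager*'s actual implementation; pushing a pause-menu state, for instance, freezes the level beneath it with no extra game logic.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

public class StackedStateManagerSketch {
    static class GameState {
        final String name;
        GameState(String name) { this.name = name; }
        void backgroundUpdate(float tpf) {}                 // runs for every state
        void update(float tpf) {}                           // runs only when in focus
        void render(List<String> drawOrder) { drawOrder.add(name); }
        void cleanup() {}                                   // de-allocate on pop
    }

    static class StateManager {
        private final Deque<GameState> stack = new ArrayDeque<>();
        void push(GameState s) { stack.addLast(s); }
        GameState pop() { GameState s = stack.removeLast(); s.cleanup(); return s; }
        GameState top() { return stack.peekLast(); }

        /** backgroundUpdate() for all states, update() only for the topmost. */
        void updateAll(float tpf) {
            for (GameState s : stack) s.backgroundUpdate(tpf);
            if (!stack.isEmpty()) stack.peekLast().update(tpf);
        }

        /** Bottom-to-top rendering guarantees the layering order. */
        List<String> renderAll() {
            List<String> drawOrder = new ArrayList<>();
            for (GameState s : stack) s.render(drawOrder);
            return drawOrder;
        }
    }

    public static void main(String[] args) {
        StateManager mgr = new StateManager();
        mgr.push(new GameState("Level1"));
        mgr.push(new GameState("PauseMenu")); // Level1 no longer receives update()
        mgr.updateAll(1f / 60f);
        System.out.println("in focus: " + mgr.top().name);
        System.out.println("draw order: " + mgr.renderAll());
        mgr.pop();                            // resume: Level1 back in focus
        System.out.println("in focus: " + mgr.top().name);
    }
}
```

Because `renderAll()` walks the same bottom-to-top order as the update loop, the topmost state's graphics always land on top of everything beneath it.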
Timing conditions can be set on the game states to specify a fade-in and fade-out duration when they are pushed on / popped off the stack, so that the game state can query its current transition state as a value between 0 and 1 (where 1 is on top and 0 is overlaid). This can be used to apply transition effects such as colour fades, transparency fades, slides etc. This game state system allows some typical game tasks to be carried out with ease. An example that illustrates the reduction in complexity afforded to the developer is an in-game pause menu system, as illustrated in Figure 3. In this scenario, an existing game state called *Level 1* has an event handler triggered (e.g. pressing the ‘p’ key) which has called a method called `pause()`. This method creates a new instance of the *PauseMenuGameState* and adds it to the state manager. The state manager pushes this onto the stack and initialises this game state’s scenegraph (comprised of menu items such as ‘resume’ or ‘exit game’). This pauses the game instantly, as *Level 1* is no longer the topmost game state, which means its `update()` method is not being called, so no input events or scenegraph changes are being made to this game state. When the *PauseMenuGameState* is terminated by the player (e.g. by pressing a button to resume the game), this game state is popped from the state manager and its `cleanup()` method called to de-allocate unused objects. Subsequently, *Level 1* becomes the topmost state again and its `update()` method is called again, resuming play. This is all handled via the data structures and state system, requiring no coded logic in the game. The defined rendering order also allows easy visual effects to be applied to the pause system, such as adding a transparency to the *Level 1* state’s fade node so that the Pause Menu state’s text becomes more legible when it is rendered on top of *Level 1*’s scene.
### B. NetHomura – The Web Framework

The NetHomura framework is an Application Programming Interface (API) which aims to facilitate the creation of web-based distribution platforms for Homura-based games. This section describes the application architecture, core feature set and key information regarding the implementation of the core classes which comprise the API. In the future, NetHomura will also be further expanded to provide a networking middleware, to integrate MMOG support into the platform, as detailed in [15].

#### 1) Systems Architecture

NetHomura provides a PHP-based web API for the development of websites and web applications. NetHomura uses Object-Oriented PHP5 and is structured so that it will integrate into the common web application stack, which features an OS, web server, relational database (RDBMS) and server-side application language interface. Typical configurations supported by NetHomura are LAMP (Linux, Apache, MySQL, and PHP), WAMP (Windows, Apache, MySQL, and PHP) and WIMP (Windows, IIS, MySQL, and PHP). Figure 4 below illustrates the typical architecture of a NetHomura-based web application. NetHomura is platform-agnostic, running on any operating system which can run a PHP 5-enabled web server such as Apache 2 or Internet Information Services (IIS) Server. NetHomura also supports both open-source and proprietary database technologies such as MySQL, Postgres and SQL Server, as long as they have a supported PHP driver. The NetHomura PHP library can be included either server-wide or application-specific within the root of the web application directory / virtual domain.

#### 2) Core Feature Set

- **Database Integration**: Abstracts the connection so that all access and usage with NetHomura is done without requiring specific knowledge of the RDBMS used.
- **Client Side / Server Side Validation**: JavaScript and PHP functions to validate user input at the browser and the server.
- **Base Data Type Support Functions**: These functions provide commonly needed routines for each of the main data types, such as array sorting, HTML-to-string conversion, password generation, hashing functions etc.
- **XHTML Page-Template class**: Encapsulates an HTML/XHTML document as an Object-Oriented construct, allowing the programmatic definition of the presentation layer (CSS, JavaScript, Meta Tags etc.). This class is abstract and is designed to be utilised as the basis to construct a site template through class inheritance.
- **Tag Generation Functions**: These functions provide single-line helper methods for producing common HTML tags and form data as PHP strings.
- **Browser/User-Agent Detection**: These scripts allow the developer to query information such as the browser used, operating system etc.
- **XML Driven JNLP Support**: A system to drive the dynamic creation of a JNLP definition file, which controls how Java downloads the application, as detailed in the next section.

#### 3) Deployment Overview

NetHomura deploys the Java-based Homura games to the client machine in both Java Web Start and Applet format. NetHomura provides a method to dynamically create JNLP configuration files and install games on the client computer, as shown in Figure 5. ![Figure 5: Deployment Overview](image) Deployment uses NetHomura’s JNLPLoader class. An instance is constructed which takes two files as its parameters. The first is an XML template which contains the skeleton of a JNLP file. Customisable options are defined within this template as template fields. Typical items which are configured are skinning information, security options, Java Virtual Machine versions and the Java JAR files which comprise the Homura game application. There are four types of JAR file: game, library, optional and native. Game JARs contain the binary version of the main game application.
The Library JAR files contain the Homura Asset Collection, Homura Framework and its library dependencies (see Figure 1). Optional Libraries are JAR files for additional frameworks used by some of the games, and the Native JARs contain the platform-specific native libraries. The template automatically includes all the Library JARs and defines the Native JAR files based on the platform. The second file used is a game-specific parameters file containing key-value pairs which specify the values of the template fields to replace. This file must specify the entry point to the Homura game for the Web Start, the Applet, or both. The JNLPLoader class then parses both these files and performs template replacement to construct the JNLP file for the supported application types. To create a Web Start, `renderJNLP()` must be invoked, sending the in-memory JNLP file as an HTTP response stream, which is interpreted by the client’s Java Plugin, which downloads and installs the game. To create an Applet, `renderApplet()` is invoked. This writes the in-memory JNLP to a temporary file and generates an HTML applet tag which references this temporary JNLP. This can then be embedded inside an HTML page and returned to the client, and the Java Plugin interprets this in the same way as the Web Start. The separation of game and libraries can drastically reduce bandwidth usage. On first usage, the player downloads the game and all the required Homura libraries, along with the natives for the client platform. If the user downloads another game, only the main game JAR and the optional JARs need to be downloaded. Once downloaded onto the client computer, all JARs are cached, so that any subsequent executions of an already-played game use the cache and the game runs instantly. Also, because the Applet and Web Start use the same game JAR, once a Web Start version has been downloaded, the applet version is executed automatically from the cache, and vice-versa.
### C. Prototyping and Release

During the course of developing the Homura and Net-Homura frameworks, we devised a series of six test games and three technical game demos which illustrate all aspects of the Homura framework. Net-Homura was used to construct a portal application [16], as described in [15]. This houses all the technical demos and successfully deploys the games applications in a cross-platform, cross-browser manner, whilst utilising all the features of the Net-Homura framework. Both frameworks have been released under LGPL licenses and are available from the official project site: \url{http://java.cms.livjm.ac.uk/homura/}, along with the release documentation. Live versions of all the demo applications are also available under the ‘Showcase’ section of the site, in both JWS and Applet format. A live sandbox version of the Homura Games web applications portal, built using Net-Homura, is available for public preview at: \url{http://java.cms.livjm.ac.uk/homuragames}.

IV. SOLUTION ANALYSIS

This section provides an analysis of the platform’s efficacy and suitability as a means for the deployment of hardware-accelerated 3D games in a cross-platform, cross-browser manner. The prototype NetHomura-based portal application and the nine Homura-based showcase demos, published in a real-world server setting, provided an appropriate test-bed for benchmarking and evaluating the Homura solution. These sample applications also demonstrate the technical viability of using Web Starts and next-generation applets as a means for the deployment of advanced Java-based games applications. Figure 6 illustrates an example Homura application deployed directly from Internet Explorer in Applet format.

A. Evaluation of Solution

As well as satisfying the technical viability of this approach, the platform must demonstrate real-world applicability. The evaluation is based on the criteria outlined in section 2C of this document.
A more detailed appraisal of the system can be found in [17]: - **Security:** The deployment platform web application can be secured via standard web security techniques. Net-Homura deployment supports HTTPS transfers. Homura games, in both applet and web start form, must be signed with an RSA digital certificate in order for the Java security system to allow access to native libraries; the certificate can also be used to verify the authenticity of the distributed software. - **Integration with existing web technologies:** Net-Homura features programmatic integration support for any PHP-enabled web server. Homura-based games can be embedded into any HTML-compatible framework using standard next-generation applet / web start methods [10]. The Net-Homura application framework is extremely compact, with a distribution size of 484KB. - **Cross Platform Consistency:** Homura games are cross-platform compliant across the range of desktop PC platforms, supporting Windows 2000/XP/Vista (x86/x64), Linux (x86/x64) and Mac OS X (PPC/Intel-based). - **Cross Browser Consistency:** The Net-Homura platform and Homura-based games applications are interoperable with the following browsers in a standards-compliant, consistent manner: <table> <thead> <tr> <th>Deployment Method</th> <th>IE6</th> <th>IE 7</th> <th>IE 8</th> <th>FIREFOX 2.0+</th> <th>FIREFOX 3.0+</th> <th>CHROME 1.0+</th> <th>SAFARI 3.0+</th> <th>OPERA 9.0+</th> </tr> </thead> <tbody> <tr> <td>Applet</td> <td>✓</td> <td>✓</td> <td>✓</td> <td>✓</td> <td>✓</td> <td>✓</td> <td>x</td> <td>x</td> </tr> <tr> <td>Web Start</td> <td>✓</td> <td>✓</td> <td>✓</td> <td>✓</td> <td>✓</td> <td>✓</td> <td>✓</td> <td>✓</td> </tr> </tbody> </table> - **Download Size:** The Homura framework download size is compact, requiring 14.1MB for the entire Homura Framework, Homura Asset Collection and additional APIs. The JAR files are compressed using gzip, and Homura also supports Pack 200 compression.
- **Application Updates:** Both Web Start and Applets support application updating through the JNLP specification. This can be achieved declaratively using Net-Homura, or programmatically using Homura in combination with the JNLP API (the `javax.jnlp` package). - **Open-Source:** Both Homura and Net-Homura have been released under an LGPL license. This license is comparatively permissive, in that commercial or closed-source titles can be built on top of the framework, but any additions, fixes, or improvements made to the underlying platform must be committed back to the community, in order to help the platform progress. B. Homura Application Performance Whilst the other aspects of the evaluation are important to the overall quality of the solution, the main criterion for assessment must be the performance of the Homura games within their deployed context. The procedural island demo from the Homura technical demos was chosen due to its complexity. The scene is comprised of a terrain dynamically generated from a height map and multi-textured with four texture layers. The island is then surrounded by a GLSL shader-implemented water effect system. The entire scene comprises 262,172 polygons. The games application was run with the browser as the only active application context. The benchmarks were averaged over ten executions, with the frame-rate averaged over a two-minute execution window after initialisation. The Homura applet implementation is currently capped to update at 60 frames a second (a limitation currently imposed by LWJGL), but due to the architecture of the application, the timer object could be tested independently of this restriction. The tests utilised Homura’s in-built logging system to ensure that the tests were as unobtrusive as possible. Table I details the results of the benchmarking. The tests were performed on two separate triple-booting machines.
**Table I: Benchmarking results**
<table> <thead> <tr> <th>HARDWARE CONFIGURATION</th> <th>DISPLAY MODE</th> <th>APPLET FPS</th> <th>JWS FPS</th> </tr> </thead> <tbody> <tr> <td>1: Windows XP SP3</td> <td>800x600 Windowed 9xAA 24bit</td> <td>124</td> <td>118</td> </tr> <tr> <td></td> <td>1024x768 Windowed 2xAA 24bit</td> <td>78</td> <td>89</td> </tr> <tr> <td></td> <td>1024x768 Fullscreen 4xAA 32bit</td> <td>N/A</td> <td>105</td> </tr> <tr> <td>2: Windows Vista SP2</td> <td>800x600 Windowed 9xAA 24bit</td> <td>124</td> <td>118</td> </tr> <tr> <td></td> <td>1024x768 Windowed 2xAA 24bit</td> <td>79</td> <td>92</td> </tr> <tr> <td></td> <td>1024x768 Fullscreen 4xAA 32bit</td> <td>N/A</td> <td>106</td> </tr> <tr> <td>1: Ubuntu 9.04 with vendor's Binary graphics driver support</td> <td>800x600 Windowed 9xAA 24bit</td> <td>128</td> <td>140</td> </tr> <tr> <td></td> <td>1024x768 Windowed 2xAA 24bit</td> <td>83</td> <td>92</td> </tr> <tr> <td></td> <td>1024x768 Fullscreen 4xAA 32bit</td> <td>N/A</td> <td>99</td> </tr> <tr> <td>2: Windows XP SP3</td> <td>800x600 Windowed 9xAA 24bit</td> <td>104</td> <td>116</td> </tr> <tr> <td></td> <td>1024x768 Windowed 2xAA 24bit</td> <td>65</td> <td>75</td> </tr> <tr> <td></td> <td>1024x768 Fullscreen 4xAA 32bit</td> <td>N/A</td> <td>95</td> </tr> <tr> <td>2: Windows Vista SP2</td> <td>800x600 Windowed 9xAA 24bit</td> <td>104</td> <td>116</td> </tr> <tr> <td></td> <td>1024x768 Windowed 2xAA 24bit</td> <td>64</td> <td>77</td> </tr> <tr> <td></td> <td>1024x768 Fullscreen 4xAA 32bit</td> <td>N/A</td> <td>95</td> </tr> <tr> <td>1: Ubuntu 9.04 with vendor's Binary graphics driver support</td> <td>800x600 Windowed 9xAA 24bit</td> <td>105</td> <td>118</td> </tr> <tr> <td></td> <td>1024x768 Windowed 2xAA 24bit</td> <td>67</td> <td>79</td> </tr> <tr> <td></td> <td>1024x768 Fullscreen 4xAA 32bit</td> <td>N/A</td> <td>90</td> </tr> </tbody> </table> From these results, it is clear that performance suffers slightly when comparing Applets to Web Starts, with a performance penalty of 
around 8-10% to be expected. The difference in performance between Windows versions was negligible, with the Linux implementation slightly faster in windowed mode, but slightly slower in full-screen. As expected, increased resolution and anti-aliasing resulted in performance degradation, whilst full-screen performance was markedly better than that of the windowed-mode counterparts. The performance in each case also never dropped below 60 FPS, the current performance standard expected from modern games applications.

V. CONCLUSION

In this paper we have presented an overview of the Homura Engine and Net-Homura deployment middleware. Homura provides an integrated solution for Java-based 3D games development. We have also discussed a novel architecture and prototype system, which aims to unify the deployment of hardware-accelerated Java-based 3D games applications with online capabilities. Net-Homura provides a multi-tiered deployment platform that is secure, robust, and easily portable to a wide range of web servers. The networking middleware provided allows developers to build content- and feature-rich online games in conjunction with the Homura Engine and IDE. Due to the nature of the technologies used within the Net-Homura framework, combined with the Homura engine and IDE, it is possible to create a game which, from development through to hosting, deployment and networking, requires little or no financial outlay from the developer. This would enable small-scale developers to distribute modern games applications to users worldwide. There are several enhancements and future directions that can be taken in future development phases in order to fully realise the potential of the proposed solution. Currently, the content of the system is added directly to the database via a set of scripts and stored procedures.
The deployment system can be further developed to provide a set of back-end tools which allow the management of games, users, and application configuration through a web interface. This should be a relatively straightforward implementation, using the existing data-access and application tiers. The middleware component requires the completion of the test games in order to evaluate its performance and scalability properly. The first phase is to support between 8 and 16 concurrent players during a game session. The next phase will involve the incorporation of networking algorithms such as dead reckoning, area-of-interest management, and cheat detection, to increase the scalability of the system by an order of magnitude.

VI. REFERENCES
Detecting Asymmetric Application-layer Denial-of-Service Attacks In-Flight with FINELAME

Henri Maxime Demoulin, Isaac Pedisich, Nikos Vasilakis, Vincent Liu, Boon Thau Loo, Linh Thi Xuan Phan
University of Pennsylvania

Abstract

Denial of service (DoS) attacks increasingly exploit algorithmic, semantic, or implementation characteristics dormant in victim applications, often with minimal attacker resources. Practical and efficient detection of these asymmetric DoS attacks requires us to (i) catch offending requests in-flight, before they consume a critical amount of resources, (ii) remain agnostic to the application internals, such as the programming language or runtime system, and (iii) introduce low overhead in terms of both performance and programmer effort. This paper introduces FINELAME, a language-independent framework for detecting asymmetric DoS attacks. FINELAME leverages operating system visibility across the entire software stack to instrument key resource allocation and negotiation points. It leverages recent advances in the Linux extended Berkeley Packet Filter virtual machine to attach application-level interposition probes to key request processing functions, and lightweight resource monitors—user/kernel-level probes—to key resource allocation functions. The data collected is used to train a model of resource utilization that occurs throughout the lifetime of individual requests. The model parameters are then shared with the resource monitors, which use them to catch offending requests in-flight, inline with resource allocation. We demonstrate that FINELAME can be integrated with legacy applications with minimal effort, and that it is able to detect resource abuse attacks much earlier than their intended completion time while posing low performance overheads.

1 Introduction

Denial-of-Service (DoS) attacks aim to hinder the availability of a service from its legitimate users.
They work by overwhelming one or more of the resources of the service (e.g., CPU, network, memory, or disk), causing the service to become slow or, in the limit, entirely unavailable. Classic DoS attacks are simple in structure: attackers mount large-scale, brute-force volumetric attacks, sending many requests that far exceed the service’s available resources. Although potentially crippling—sometimes reaching aggregate volumes of terabits per second [24, 43]—many effective mitigation techniques have been developed over the years, including commercial services like CloudFlare, Akamai, or the intrusion detection systems of Arbor Networks. In response to these defenses, recent attacks have become much more sophisticated in nature: rather than relying on sheer volume, they take the form of highly specialized, application-specific asymmetric DoS (ADoS) attacks [11, 12, 36, 48]. These attacks contain carefully-crafted, pathological payloads that target algorithmic, semantic, or implementation characteristics of the application’s internals. They require significantly lower volumes of traffic and attacker resources to compromise resource availability. With the prevalence of third-party libraries, broad swaths of applications can be vulnerable to a given attack. For instance, the Regular-Expression DoS (ReDoS) attack [12, 13, 51] affects many programs that use regular expressions by leveraging algorithmic complexity to craft a single payload of a few characters that can occupy a service for several hours. Due to this increase in sophistication, existing defenses are becoming inadequate [10, 26–28, 31, 40, 54, 60–62]. Network-based defenses are generally ineffective against ADoS attacks because these attacks lack identifiable problematic patterns at the network level.
To be successful, network tools would not only need to perform deep packet inspection, but would also need to be able to predict which requests will hog resources a priori—a challenge analogous to solving the halting problem. Similarly, existing application-level defenses are limited in their efficacy: since these attacks can target arbitrary resources and arbitrary components of the service, which may be written in different programming languages and contain multiple binary third-party packages whose source code is not available or which have complex dependencies, manual instrumentation of the application is prohibitively difficult, expensive, and time-consuming. This paper presents the design and implementation of FINELAME (Fin-Lahm), a practical framework for detecting ADoS attacks. In FINELAME, users only need to annotate their own code to mark the start and end of request processing; in many cases, annotations are not even required, as applications lend themselves naturally to this demarcation. Our interaction with the most recent Apache Web Server\(^1\) and Node.js\(^2\) versions, for example, involves tracing three and seven functions, respectively, and not a single modification in their source code. Based on the annotations, FINELAME automatically tracks CPU, memory, storage, and networking usage across the entire application (even during execution of third-party compiled binaries). It does so with low overhead and at an ultra-fine granularity, which allows us to detect divergent requests before they leave the system and while they are attempting to exhaust resources. Enabling our approach is a recent Linux feature called extended Berkeley Packet Filter (eBPF). eBPF enables the injection of verified pieces of code at designated points in the operating system (OS) and/or application, regardless of the specific programming language used.
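The per-request accounting that such injected probes enable can be sketched conceptually. The following is a plain-Java stand-in, not FINELAME's implementation: the real monitors are eBPF programs running inline with kernel and application events, and the fixed thresholds here stand in for the trained model's parameters; all names and values are illustrative.

```java
import java.util.HashMap;
import java.util.Map;

// Conceptual sketch: resource monitors charge consumption to the in-flight
// request, and an anomaly check runs inline with each allocation, so an
// offending request can be flagged before it completes.
public class FinelameSketch {
    /** Per-request resource counters (request id -> resource -> total). */
    static final Map<Long, Map<String, Long>> perRequest = new HashMap<>();

    /** Stand-in for model parameters shared with the resource monitors. */
    static final Map<String, Long> threshold =
            Map.of("cpu_ns", 1_000_000L, "alloc_bytes", 1_048_576L);

    /** Request-mapping probe: called where request processing begins. */
    static void onRequestStart(long reqId) {
        perRequest.put(reqId, new HashMap<>());
    }

    /** Resource-monitor probe: charge usage; return true if the request is flagged. */
    static boolean charge(long reqId, String resource, long amount) {
        Map<String, Long> usage = perRequest.get(reqId);
        long total = usage.merge(resource, amount, Long::sum);
        return total > threshold.getOrDefault(resource, Long.MAX_VALUE);
    }

    public static void main(String[] args) {
        onRequestStart(42L);
        System.out.println(charge(42L, "cpu_ns", 10_000));    // normal usage
        System.out.println(charge(42L, "cpu_ns", 2_000_000)); // flagged in-flight
    }
}
```

The key property mirrored here is that detection happens at charge time, inline with resource allocation, rather than after the request has finished.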
The OS is a natural, *de facto* layer of resource arbitration, with extensive infrastructure and pluggable tools for fine-grained resource monitoring and distribution. By interposing on key OS services, such as the network stack, the scheduler, and user-level memory management facilities, FINELAME can detect abnormal behavior in a unified fashion across the entire software stack at run time. FINELAME consists of three synergistic components that operate at the user/kernel interface. The first component allows attaching application-level interposition probes to key functions responsible for processing requests. These probes are based on inputs from the application developers, and they are responsible for bridging the gap between application-layer semantics (e.g., HTTP requests) and their underlying operating system carriers (e.g., process IDs). Examples of locations where those probes are attached include event handlers in a thread pool. The second component attaches resource monitors to user- or kernel-space data sources. Examples of such sources include the scheduler, TCP functions responsible for sending and receiving packets on a connection, and the memory manager used by the application. To perform anomaly detection, a third component deploys a semi-supervised learning model to construct a pattern of legitimate requests from the gathered data. The model is trained in user space, and its parameters are shared with the resource monitors throughout the system, so that anomaly detection can be performed in-line with resource allocation. In summary, we make the following contributions: - A library of resource monitors and associated probes that can be used to detect asymmetric DoS attacks. - An eBPF-based implementation and evaluation of FINELAME on Linux.
Our evaluation shows that FINELAME imposes low instrumentation overhead, between 4% and 11%, for web applications ranging from Apache and Node.js to DeDOS [15]. Moreover, when evaluated against real application-layer attacks such as ReDoS [5], Billion Laughs [3], and SlowLoris [46], FINELAME is able to detect the presence of these attacks in near real-time with high accuracy, based on the attacks’ deviation from normal behavior. The rest of the paper is structured as follows: it first motivates FINELAME’s goals by providing a brief overview of asymmetric DoS attacks (§2); it then lays out our threat model and assumptions (§3); it describes FINELAME’s design and its three component parts, (i) request mapping (§4.1), (ii) resource monitoring (§4.2), and (iii) anomaly detection (§4.3); it next details several prototype implementations (§5) and evaluates the FINELAME prototypes’ intrusiveness, overheads and accuracy, using a combination of micro-benchmarks and real applications (§6); finally, it compares with prior work (§7) and concludes with a discussion of limitations and possible directions for future research (§8).

2 Motivation

We begin by showing, via an example server-side application, the operation of an ADoS attack, the limitations of current detection mechanisms, and design goals for our system.

2.1 Background on ADoS Attacks

Fundamentally, asymmetric DoS attacks are attacks that leverage application-specific behaviors to cause disproportionate harm to the system using a comparatively low amount of attacker resources. They can target any layer of the stack and any resource within the system. ADoS vulnerabilities are widespread and often affect entire software ecosystems [41]. We detail a few of them below. **Regular-expression DoS (ReDoS)** [12, 13, 51]. ReDoS attacks target programs that use regular expressions. Attackers craft patterns that result in worst-case asymptotic behavior of a matching algorithm.
An example pattern is `(a+)+b`, which does not match any string of the form \(a^N\), but requires the system to check \(2^N\) decompositions of the input to reach that conclusion, where \(N\) is the length of the target string. **XML Bomb** [3]. An XML bomb (or Billion-Laughs attack) is a malicious XML document that contains layers of recursive data definitions\(^1\), resulting in exponential resource consumption: a 10-line XML document can easily expand to a multi-gigabyte memory representation and consume an inordinate amount of CPU time and memory on the server. Under normal operation, a load of 500 legitimate requests per second is served in less than 10 milliseconds per request; under a low-volume attack of 10 XML bombs per second, the latency jumps to more than two seconds. An XML bomb affects any serialization format that can encode references (e.g., YAML, but not JSON). **Improper (de-)serialization** [47, 52, 53]. This class of attacks encompasses those where malicious code can be injected into running services. These vulnerabilities are, unfortunately, common in practice, and they allow malicious users to, for instance, inject a `for (;;) {}` loop to stall a process indefinitely. **Event-handler Poisoning (EHP)** [14]. Attacks like the preceding ones can be further amplified in an event-driven framework. In event-handler poisoning, attackers exploit the blocking properties of event-driven frameworks so that, when a request unfairly dominates the time spent by an event handler, other clients are blocked from proceeding. Any slowdown, whether in the service itself or in its recursive layers of third-party libraries, can contribute to this head-of-line blocking. ### 2.2 Design Goals The attacks in the previous section highlight several goals that drive FINELAME's design (§4) and implementation (§5).
**In-flight Detection.** Actions often need to be taken while the offending requests are “in the works”—for example, when a single request can bring the system down (e.g., cooperative scheduling) or when subsequent defenses cannot be deployed (e.g., IP spoofing). DoS detection needs to catch such requests before they leave the system, by monitoring resource consumption at a very fine temporal and spatial granularity. **Resource Independence.** ADoS attacks may target arbitrary system-level resources (CPU, memory, storage, or networking), and may even target multiple resources at once (i.e., multi-vector attacks). A desirable solution needs to be agnostic to the resource and able to handle any instance of inordinate consumption. **Cross-component Tracking.** Given the complex structure of modern applications, ADoS attacks can also cross the boundaries of the application's internal components or processing phases. For instance, if a request causes the triggering of a timeout to an event queue, resources consumed by the initial request parsing and by the timeout should both be attributed to the same request. **Language Independence.** Applications today combine several ready-made libraries, which are written in multiple programming languages and often available only as compiled binaries. Thus, DoS detection should remain agnostic to application details such as the programming language, language runtime, and broader ecosystem (e.g., packages, modules). **Minimal Developer Effort.** Detection needs to impose minimal burden on developers and devops, who should benefit from DoS mitigation without having to study the application internals. Rather than presenting developers with an overabundance of configuration knobs, a DoS detection system should direct precious human labor at sprinkling applications with key semantic information utilized at runtime. ### 3 Threat Model To be more concrete, FINELAME assumes the following about the attacker and the broader environment.
**Threats.** We consider a powerful remote attacker that (i) can send arbitrary requests to a service hosting a vulnerable application, (ii) has control over potentially all of the application's legitimate clients, and (iii) is aware of the application's structure and vulnerabilities, including exploits in its dependency tree. We do not distinguish between legitimate and malicious clients: clients may intersperse harmful requests that attack resources with one or more benign requests. Specifically, any subset of hosts can send any number of requests that may or may not attack any subset of resources. We do not limit resources of interest to CPU; attackers can target memory, file descriptors, or any other limited resource in the host system. That means that attacks can take the form of a single client attempting to consume 100% of the CPU indefinitely, or of multiple attacks from multiple clients over many of the system's resources. **Assumptions.** We assume (i) vulnerable but not actively malicious code, and (ii) that FINELAME sees at least some benign traffic. If all traffic is malicious from the beginning, in-flight detection and mitigation become less urgent, as anomalies become the norm, and the application owners should first revise their deployment pipeline. We also assume that the resource utilization of request processing can be attributed to a single request by the end of each processing phase, even if the processing is split into multiple phases across different application components. As keeping a reference to the originating request is a natural design pattern, in all of the services we tested, a unique identifier was already available; in cases where there is no such identifier, one must be added, and we detail how to do so in section 4.

---

\(^1\) For example, the first layer consists of 10 elements of the second layer, each of which consists of 10 elements of the third layer, and so on.
4 FINELAME Design

Figure 2 depicts the overall design of FINELAME. Conceptually, FINELAME consists of three main components:

- **Programmer annotations** that mark when a request is being processed. FINELAME requires only a few annotations, even for complex applications, to properly attribute resource utilization to requests.
- **Fine-grained resource monitors** that track the resource utilization of in-flight requests at the granularity of context switches, mallocs, and page faults.
- **A cross-layer anomaly detection model** that learns legitimate behavior and detects attacks as soon as they deviate from it.

Programmers use FINELAME by annotating their application with what we call request-mappers. These annotations delineate, for each component and processing phase, the start and end of processing, as well as the request to which resource utilization should be attributed. For example, in an event-driven framework, the beginning and end of each iteration of the event-handler loop should be marked as the start and end of a request's processing, respectively. At runtime, when FINELAME is installed on the host environment, it attaches small, low-overhead resource monitors to particular points in the application or operating system. The aforementioned request-mappers enable FINELAME to determine the request to which the resources consumed by a thread or process should be credited. In section 5, we detail our out-of-the-box FINELAME library of request-mappers and resource monitors for several popular cloud frameworks. Our library tracks the utilization of a range of key OS-level resources; however, programmers can further extend it with user-level resource monitors to track application-specific resources (e.g., the occupancy of a hash table). Finally, FINELAME's monitoring data is used to perform lightweight, inline anomaly detection.
Resource monitors first feed data to a machine learning model training framework that computes a fingerprint of legitimate behavior. Parameters of the trained model are installed directly into the resource monitors, which evaluate an approximation of the model to automatically detect anomalous behavior on the fly. The end result is a system for high-accuracy, fine-grained, and general ADoS attack detection. 4.1 Request-mapping in FINELAME Conceptually, there are three operations in request mapping:

- **startProcessing()**: This annotation denotes the start of a processing phase. Any resource utilization or allocation after this point is attributed to a new unique request.
- **attributeRequest(reqId)**: As soon as we can determine a unique and consistent request identifier, we map the current processing phase to that request. For instance, when reading packets from a queue, if the best consistent identifier for a packet is its 5-tuple, resource tracking would start as soon as the packet is dequeued, but would only be attributed to a consistent request ID after Layer-3 and Layer-4 processing are completed. In general, attributeRequest(reqId) is called directly after startProcessing(), and depending on the specifics of the application, the two can sometimes be merged (§5).
- **endProcessing()**: Finally, this operation denotes the completion of processing, indicating that subsequent utilization should not be attributed to the current request.

In order for the resource monitors to properly attribute utilization to a request, FINELAME requires programmers to annotate their applications using the above three request-mapping operations. Ideally, the annotations should cover as much of the code base as possible; however, not all resource utilization can be attributed to a single request.
In such cases, programmers have flexibility in how they perform the mapping: for true application overhead—rather than request-processing overhead—utilization can remain unattributed, and for shared overhead (e.g., garbage collection), utilization can be partitioned or otherwise assigned stochastically. Every request is given an identifier that must be both unique and consistent across application components and processing phases. This identifier is used to maintain an internal mapping between the OS entity (process or thread) and the request. Example identifiers include the address of the object representing the request in the application, a request ID generated by some application-level tracing solution [7, 20, 29, 34, 45, 49, 55], or a location in memory if the request is only processed once. From the moment a startProcessing annotation is called to the moment the endProcessing annotation is called, FINELAME associates all the resources consumed by the OS entity with the request. An optimization of this technique can be implemented when the application lends itself naturally to such a mapping between OS entity and request. For instance, event-driven frameworks or thread-pool-based services usually have a single or a small number of entry points for requests, to which FINELAME can readily attach request-mappers via eBPF without source-code modification. We found this optimization to be the common case, and FINELAME does not require any modification to the applications we explore in section 6. 4.2 Resource Monitoring in FINELAME Resource tracking between startProcessing and endProcessing annotations is done via a recent Linux kernel feature called eBPF. We first provide some background on the operation of eBPF, and then discuss how we utilize it to perform extremely fine-grained resource monitoring of in-flight requests. 4.2.1 Background on eBPF The original Berkeley Packet Filter (BPF) [35] has been a long-time component of the Linux kernel networking subsystem.
It is a virtual machine interpreting a simple language traditionally used for filtering data generated by kernel events. Notable use cases are network packet parsing with Tcpdump [56] and filtering access to system calls in the seccomp facility. In version 3.0, a just-in-time compiler was added, allowing for a considerable speedup of the processing of BPF programs by optimizing them on the fly. In version 3.15, Alexei Starovoitov significantly extended BPF (dubbing the new system “eBPF”). The new version has access to more registers and an instruction set mimicking a native RISC ISA, can call a restricted subset of kernel functions, and can share data between kernel space and user space through hash-like data structures. While eBPF is a low-level language, users can write programs in higher-level languages such as C (and even Python with the BCC project [2]) and generate eBPF code with compilers such as GCC and LLVM. Generated programs are verified before being accepted into the kernel. The verifier imposes a set of strict constraints on eBPF programs to guarantee the safety of the kernel. Common constraints include the absence of floating-point instructions, a limit of 4096 instructions per program, a stack size capped at 512 bytes, no signed division, and the prohibition of back edges in the program's control-flow graph (i.e., no loops). The ability of eBPF programs to be attached to both kernel- and user-space functions and events, their extremely low overhead, and their ability to share data with user space without the need for any IPC or queuing mechanism make eBPF a prime candidate for implementing resource monitors in FINELAME. 4.2.2 Resource Monitor Architecture FINELAME's resource monitors are attached to various user- and kernel-space data sources (e.g., the scheduler or TCP stack) and use the mapping described in section 4.1 to associate resource consumption with application-level workflows (e.g., HTTP requests).
A resource monitor requires the following information: the type and name of the data source, and potentially the path of its binary. Our current prototype of FINELAME uses the features listed in Table 1. When executed, most resource monitors operate under the following sequence of actions: i) verify whether a request mapping is active for the current PID, and exit if not; ii) collect the metric of interest (usually through the arguments of the function triggering it) and store it, time-stamped, in a shared data structure; and iii) perform anomaly detection on the request if the model's parameters are available (see section 4.3). The time a request spends executing instructions on a processor is represented by cputime. We instrument both the scheduler_tick() and the finish_task_switch() kernel functions, which are called at every timer interrupt and context switch, respectively, to either start a timer when a thread executing a registered request is scheduled for execution, or collect the amount of CPU time consumed by the task swapped out. We instrument tcp_sendmsg() and tcp_cleanup_rbuf() to collect tcp_sent and tcp_rcvd, the number of bytes sent and received on a TCP connection, respectively. To compute tcp_idle_time, which represents the period of inactivity from the sender on a TCP connection, we measure the time elapsed between two occurrences of tcp_cleanup_rbuf(). To monitor the heap memory consumption occasioned by the processing of a request, we monitor the glibc malloc function. Applications where memory management is partly handled by the runtime (such as in Python) can be monitored in a similar fashion. Likewise, the model can be generalized to garbage-collected languages. Finally, we monitor page-fault events in the application by attaching a resource monitor to the exceptions:page_fault_user kernel tracepoint.
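The cputime accounting described above can be illustrated with a small, purely user-space simulation of our own (timestamps are supplied explicitly here; the real monitors read the clock inside the scheduler probes):

```python
# Toy model of the cputime monitor: start a timer when a mapped task is
# switched in, and charge the elapsed on-CPU time to its request when the
# task is switched out. Timestamps are in nanoseconds.
pid_to_rid = {1001: "req-A", 1002: "req-B"}
on_cpu_since = {}          # pid -> timestamp of last switch-in
cputime = {}               # rid -> accumulated on-CPU nanoseconds

def switch_in(pid, ts):
    if pid in pid_to_rid:                  # only track mapped tasks
        on_cpu_since[pid] = ts

def switch_out(pid, ts):
    start = on_cpu_since.pop(pid, None)
    if start is not None:
        rid = pid_to_rid[pid]
        cputime[rid] = cputime.get(rid, 0) + (ts - start)

# Simulated schedule: A runs 0-5 ms, B runs 5-7 ms, A runs again 7-10 ms.
switch_in(1001, 0);          switch_out(1001, 5_000_000)
switch_in(1002, 5_000_000);  switch_out(1002, 7_000_000)
switch_in(1001, 7_000_000);  switch_out(1001, 10_000_000)
assert cputime == {"req-A": 8_000_000, "req-B": 2_000_000}
```

Because accounting happens at every context switch, a request's consumption is visible while it is still in flight, rather than only after it completes.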
We observed in our evaluation that CPU time was the best discriminant for CPU-based attacks, while connection idle time was the best for slow attacks (such as Slowloris and RUDY). The above default, general-purpose resource monitors in FINELAME are sufficient for a large set of existing applications; however, they can be extended to all the kernel events available for tracing and probing, as well as to user-level functions (to monitor application-level metrics). If any application-level metrics are required (such as data-structure occupancy, counters, and so on), programmers can augment our resource monitors with custom eBPF programs attached to arbitrary probe points in either kernel or user space. ### 4.3 Attack Detection in FINELAME **Detection algorithm.** For fast detection, FINELAME is designed to enable anomaly detection as close as possible to the resource allocation mechanism. Without in-flight anomaly detection to complement in-flight resource tracking, detection and mitigation of in-flight requests would not be possible. This detection problem can be reduced to quantifying the abnormality of a vector in n-dimensional space. Once a sufficient amount of data has been gathered to compute a fingerprint of the legitimate requests' behavior, we can train an anomaly detection model. The model can span all the metrics collected by the resource monitors, allowing us to detect abuse of any of the resources of the system as well as cross-resource (multi-vector) attacks. For the unsupervised version of this problem, the most popular methods take one of two approaches: distance-based or prediction-based. The former family of models aims to cluster known, legitimate data points and compute the distance of new data points to those clusters—a distance that is used to quantify the anomaly. The latter family assumes the existence of a set of input data points that are correct, and learns a function representing those points.
When a new point enters the system, the model computes the value of the learned function; the prediction error is then used to quantify the degree of anomaly of the new point.

<table> <thead> <tr> <th>Name</th> <th>Description</th> <th>Event</th> <th>Type</th> </tr> </thead> <tbody> <tr> <td>tcp_idle_time</td> <td>Inactivity time on a TCP connection</td> <td>tcp_cleanup_rbuf</td> <td>kernel probe</td> </tr> <tr> <td>tcp_sent</td> <td>Bytes sent through TCP connections</td> <td>tcp_sendmsg</td> <td>kernel probe</td> </tr> <tr> <td>tcp_rcvd</td> <td>Bytes received through TCP connections</td> <td>tcp_cleanup_rbuf</td> <td>kernel probe</td> </tr> <tr> <td>cputime</td> <td>Amount of CPU time consumed</td> <td>scheduler_tick, finish_task_switch</td> <td>kernel probe</td> </tr> <tr> <td>malloc_memory</td> <td>Bytes allocated through the malloc function</td> <td>glibc_malloc</td> <td>user probe</td> </tr> <tr> <td>page_faults</td> <td>Number of page faults events</td> <td>exceptions:page_fault_user</td> <td>kernel tracepoint</td> </tr> </tbody> </table>

Tab. 1: Default resource monitors in FINELAME.
```
Required data structures:
  FPAS          # fixed-point arithmetic scaling factor
  pid_to_rid    # OS carrier to request
  req_points    # request profiles
  model_params  # K-means parameters
  dp_dists      # distances to centroids
  thresholds    # alert cut-off bars

fun resource_monitor(context):
  rid = pid_to_rid.get(pid)
  if (rid):
    ts = get_timestamp()
    metric = context.get_arguments()
    dp = req_points.get(rid)
    if (dp):
      dp.update(metric, ts)
    else:
      dp = init_dp(rid, metric, ts)
      req_points.insert(dp)
    mu, sigma = model_params.get()
    if (mu && sigma):
      metric_scaled = metric << FPAS
      metric_scaled -= mu
      if (metric_scaled < 0):
        # no signed division in eBPF:
        # divide the magnitude, restore the sign
        metric_scaled *= -1
        metric_scaled /= sigma
        metric_scaled *= -1
      else:
        metric_scaled /= sigma
      min_dist, closest_k = INF, -1
      #pragma loop unroll
      for k in K:
        current_dist = dp_dists.get(dp, k)
        new_dist = metric_scaled + current_dist
        dp_dists.update(dp, k, new_dist)
        if (new_dist < min_dist):
          min_dist = new_dist
          closest_k = k
      t = thresholds.get(closest_k)
      if (min_dist > t):
        report(rid, dp, min_dist)
```

Fig. 3: FINELAME anomaly detection. Pseudocode for FINELAME's inline anomaly detection.

Because of the training complexity, prediction complexity, and required training data, many existing solutions in both distance-based and prediction-based categories are impractical to execute at fine granularity. For instance, the popular algorithm DBSCAN [18] is not suitable for FINELAME, as it requires us to evaluate the distance of new data points to all the possible “core” data points in the model. The number of data points considered (and therefore the size of the model) is usually linearly proportional to the size of the training set. Some accurate approximations of DBSCAN have been proposed [22], but even with a small number of clusters, almost all of the training dataset still needs to be part of the model. Likewise, the performance of prediction-based models based on neural networks, such as Kitsune [38], is highly dependent on the depth and width of the model.
The number of parameters of such networks grows rapidly with the number and size of the hidden layers. Given the above concerns, we chose to implement anomaly detection in FINELAME with K-means, a technique that allows us to summarize the fingerprint of legitimate requests with a small amount of data. In K-means, the objective function seeks to minimize the distance between points in each cluster. The model parameters are then the centroids and distribution of the trained clusters. In a typical use-case scenario, FINELAME is configured to perform only request monitoring for a certain amount of time, after which it trains K-means on the monitoring data gathered in user space from the resource monitors' shared maps. In practice, we found that a K value equal to the number of request types in the application yields a reasonable estimation of the different behaviors adopted by legitimate requests, while being low enough to contain FINELAME's overhead. **Model training and deployment.** Gathering the training data is done by a simple look-up from the user-space agent into the shared eBPF maps holding the requests' resource consumption data. Using those profiles, the user-space agent standardizes the data (centering to 0 and scaling to unit standard deviation). Subsequently, the agent trains K-means to generate a set of centroids representing the fingerprint of the good traffic. The parameters of the model, to be shared with the performance monitors, are then the cluster centroids, as well as the mean µ and standard deviation σ of each feature in the dataset, and a threshold value τ statistically determined for each cluster. As described above, the performance monitors have limited computing abilities and do not have access to floating-point instructions. Thus, they are designed to perform fixed-point arithmetic in a configurable shifted space, and require FINELAME to shift the model parameters into this space before sharing them.
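To make the training step concrete, the following pure-Python sketch (our own simplification, not the FINELAME agent) standardizes a toy set of request profiles, runs a tiny Lloyd's-algorithm K-means with K equal to the number of request types, and derives the per-cluster parameters described above; all data values are invented for illustration.

```python
import statistics

def kmeans(points, k, iters=20):
    """Tiny Lloyd's-algorithm K-means over tuples, using L1 distance."""
    centroids = list(points[:k])                 # naive init: first k points
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k),
                    key=lambda c: sum(abs(a - b) for a, b in zip(p, centroids[c])))
            clusters[j].append(p)
        centroids = [tuple(sum(xs) / len(xs) for xs in zip(*cl)) if cl else centroids[j]
                     for j, cl in enumerate(clusters)]
    return centroids, clusters

# Two synthetic request profiles: (cputime_ms, tcp_rcvd_kb).
train = [(1.0, 2.0), (1.2, 2.1), (0.9, 1.9),     # "static file" behavior
         (5.0, 0.5), (5.2, 0.4), (4.8, 0.6)]     # "POST parsing" behavior

# Standardize features, as the user-space agent does before training.
mu = [statistics.fmean(col) for col in zip(*train)]
sigma = [statistics.pstdev(col) for col in zip(*train)]
std = [tuple((x - m) / s for x, m, s in zip(p, mu, sigma)) for p in train]

centroids, clusters = kmeans(std, k=2)           # K = number of request types
# Per-cluster threshold tau: L1 distance of the outermost training point.
thresholds = [max(sum(abs(a - b) for a, b in zip(p, c)) for p in cl) if cl else 0.0
              for c, cl in zip(centroids, clusters)]
assert len(centroids) == 2 and all(t >= 0 for t in thresholds)
```

The shared model parameters are exactly `centroids`, `mu`, `sigma`, and `thresholds`; everything the in-kernel monitors need fits in a handful of map entries, independent of training-set size.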
Using two precision parameters \(a\) and \(b\), each datapoint is transposed into a larger space \(10^a\) and normalized such that the resulting value lies in an intermediate space \(10^{a-b}\), retaining a precision of \(a-b\) digits. This means that during the normalization operation each parameter value \(x\) undergoes the following transformation: \(x' = \frac{x \times 10^a - \mu \times 10^a}{\sigma \times 10^b}\). Once standardized, the clusters' centroids as well as each feature's mean and standard deviation are shared with the resource monitors through eBPF maps. Upon availability of those parameters, the resource monitors update not only the resource consumption of existing requests, but also their outlier scores, a measure we use to quantify the degree of anomaly of a request. Due to the constraints imposed on eBPF programs—specifically, taking a square root is complex as we do not have access to loops—we choose the normalized L1 distance to the closest cluster as the outlier score. While a crude measure, the L1 norm is equivalent to more complex norms, as resource vectors are of finite dimension. It preserves information about which resource is abused, and it lets us set statistical thresholds to determine the cut-off points used for flagging abnormal requests. The algorithm for this entire process is shown in Figure 3. Finally, we note that because FINELAME is primarily designed for the detection of resource exhaustion attacks, we allow the anomaly detection engine to maintain signed values for outlier scores. This means that requests that have not reached their expected legitimate amounts of resource consumption, and that would look abnormal in an absolute-value setting, are not flagged as such. This also highlights the fact that FINELAME is not geared toward volumetric attacks that aim to bring the system down with a vast number of low-consumption requests.
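The fixed-point transformation can be sketched as follows; the precision parameters `A` and `B` and the sample values are assumptions for illustration, and the signed-division workaround mirrors the eBPF constraint mentioned in §4.2.1:

```python
# Integer-only standardization as the resource monitors must perform it
# (eBPF forbids floating-point instructions). With precision parameters a
# and b, a raw value x is lifted into the 10^a space and normalized into
# the 10^(a-b) space, retaining a-b digits of precision.
A, B = 6, 3                              # example precision parameters

def fixed_point_standardize(x, mu, sigma):
    x_scaled = x * 10**A
    mu_scaled = int(mu * 10**A)          # model parameters pre-shifted in user space
    sigma_scaled = int(sigma * 10**B)
    diff = x_scaled - mu_scaled
    # eBPF has no signed division: divide the magnitude, then restore the sign.
    if diff < 0:
        return -((-diff) // sigma_scaled)
    return diff // sigma_scaled

mu, sigma = 4.0, 2.0
z_fixed = fixed_point_standardize(7, mu, sigma)            # float z-score: 1.5
assert z_fixed == int(((7 - mu) / sigma) * 10**(A - B))    # 1500 in 10^(A-B) space
z_neg = fixed_point_standardize(1, mu, sigma)              # float z-score: -1.5
assert z_neg == -1500                                      # sign preserved
```

The preserved sign is what lets the engine keep signed outlier scores, so that under-consuming requests are not flagged.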
## 5 Use Cases and Implementation

To demonstrate the generality of FINELAME and the minimal developer effort required to use it, we apply FINELAME to three web platforms: Apache [1], which is estimated to serve ~40% of all active webpages; Node.js [4], a popular server-side JavaScript-based web server; and DeDoS [15], an open-source component-based framework for building web services. Our prototype of FINELAME is available at [https://github.com/maxdml/Finelame](https://github.com/maxdml/Finelame). Table 2 quantifies the programming effort required to write request-mappers for those three applications.

<table> <thead> <tr> <th>Application</th> <th>Request mapping probes</th> <th>SLOC</th> </tr> </thead> <tbody> <tr> <td>Apache</td> <td>5</td> <td>41</td> </tr> <tr> <td>Node.js</td> <td>9</td> <td>64</td> </tr> <tr> <td>DeDoS</td> <td>2</td> <td>21</td> </tr> </tbody> </table>

Tab. 2: Intrusiveness of FINELAME, quantified.

**Apache web server.** Primarily written in C, Apache's request processing is implemented by Multi-Processing Modules (MPM). In the latest versions of Apache (2.x), requests are served by multiple processes, each of which can itself have multiple worker threads; each thread handles one connection at a time. When a request enters the system, an application-level (conn) object is created by the core_create_conn function to contain it before the request is dispatched to a worker thread. Subsequently, the request is processed by either the ap_process_http_sync_connection or the ap_process_http_async_connection function, which takes the conn object as argument. From FINELAME, we attach one request-mapper to core_create_conn, and two request-mappers to the HTTP processing functions: one over a uprobe called upon entering the function, the other over a uretprobe called when returning from it.
We exploit the conn object to generate a unique identifier for each request and map it to the underlying worker thread, so that resource monitors can later gather resource-consumption data on the request's behalf. The mapping is undone when the function returns and the request exits the system. When a worker thread executes a new request, the request-mapper updates the mapping with the new request's ID. This solution requires no modification to the Apache source code, and 41 lines of eBPF code over 5 probes. Node.js required slightly more instrumentation due to its asynchronous model, which offloads work to a worker pool (implemented with libuv [30]). The instrumentation required eBPF probes to be attached to seven user-space functions within the libuv library. As in Apache, we found a data structure—struct uv_stream_t—that could (i) be used to generate a unique identifier, and (ii) was carried consistently across the disparate components of the framework. Request-mappers were applied to the seven libuv functions as follows:

- `uv_accept`: a new request is initialized, and is associated with the `uv_stream_t` structure that handles communication with the client.
- `uv__read` and `uv__write`: the request associated with the client's stream is assigned to the current thread for the duration of the function.
- `uv__work_submit`: the request assigned to the current thread is associated with a work-request submitted to the worker pool.
- `uv__fs_work` and `uv__fs_done`: the request associated with the work-request is assigned to the current (worker) thread.
- `uv_async_send`: the request is unassigned from the current thread.

Again, this solution requires no changes to the Node.js source code, only knowledge of which functions process requests. The request-mappers totaled 64 lines of eBPF code.
**DeDoS** is an event-driven framework where programmers write and deploy their application as software components that are automatically allocated and deallocated based on demand. Each of those components monitors a local event queue from which new requests are consumed. Unifying the disparate components is a generic event-handling function (receive()). Programmers implement their component's functionality inside this event-handling function. DeDoS provides request tracing and explicitly tracks the passing of requests between components. We chose DeDoS as a proof-of-concept proxy for micro-service, event-driven applications providing request-tracing capabilities. In these types of applications, annotation is simple, as FINELAME can maintain a direct mapping between the application-level unique request identifier and the event handler's thread PID in order to track resource consumption across component boundaries. FINELAME instruments only the receive() function with request-mappers, and does not require modifications to the framework. The request-mappers require 21 lines of eBPF code.

### 6 Evaluation

In this section, we present our evaluation results for FINELAME. Our evaluation is centered around the following aspects of the system:

- **Overhead.** The overhead of FINELAME compared to no monitoring, or to in-application instrumentation.
- **Accuracy.** The ability of FINELAME to accurately detect real attacks not previously seen by the application.

#### 6.1 Experimental setup

We present the setup on which we evaluate both the overhead and accuracy aspects of FINELAME. In all cases, the server applications run on a 12-core Xeon Silver 4114 at 2.20 GHz, while our legitimate and attack clients run on an Intel Xeon E5-2630L v3 at 1.80 GHz. Both server and client machines have a total of 62 GB of RAM, and have hyper-threading and DVFS disabled. We use version 2.4.38 of Apache, and configure it to use 50 worker threads.
We use version 12.0.0-pre of Node.js with the default configuration of 4 worker threads for libuv. Both Apache and Node.js are configured to serve a set of Wikipedia [59] pages. Node.js parses a regular expression provided in the request's URI to find the path of the file to serve. Its parser, `liburi`, is vulnerable to the ReDoS attack. All the applications impose a timeout of 20 seconds on connections. We deploy a simple webserver in DeDoS which can process three types of requests: serve a Wikipedia article, process a randomly generated XML file uploaded in a POST request, and parse a regular expression. The server is decomposed into several software components: socket reading, HTTP parsing, file serving, XML parsing, regular-expression parsing, and response writing. The XML parser is implemented with libxml2, which is vulnerable to the Billion Laughs attack. Our good traffic is generated by Tsung [6] and evenly explores all of the servers' exposed endpoints; bad traffic is generated by an in-house C client for the ReDoS and Billion Laughs attacks, and by pyloris [23] for the Slowloris attack. Tsung generates load under an exponential distribution centered on a configurable mean, while our attack client is configured to send a fixed load.

### 6.2 Overhead of FINELAME

Figure 4 presents the overheads incurred by FINELAME's instrumentation on Apache, Node.js, and DeDoS. In all of our experimental setups, we evaluate the latency experienced by legitimate clients when the server is not instrumented, when it is instrumented by FINELAME, and when FINELAME's resource monitors also perform anomaly detection (FINELAME+). The load is as described in §6.1, and explores all the instrumented paths in the applications. We also measure the cost of instrumenting the DeDoS framework itself, to compare FINELAME's overheads with those of a traditional user-space solution.
The bars plot the median client latency, and all our experiments are run three times for a period of 100 seconds. In the case of Node.js, the instrumentation adds 8.55% overhead, and 9.21% with anomaly detection. In the case of Apache, FINELAME adds 11.38% and 11.72% overhead, respectively. In the case of DeDoS, the baseline latency is higher than with the two previous services, because the application is not only serving files but also parsing POST requests, and because the framework is less optimized than the battle-tested Apache and Node.js. Instrumenting the framework directly incurs an overhead of 2.9%, while FINELAME incurs 4.23%, or 6.3% when also performing anomaly detection. In general, we observe that the overheads incurred by FINELAME are higher when the baseline processing time of the service is low, and do not grow linearly with the complexity of the application. In addition, we found that performing anomaly detection on top of monitoring resource consumption comes almost for free.

### 6.3 Performance of FINELAME

Our performance evaluation of FINELAME is centered around its ability to detect attack requests before they exit the system, while providing accuracy competitive with non-approximated user-level algorithms.

### 6.3.1 Attacks

Our experiments aim to quantify the impact of attacks on quality of service. Consequently, we tune attack strength such that the attacks do not bring down the server but rather degrade the quality of service provided to legitimate users. **ReDoS:** This attack consists of specially crafted regular expressions which are sent to the server for processing. The strength of the attack grows exponentially with the number of malicious characters present in the expression.
Because the application processing units are busy handling those requests, legitimate requests get queued for a longer period of time, and end up being served more slowly.

**Billion Laughs:** The attack consists of XML files filled with several levels of nested entities. The parsing cost grows exponentially with the depth of the document. The impact is similar to the ReDoS attack.

**SlowLoris:** The attack consists in maintaining open connections to the server, keeping them alive by sending individual HTTP headers at a regular interval smaller than the server's timeout, but never completing the request—we assume that the attacker is able to probe the service and discover this timeout. As a result, the server's connection pool gets exhausted, and it cannot answer new requests. This technique can also implement a dormant attack which cripples the ability of the server to handle surges of legitimate traffic, by denying a fraction of the total connection pool.

### 6.3.2 Anomaly Detection Performance

**Evaluation metrics** As is common with anomaly detectors, the output of FINELAME is a score which quantifies the abnormality of a request. This score is then either used as a raw metric by mitigation algorithms, or compared against a threshold $\tau$ to be transformed into a binary variable where 0 means negative (no anomaly), and 1 means positive (attack). With $\tau$ set, and using our knowledge of the ground truth, we can classify each of the detector's outputs as a true/false positive/negative. The choice of $\tau$ is crucial, as too low a value can result in a large number of false positives, while too high a value can induce a large number of false negatives. For our experiments, we set $\tau$ to be the outermost point of each cluster in the training set, i.e., the most resource-consuming legitimate request seen so far for the cluster.
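The thresholding rule can be sketched as follows, assuming k-means-style centroids over per-request resource vectors. All names and the toy data are ours for illustration, not FINELAME's implementation.

```python
import math

def l2(a, b):
    """Euclidean (L2) distance between two resource vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def assign(p, centroids):
    """Assign a point to its nearest centroid under the L2 norm."""
    return min(range(len(centroids)), key=lambda c: l2(p, centroids[c]))

def fit_thresholds(centroids, training_points):
    """tau per cluster = distance of the outermost training point,
    i.e., the most resource-consuming legitimate request seen so far."""
    tau = {c: 0.0 for c in range(len(centroids))}
    for p in training_points:
        c = assign(p, centroids)
        tau[c] = max(tau[c], l2(p, centroids[c]))
    return tau

def is_attack(p, centroids, tau):
    """1 = positive (attack), 0 = negative (no anomaly)."""
    c = assign(p, centroids)
    return l2(p, centroids[c]) > tau[c]

# Toy per-request resource vectors: (cpu_ms, mem_kb).
centroids = [(1.0, 10.0), (5.0, 50.0)]
training = [(0.8, 9.0), (1.3, 11.0), (4.5, 48.0), (5.6, 53.0)]
tau = fit_thresholds(centroids, training)

legit = (1.1, 10.5)   # close to a centroid, within its tau
redos = (40.0, 12.0)  # CPU consumption far beyond anything in training
```

The anomaly score is simply the distance to the nearest centroid, so a request whose resource consumption exceeds everything seen during training is flagged regardless of which endpoint it targets.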
The challenge associated with deriving a large $\tau$ from the training traffic is that attacks can now take longer to detect—and might not be detected at all if they are too weak. This latter case does not concern us, because to bring down the system with weaker attacks, an attacker would be forced to change their method from asymmetric to volumetric. The benefit of a higher $\tau$ is that it helps decrease the False Positive Rate (FPR, $\frac{FP}{TN+FP}$), a desirable behavior for operators using the system. For our experiments, we present the True Positive Rate (TPR, $\frac{TP}{TP+FN}$) alongside the FPR.

**ReDoS:** In our first experiment, we attack Node.js with three strengths of ReDoS. In the first two experiments, the workload is made of 98% benign requests and 2% malicious regular expressions blocking the event loop of the server (about 500 and 10 r/s, respectively). In the third experiment, with the strongest attack, we reduce the attack rate to 1 r/s, such that the attack does not bring down the server. Legitimate requests are served in about 0.8ms on average under normal conditions, but get delayed in proportion to the intensity of the ReDoS requests when the attack starts. During the first attack, bad requests are served in 23ms on average, a 28.75× increase compared to normal requests. Good requests are also penalized and are served in about 4ms. During the second attack, bad requests are served in 45.6ms on average, a 57× increase compared to normal requests. Legitimate requests are affected and incur an average latency of 13.5ms. During the third attack, bad requests are served in 90.9ms on average, a 113.6× increase. Legitimate requests incur an average latency of 6ms. Due to its ability to credit requests' resource consumption at the granularity of context switches, in all three experiments, FINELAME is able to detect attack requests before they exit the system, at least 80.9% earlier for 50% of the bad traffic, and up to 95.3% earlier.
The user-space, non-approximated evaluation of k-means using the L2 norm for measuring distances performs only marginally better.

**Billion Laughs:** In this experiment, we attack DeDoS with two different strengths of Billion Laughs (XML bomb) requests. The good traffic follows a diurnal pattern, oscillating between 250 and 750 requests per second. Under normal conditions, legitimate requests are served in 6.87ms on average. In the first experiment, we send 15 malicious requests per second (about 2% of the peak legitimate traffic, and 6% of the lower phase), which are served in 29.28ms on average, a 4.26× increase in response time. In the second experiment, we decrease the number of bad requests to one per second (about 0.1% and 0.4% of the peak and low traffic, respectively), and increase their intensity such that they are served in 203ms on average (an order of magnitude more than in the first case), which represents a 29.55× increase in load compared to legitimate requests under normal conditions. For the weaker attack, FINELAME is able to detect malicious requests 78.83% faster than the user-space solution at least 50% of the time, and up to 97% faster for the strongest attack.

**SlowLoris:** In this experiment, we configure Apache to handle requests with 25 worker threads, and to time out on reading HTTP headers after 20 seconds. We configure the attack client to maintain 5 connections to the server opened at all times, refreshing them every 5 seconds. Effectively, this drives the tcp_idle_time of the malicious requests high and makes them stand out from the legitimate ones. This attack is "all or nothing", in the sense that it will not impact the legitimate requests until the connection pool gets exhausted. FINELAME is able to detect the abnormal idle time about 75% faster than the application \(((1 - \frac{5}{20}) \times 100 = 75\%)\), which would otherwise have to wait for the timeout before reporting the request.
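The 75% figure is simply the fraction of the timeout window saved by flagging the abnormal idle time at the first missed refresh rather than waiting for the server's own timeout to fire. A one-line helper (the function name is ours) makes the arithmetic explicit:

```python
def detection_advantage(refresh_interval_s: float, timeout_s: float) -> float:
    """Percentage of the timeout window saved by detecting the abnormal
    idle time at the attacker's refresh interval instead of waiting for
    the server's own header-read timeout."""
    return (1 - refresh_interval_s / timeout_s) * 100

# The attack client refreshes every 5s; Apache times out header reads
# after 20s, so detection happens (1 - 5/20) * 100 = 75% sooner.
advantage = detection_advantage(5, 20)
```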
### 7 Related Work

**Volumetric Attack Detection** There is a large body of work addressing volumetric DoS attacks [10, 26, 31, 40, 60–62], including attacks that target the network [27, 28, 54]. As described earlier (§1), these systems do not protect against asymmetric DoS attacks, a concern shared by both industry [32, 50] and academia [13, 14, 51].

**Application-based Detection** Prior works on application-layer DoS detection either depend heavily on repeated outliers, or are often deeply tied to a specific application. Techniques include comparing the entropy of offending and legitimate traffic [39, 63], sampling traffic flows [25], and sketch-based feature-dimensionality reduction [58]. While these techniques work well for volumetric attacks, they have self-acknowledged limitations when the attack traffic is low—the primary focus of this paper.

DSHIELD [44] is a system that assigns "suspicion scores" to user sessions based on their distance from legitimate sessions. While similar in nature to FINELAME's anomaly detection technique, it relies on the operator knowing all the possible classes of requests that the server can process. FINELAME's anomaly detection engine learns on legitimate requests, so that it does not depend on *a priori* knowledge of execution paths or vulnerabilities.

BreakApp [57] is a module-level compartmentalization system that attempts to defend against DoS attacks, among other threats stemming from third-party modules. While BreakApp's capabilities increase with more and smaller modules, FINELAME works even with monolithic applications entirely developed as a single module. BreakApp's mitigation uses simple queue metrics (i.e., queue length at the module boundary vs. replica budget), whose cut-off parameters are statically provided by the programmer; FINELAME uses a more advanced learning model, whose parameters are adjusted at runtime.

Rampart [36] focuses on asymmetric application-level CPU attacks in the context of PHP.
It estimates the distribution of a PHP application function's CPU consumption, and periodically evaluates running requests to assess the likelihood that they are malicious. It then builds filters to probabilistically drop offenders—repeated offenders increase their probability of being filtered out. While FINELAME also profiles the resource consumption of legitimate requests, it is not limited to CPU-based attacks, and it works with applications whose components are built in many different languages.

**In-kernel Detection** Recent work has shown good results for mitigating ADoS attacks by exploiting low-level system metrics. Radmin [16] and its successor Cogo [17] train Probabilistic Finite Automata (PFAs) offline for each resource of a process they want to monitor, then perform anomaly detection by evaluating how likely the process' transition in the resource space is. Training the PFAs requires days in Radmin, and minutes in Cogo, while FINELAME can train accurate models in seconds or hundreds of microseconds. We expect this capability to be helpful in production systems where the model has to be updated, *e.g.*, to account for changes in an application's components. In addition, Cogo reports detection times on the order of seconds, while FINELAME's inline detection operates at the scale of the request's complexity—milliseconds in our experiments. Lastly, Radmin/Cogo operate at the granularity of processes/connections. FINELAME assumes a worst-case threat model where malicious requests are sent sporadically by compromised clients, and thus operates at the granularity of individual requests. Per-request detection has the added benefit of enabling precise root-cause analysis, further enhancing the practicality of FINELAME.

**Programmer Annotations** Prior work proposes an annotation toolkit that programmers can use in their code to specify resource consumption requirements [42].
The framework detects connections that violate the provided specification (and then attempts to mitigate by rate limiting or dropping them). Unfortunately, it requires knowledge of the application internals. Worse, it expects developers to understand the program's expected resource consumption quite accurately. Moreover, such a hard cut-off does not distinguish between occasional consumption that is slightly above the limits and true attackers.

**Prevention-as-a-Service** A recent line of work proposes "attack prevention as a service", where security appliances are automatically provisioned at strategic locations in the network [19, 37]. Those techniques largely depend on attack detection (to which they do not provide a solution), and are thus orthogonal to our platform, which operates directly at the victim's endpoint.

**Performance anomaly detection** ADoS attacks are a subset of the broader topic of performance degradation, a topic that has been extensively studied. Magpie [9] instruments an application to collect events from the entire stack and obtain request profiles post-mortem. X-Trace [21] is a tracing framework that preserves causal relationships between events, and allows the offline reconstruction of request trees. X-ray [8] builds on taint tracking to provide a record-and-replay system that summarizes the performance of application events offline. One of FINELAME's key differences from those systems is its lightweight in-flight profiling technique, which allows us to perform anomaly detection while the request is still in the system. Retro [33] provides a tracing architecture for multi-tenant systems that enables the implementation of resource management policies. While its architecture is similar to FINELAME's, its focus is on performance degradation caused by competing workloads, rather than the detection of degradation within a single application.
While the impact can be similar, we note that for ADoS attacks, in-flight request tracking is critical to timely detection and mitigation.

## 8 Conclusion

In this paper, we describe and evaluate FINELAME, a novel fine-grained application-level DoS detection framework. FINELAME is designed for interaction with modern distributed applications, operates orders of magnitude faster than previous techniques, and is able to detect yet-unseen attacks on an application. FINELAME is enabled by recent advances in the Linux kernel, and bridges the gap between application-layer semantics and low-level resource allocation sub-systems. It is a first step toward deploying complex machine learning applications for fine-grained services, in an era where the size of services is shrinking (micro/pico-services).

## 9 Acknowledgments

We would like to thank our shepherd, Mike Reiter, and the anonymous ATC reviewers for their useful feedback. This material is based upon work supported in parts by the Defense Advanced Research Projects Agency (DARPA) under Contracts No. HR0011-16-C-0056 and No. HR001117C0047, and NSF grants CNS-1513687, CNS-1513679, CNS-1563873, CNS-1703936 and CNS-1750158. Any opinions, findings and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of DARPA or NSF.

## References

[30] libuv: a multi-platform support library with a focus on asynchronous I/O.

[56] Van Jacobson, Sally Floyd, Vern Paxson, and Steven McCanne. tcpdump, a command-line packet analyzer.

[59] Wikipedia, the free encyclopedia.
BWoS: Formally Verified Block-based Work Stealing for Parallel Processing

Jiawei Wang\(^{1,2,3}\), Bohdan Trach\(^{1,2}\), Ming Fu\(^{1,2,*}\), Diogo Behrens\(^{1,2}\), Jonathan Schwender\(^{1,2}\), Yutao Liu\(^{1,2}\), Jitang Lei\(^{1,2}\), Viktor Vafeiadis\(^{4}\), Hermann Härtig\(^{3}\), and Haibo Chen\(^{2,5}\)

\(^1\)Huawei Dresden Research Center \quad \(^2\)Huawei Central Software Institute \quad \(^3\)Technische Universität Dresden \quad \(^4\)Max Planck Institute for Software Systems \quad \(^5\)Shanghai Jiao Tong University

Abstract

Work stealing is a widely-used scheduling technique for parallel processing on multicore. Each core owns a queue of tasks and avoids idling by stealing tasks from other queues. Prior work mostly focuses on balancing workload among cores, disregarding whether stealing may adversely impact the owner's performance or hinder synchronization optimizations. Real-world industrial runtimes for parallel processing heavily rely on work-stealing queues for scalability, and such queues can become bottlenecks to their performance. We present Block-based Work Stealing (BWoS), a novel and pragmatic design that splits per-core queues into multiple blocks. Thieves and owners rarely operate on the same blocks, greatly removing interference and enabling aggressive optimizations on the owner's synchronization with thieves. Furthermore, BWoS enables a novel probabilistic stealing policy that guarantees thieves steal from longer queues with higher probability. In our evaluation, using BWoS improves performance by up to 1.25x in the Renaissance macrobenchmark when applied to Java G1GC, provides an average 1.26x speedup in JSON processing when applied to the Go runtime, and improves the maximum throughput of the Hyper HTTP server by 1.12x when applied to the Rust Tokio runtime. In microbenchmarks, it provides 8-11x better performance than state-of-the-art designs. We have formally verified and optimized BWoS on weak memory models with a model-checking-based framework.

1 Introduction

Many language runtimes and similar systems (e.g., JVM [104], Go [36], Rust's Tokio [38]) divide their work into smaller units called tasks, which are executed asynchronously on multiple cores and whose execution can generate further tasks. To achieve good performance, the task scheduler has to ensure a good workload distribution (preventing idle cores while there are pending tasks) with a low scheduling overhead. Achieving these goals, however, is non-trivial. Storing the tasks in a single queue shared by all cores achieves optimal workload distribution, but incurs a huge overhead due to contention. Using per-core task queues minimizes the overhead per operation, but can easily lead to a skewed workload distribution, with some cores remaining idle while others have queued work. Work stealing [51] is a trade-off between these two extremes: each core owns a queue (owner) and acts as both the producer and the consumer of its own queue to put and get tasks. When a core completes its tasks and its queue is empty, it steals a task from the queue of another processing core to avoid idling (thief). A number of stealing policies [69, 76, 77, 83, 88, 100] have been proposed to choose the proper queue (victim) to steal from, which can bring significant speedups depending on the use case.
Due to these features, work stealing is widely used in parallel computing [22, 35, 56, 64, 65, 85, 93, 97], parallel garbage collection [60, 68, 69, 96, 101], GPU environments [52, 54, 99, 102, 103], programming language runtimes [26, 36, 38, 50, 63, 80, 81], networking [86] and real-time systems [82]. However, as parallel processing is applied to more workloads, current implementations of work stealing become a bottleneck, especially for small tasks. For example, web frameworks running over lightweight threading abstractions, such as Rust's Tokio and Go's goroutines, often contain many very small tasks, leading the Tokio runtime authors to observe that "the overhead from contending on the queue becomes significant" and even affects the end-to-end performance [28]. Similarly, high-performance garbage collectors, such as Java G1GC [13], rely on work stealing for parallelizing massive mark/sweep operations, which comprise only a few instructions. The work stealing overhead becomes a performance bottleneck for GC [68, 69, 96, 101]. As a third example, in Fig. 1, we profile the GoJson object decoding benchmark, which uses goroutines both for GC workers and for parsing complex objects. Only 51% of all CPU cycles constitute the useful workload (JSON decoding). The remaining cycles are spent on the runtime code, including 7% on lightweight thread scheduling, 20% on GC, 5% on kernel code idling the CPUs, etc. As both the scheduler and the GC code rely on work stealing, improving its performance can result in massive efficiency gains. Furthermore, the benefit is not limited to the above-mentioned scenarios, but extends to all scenarios that process fine-grained tasks in parallel. Thus, we ask the following question: How can we improve the performance of work-stealing queues for fine-grained tasks, to the benefit of a large range of common applications?

Existing work-stealing queues suffer from four main sources of inefficiency:

P1: Synchronization overhead.
Due to the possibility of a steal, local queues must use stronger atomic primitives (e.g., atomic compare-and-swap and memory barriers) than a purely sequential queue. Queues with a FIFO policy are generally implemented as single-producer multiple-consumer (SPMC) queues [8, 17, 39, 47], thereby treating steal similarly to get, and thus distributing the costs of stealing equally between the owner and thieves. This also applies to the existing block-based queues [106], which lack any work-stealing-specific optimizations for achieving high performance in the presence of thieves (§6.2, §7). Queues with a LIFO policy, such as the well-known and widely-used ABP queue [35, 48, 104], suffer from memory barrier overhead [83, 98] to avoid conflicts between the owner and thieves, even when they operate on different tasks.

P2: Thief-induced cache misses. Since steals update the metadata shared between the owner and the thieves, they cause cache misses on subsequent accesses to the queue by its owner. This problem is especially apparent on unbalanced workloads, which feature high steal rates—for example, in the JVM Renaissance benchmarks [95], 10% of all items are stolen on average. Although strategies such as batching (e.g., steal-half [66]) can reduce the frequency of steals, they often cause overstealing, which introduces additional overhead (§2.1.3).

P3: Victim selection. To improve the workload distribution, advanced policies for selecting the victim queue to steal from require scanning the metadata of several queues, e.g., to find the longest queue. Doing so, however, causes contention for the queues' owners and severely limits the improvement from advanced victim selection policies (§2.1.3).

P4: Correctness under weak memory models (WMMs). Correctly implementing concurrent work-stealing queues on weak memory architectures, such as Arm servers for datacenters [46, 70], is very challenging because it requires additional memory barriers to prevent unwanted reordering.
Using redundant barriers can greatly reduce the performance of work stealing [79], while not including enough barriers can lead to errors, such as in the C11 [6] version of the popular unbounded Chase-Lev deque translated from formally verified ARMv7 assembly [90]. Even the popular Rust Tokio runtime required a fix to its implementation of work stealing [2].

Contribution. In response, we introduce Block-based Work Stealing (BWoS), a bounded queue design which provides a practical solution to these problems, drastically reducing the scheduling overhead of work stealing. Our solution is based on the following insights. First, we split each per-core queue into multiple blocks with independent metadata and arrange for the owner and the thieves to synchronize at the block level. Therefore, in the common case where operations remain within a block, we can elide synchronization operations and achieve almost single-threaded performance (§3.2). Similarly, since a queue owner and the thieves share only block-local metadata, they do not interfere when operating on different blocks (§2.1). We can arrange for that to happen frequently by allowing thieves to steal tasks from the middle of the queue. Second, we improve victim selection with a probabilistic policy, which approximates selecting the longest queue (§3.4) while avoiding the severe interference typical of the prior state of the art (§2.1), and into which we can integrate NUMA-awareness and batching. Finally, we ensure correctness under WMMs by verifying BWoS with the GenMC model checker [74, 75] and optimizing its choice of barriers with the VSX toolchain [92] (§5). As a result, BWoS offers huge performance improvements over the state of the art (§6). In microbenchmarks, BWoS achieves up to 8-11x higher throughput than other algorithms.
In representative real-world macrobenchmarks, BWoS improves the performance of industrial Java applications by up to 25% when applied to Java G1GC, increases throughput by 12.3% with 6.74% lower latency and 60.9% lower CPU utilization for the Rust Hyper HTTP server when applied to the Tokio runtime, and speeds up JSON processing by 25.8% on average across 9 different libraries when applied to the Go runtime. Returning to our motivating example (Fig. 1), applying BWoS to the Go runtime removes 29% of scheduling time, 55% of GC time, and 40% of CPU idle time, while increasing the CPU time ratio for useful work from 51% to 71%.

Figure 2: Motivating benchmarks: (a) Sequential performance of state-of-the-art work stealing algorithms. (b,c) Performance of the ABP queue owner depending on the frequency of (b) steal and (c) getsize operations. (d) Hyper HTTP server performance with different stealing batch sizes with the original Tokio work stealing queue: $S$ is the victim queue size and $S/2$ refers to the default steal-half policy [66]. (e,f) Interference between two threads for two sizes of cacheline sets.

2 Background

Task processing. Tasks vary a lot among benchmarks. Their processing time ranges from a few nanoseconds (e.g., Java G1GC [13]), to microseconds (e.g., RPC [55, 73, 108]), and even to seconds (e.g., HPC tasks [35]). In this paper, we mainly focus on nanosecond- and microsecond-scale tasks. Ignoring steals, tasks may be processed either:

- in FIFO (first-in-first-out) order, when minimizing processing latency is important (e.g., network connections), or
- in LIFO (last-in-first-out) order, when only the overall execution time matters, as is often the case with multithreaded fork-join programs [57].

We use the term queue to refer to instances of work stealing data structures without implying a specific task ordering.

Victim selection. There are multiple policies for selecting the victim queue to steal from.
Random [51] chooses one of the remaining queues uniformly at random: it has the least complexity but achieves poor load balancing. Size-based policies (e.g., best of two [88] and best of many [69]) scan the queues' sizes to improve the load balance by stealing from a large queue. The NUMA-aware policy [77] was proposed to optimize the remote communication cost, by preferring to steal from queues in the local cache domain. Batch-based policies (e.g., steal half [66], used in the Go and Rust Tokio runtimes) allow thieves to steal multiple tasks at once to reduce their interference with the owner. Later in this section (§2.1.3), we will quantify these overhead sources to guide our queue design.

2.1 Performance Overhead Breakdown

Next, we analyze the state-of-the-art work stealing algorithms to dissect their performance issues and motivate the design decisions of BWoS. Fig. 2 contains our experimental results on an x86 server [71].

2.1.1 Cost of Synchronization Operations

As steals may happen at any time, strong atomic primitives are required for local queue manipulation. To quantify their cost, we first measure the throughput of the state-of-the-art work stealing algorithms in a sequential setup where an owner puts and gets data from its local queue, without any tasks ever being stolen (§6.2). We compare the results with the theoretical performance upper bound: a single-threaded FIFO (FIFO_seq) or LIFO (LIFO_seq) queue implementation [72] without support for steals. Although there is no owner-thief interference, these synchronization operations pose a huge overhead (Fig. 2a): the throughput of these work stealing algorithms is less than 0.25x of the upper bound for FIFO-based queues (0.19x for LIFO-based).

2.1.2 Interference Cost with Thieves

To estimate how thieves affect the throughput of the owner, we consider an ABP queue benchmark with an owner and one thief, which steals tasks from the queue at various frequencies (one queue and two threads in total).
As the "ideal" baseline, we take the single-threaded performance of the ABP queue (i.e., with no steals). To account for any NUMA effects in this measurement, we use two configurations, running the thief in the same or in a different NUMA node. As we can see in Fig. 2b, the thief significantly degrades the owner's throughput: e.g., by stealing only 1% of the tasks, the owner's throughput drops by 17.8% when the thief is in the same NUMA node, and by 25.2% when it runs in a different NUMA node. This degradation happens because of cache interference between the owner and the thief on the shared metadata. We will further explain this in §2.2.

2.1.3 Overhead due to Victim Selection

There are two main sources of stealing overhead: first, suboptimal victim selection can lead to workload imbalance, triggering more stealing; second, the cost of the steal operations themselves.

**Size-based policies.** Policies like **best of two** [88] or **best of many** [69] read the global metadata of multiple queues (their length) to determine the victim. Somewhat surprisingly, as shown in Fig. 2c, these reads introduce significant overhead for the owner, especially in the cross-NUMA scenario: even with a `getsize` frequency of only 1%, the owner throughput drops by 34.4%. This is further amplified as `getsize` is called multiple times for a single steal. Therefore, for size-based policies, although reading more queues' sizes (e.g., **best of many** [69]) can achieve better load balance, it inevitably induces more slowdown on the owners of these queues (§6).

**NUMA-aware policies.** NUMA-aware policies [77] try to reduce the overhead of each steal by prioritizing stealing from queues in the same NUMA node. We observe that although such NUMA-aware policies can reduce the overhead of steals by 56% in the case of our ABP queue benchmark, they fail to achieve their full potential.
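The 56% figure, and the 90% ideal discussed next, follow from quick arithmetic over the per-steal cost breakdown in Table 1, assuming a steal's total cost is the sum of the thief's communication cost and the victim's interference penalty:

```python
# Per-steal costs from Table 1, in nanoseconds.
comm_remote, comm_local = 141, 15   # thief's communication cost
intf_remote, intf_local = 278, 170  # victim's interference penalty

remote_steal = comm_remote + intf_remote  # cross-NUMA steal: 419ns
local_steal = comm_local + intf_local     # same-node steal: 185ns

# NUMA-awareness with existing queues: the communication cost drops,
# but the interference penalty remains.
observed = (remote_steal - local_steal) / remote_steal  # about 56%

# If steals touched a different part of the queue (I_t = I_d = 0),
# only the communication cost would remain.
ideal = (comm_remote - comm_local) / comm_remote        # about 90%
```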
In Table 1, we break down the overhead of stealing in the ABP queue into its two main parts: the thief’s communication cost and the owner’s interference penalty. The former is 141ns when the thief and owner run on different NUMA nodes (measured by Intel MCA [21]), and reduces to 15ns (consistent with the L3 cache access latency [71]) when they are in the same NUMA node. The victim’s interference penalty is 170ns and 278ns for the cases of thief and victim running on the same (\(I_t\)) and different (\(I_d\)) NUMA domains respectively. NUMA-aware policies with existing queues can typically eliminate the first, communication overhead (indeed, 15 + 170 = 185ns versus 141 + 278 = 419ns per steal corresponds to the 56% reduction above), while leaving the second, interference overhead insufficiently optimized. With long enough queues, steals could ideally happen at a different part of the queue and cause no interference to the victim. This would reduce \(I_t\) and \(I_d\) to zero, resulting in a 90% improvement due to NUMA-awareness (rather than 56%).

**Batch-based policies.** Batch-based policies steal more tasks at once with the aim of reducing the frequency of steals. Indeed, in the Hyper HTTP server benchmark (see Fig. 2d), choosing larger batch sizes leads to a reduction in the number of steal operations. These larger steal operations, however, make the workload even less balanced (i.e., the percentage of stolen tasks increases), which results in additional overhead (e.g., task ping-pong), canceling out the overhead reduction due to fewer steals: the end-to-end throughput remains roughly the same.

### 2.2 Cache Contention

To better understand the effects of these types of cache contention, we conduct a simple microbenchmark with two threads: thread \(t_0\) continuously writes to a cacheline, while thread \(t_1\) either reads or writes to a cacheline with a specified frequency (Figs. 2e and 2f). The cachelines for \(t_0\) and \(t_1\) are independently and randomly chosen on each iteration out of cacheline sets of two sizes: 1 or 64.
In both cases, the cache contention on a single cacheline significantly harms the throughput of \(t_0\), regardless of the NUMA domain proximity. Introducing multiple cachelines (64 in this case) reduces the contention and significantly improves the throughput. Therefore, in the design of BWoS we separate the metadata.

## 3 Design

**BWoS** is based on a conceptually simple idea: the queue’s storage is split into a number of blocks, and the global mutable metadata shared between thieves and owner is replaced with per-block instances. The structure of the BWoS queue facilitates splitting the operations into **block advancement**, which works across blocks, and a **fast path**, which operates inside the block chosen by the block advancement (§3.1). Moving most of the synchronization from the fast path to the block advancement allows BWoS to fully reap the performance benefit indicated by our previous observation (§2.1.1), thus approaching the theoretical upper bound. `get` and `steal` always happen on different blocks. We carefully construct the algorithm such that thieves cannot obstruct the progress of `get`, while `get` can safely take over a block from thieves operating on it without waiting for them. To limit complexity, we do not prohibit `put` and `steal` in the same block¹, as they can synchronize with weak barriers without losing performance (§6.2). As metadata is also split per block, thieves and the owner are likely to operate on different blocks and thus update different metadata. As explained in §2.2, this reduces the interference between thieves and the owner. For FIFO-based BWoS, block-local metadata allows stealing from the middle of the queue, without enforcing the SPMC queue restriction of always stealing the oldest task, which is not required by the workloads.
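To make the metadata split concrete, a per-block layout might look as follows. This is a hedged sketch, not the paper's actual definition: the field names follow the terminology introduced in §4.1 (f_pos, b_pos, s_pos, s_cnt), while the 64-byte padding, the type choices, and the queue struct are our own illustrative assumptions.

```c
#include <stdatomic.h>
#include <stdint.h>

#define NE 64  /* entries per block (illustrative) */

/* Hypothetical per-block metadata split: each block owns its own
 * position variables, each padded to a separate cacheline, so a thief
 * updating s_pos/s_cnt in one block does not invalidate the cachelines
 * the owner touches in another block. */
struct block {
    _Alignas(64) _Atomic uint64_t b_pos;  /* producer (back) position  */
    _Alignas(64) _Atomic uint64_t f_pos;  /* consumer (front) position */
    _Alignas(64) _Atomic uint64_t s_pos;  /* thief stealing position   */
    _Alignas(64) _Atomic uint64_t s_cnt;  /* finished steals in block  */
    _Alignas(64) void *entries[NE];
};

struct bwos_queue {
    struct block *blocks;  /* ring of blocks forming the queue storage */
    uint64_t num_blocks;
    uint64_t owner_blk;    /* owner-exclusive block index (top/front/back) */
};
```

The alignment directives mirror the cache-padding idea from the microbenchmark above: owner and thieves mostly touch distinct cachelines.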
BWoS can benefit from NUMA-aware policies more than other queues, because the reduction in interference for the victim makes both constituents of the cross-NUMA-domain stealing overhead negligible (Table 1). Furthermore, unlike batch-based policies, stealing policies integrated with BWoS can focus on balancing the workload itself without worrying about the interference from a frequently called `steal`.

¹Nevertheless, it is guaranteed automatically in LIFO BWoS.

### 3.1 Overview

To better understand the block-based approach, let’s consider the put, get, and steal operations of the BWoS queue (Fig. 3). For each of these operations, the first step is to select a block to work on (lines 3, 14, and 27). The owner uses the top block for put and get in the LIFO BWoS, and gets from the front block and puts to the back block in the FIFO BWoS. In both cases, the top, back, and front block pointers are owner-exclusive metadata which is unavailable to the thieves. For steal, the choice of the block is more complicated, and we explain it in a later section (§3.4). After selecting the block, operations execute the fast path (lines 4, 15, and 28), which may return one of three results: (1) The fast path succeeds, returning the value for get and steal. (2) The fast path fails because there is no data to consume (lines 18 and 31) or because a thief detects a conflict with other thieves or with the owner due to a takeover (line 33). In case of a conflict, the fast path is retried (line 34); otherwise a null value is returned. (3) The margin (beginning or end) of the current block is reached (lines 7, 20, and 35). In this case, the operation tries to move to the next block by performing the block advancement, and retries if it succeeds; otherwise it returns the empty or full queue status. Splitting the global metadata into block-level instances enables splitting the operations into the fast path and block advancement, which increases performance by keeping the fast path extremely lightweight.
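The control flow just described (select a block, run the fast path, fall back to block advancement at a margin) can be sketched with a toy, single-threaded FIFO model. All names and the simplified advancement logic are illustrative; every synchronization concern of the real algorithm (takeover, grant, round control, atomics) is deliberately omitted.

```c
#include <stdbool.h>

#define NBLK 4  /* blocks per queue (illustrative) */
#define NE   8  /* entries per block (illustrative) */

/* Toy model of the put/get shape of Fig. 3: a cheap in-block fast path
 * plus a rarer block-advancement step when a block margin is reached. */
struct blk { int vals[NE]; int b_pos, f_pos; };
struct q   { struct blk b[NBLK]; int back, front; };  /* owner-only ptrs */

static bool q_put(struct q *q, int v) {
    for (;;) {
        struct blk *blk = &q->b[q->back];
        if (blk->b_pos < NE) {                 /* fast path */
            blk->vals[blk->b_pos++] = v;
            return true;
        }
        int nxt = (q->back + 1) % NBLK;        /* block advancement */
        if (nxt == q->front)
            return false;                      /* queue full */
        q->b[nxt].b_pos = q->b[nxt].f_pos = 0; /* reset block for reuse */
        q->back = nxt;                         /* retry on new block */
    }
}

static bool q_get(struct q *q, int *out) {
    for (;;) {
        struct blk *blk = &q->b[q->front];
        if (blk->f_pos < blk->b_pos) {         /* fast path */
            *out = blk->vals[blk->f_pos++];
            return true;
        }
        if (blk->f_pos < NE || q->front == q->back)
            return false;                      /* queue empty */
        q->front = (q->front + 1) % NBLK;      /* block advancement */
    }
}
```

In the real algorithm, the advancement step is where round control, takeover, and grant happen; here it degenerates to index arithmetic, which is exactly why the fast path can stay so cheap.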
However, the lack of global metadata shared between owner and thieves raises additional challenges, which are mostly delegated to the block advancement: it is now responsible for maintaining complex block-level invariants. We introduce the following invariants: (1) put never overwrites unconsumed data; (2) steal and get never read the same data; (3) steal and get never read data that has been read before; (4) a steal in progress cannot prevent get from reading from a thieves’ block. Before explaining the fast path and block advancement implementations, we introduce two key concepts we rely on to ensure that the abovementioned invariants hold: block-level synchronization (§3.2) and round control (§3.3).

### 3.2 Block-level Synchronization

Block-level synchronization is the key responsibility of the block advancement and ensures that thieves never steal from the block currently used for get operations. Each block is owned either by the owner or by the thieves. For example, in Fig. 4, blocks with lighter and darker colors belong to the owner and thieves respectively. The owner grants a block to the thieves, or takes a block back from them, with block advancement. More specifically, for LIFO BWoS, get advances to the preceding block (3 to 2) and takes it over from thieves; put grants the current one and advances to the following block (3 to 4). For FIFO BWoS, get (resp. put) advances and takes over (resp. grants) the following block. The grant and takeover procedures are based on the thief index, an entry in the block metadata that indicates the stealing location inside the block. Takeover sets this index to the block margin with an atomic exchange, and uses the old value as the threshold between the owner and the ongoing thieves in this block. This ensures that the owner is not blocked by thieves when it takes over the block. Moreover, concurrent owner and thieves never read the same data because the threshold between them is set atomically.
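The owner's side of these two procedures can be sketched as follows. This is a hedged model in which the thief index s_pos is a bare integer; the real algorithm additionally packs a round number into the same word (§4.2.2), and thieves have their own CAS-based protocol against this index.

```c
#include <stdatomic.h>

#define NE 4  /* block margin, matching the 4-entry blocks of Fig. 7 */

struct blk_meta { _Atomic unsigned s_pos; /* thief index */ };

/* Takeover: atomically move the thief index to the block margin so that
 * no new steal can start in this block, and learn the old value, which
 * becomes the threshold between ongoing thieves (below it) and the
 * owner (from it on). The owner never waits for ongoing thieves. */
static unsigned takeover(struct blk_meta *m) {
    return atomic_exchange(&m->s_pos, NE);
}

/* Grant: make the block available to thieves again by publishing the
 * current consumer threshold as the thief index. */
static void grant(struct blk_meta *m, unsigned f_pos) {
    atomic_store(&m->s_pos, f_pos);
}
```

Because the threshold is established by a single atomic exchange, owner and thieves partition the block's entries without ever agreeing on anything else.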
Similarly, the grant procedure transfers the block to thieves by writing the threshold to the thief index. We will introduce the details in §4.2.1.

### 3.3 Round Control

Each block also records the round numbers of the last data access. When advancing to the next block, the current block’s round is copied over to it, except in the case of a wrap-around, where the round number is increased by 1 (Fig. 5). In fact, there are producer, consumer, and thief round numbers in each block. When the producer tries to write round \( r \)'s data into a block, the consumer and thieves must have finished reading all data with round \( r - 1 \) from that block, so that the producer never overwrites any unread data. Similarly, when the consumer or a thief tries to read round \( r \)'s data from a block, the producer's round at that block must already be \( r \); this prevents reading any data twice, or reading data that was never written. Details can be found in §4.2.2.

### 3.4 Probabilistic Stealing

As discussed in §2.1.3, size-based policies can achieve better load balance at the cost of degrading the performance of the owner of each queue. Calculating the size is even harder in our setting because the appropriate metadata is distributed across all blocks. However, BWoS brings an opportunity for a new size-based, probabilistic stealing policy, which can provide strong load balance without adversely affecting the owner's performance. We ensure strong load balancing by making the probability of choosing a queue as a victim proportional to its size. We implement this approach with a two-phase algorithm: the \( P_{select} \) phase first selects a potential victim randomly, and then the \( P_{accept} \) phase decides whether to steal from it with probability \( S/C \), where \( S \) is the selected queue's size and \( C \) is its capacity; otherwise (with probability \( 1 - S/C \)) it returns to \( P_{select} \) for a new iteration.
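The two-phase loop can be sketched as follows. Here `est_fill` stands in for the sampling-based estimator of \( S/C \) described next (it should return nonzero with probability close to \( S/C \)); the xorshift generator, the bounded retry count, and all names are our own illustrative choices, not the paper's code.

```c
/* Illustrative sketch of the two-phase probabilistic stealing policy. */
struct pool { int nqueues; };

static unsigned xorshift(unsigned *s) {      /* tiny PRNG for P_select */
    *s ^= *s << 13; *s ^= *s >> 17; *s ^= *s << 5; return *s;
}

/* P_select picks a queue uniformly at random; P_accept keeps it with
 * probability close to S/C (delegated to est_fill), else retries. */
static int pick_victim(struct pool *p,
                       int (*est_fill)(struct pool *, int),
                       unsigned *seed, int max_tries) {
    for (int i = 0; i < max_tries; i++) {
        int q = (int)(xorshift(seed) % (unsigned)p->nqueues);
        if (est_fill(p, q))
            return q;
    }
    return -1;  /* every sampled queue looked empty: give up */
}

/* Stub estimators, for illustration only. */
static int est_always(struct pool *p, int q) { (void)p; (void)q; return 1; }
static int est_never(struct pool *p, int q)  { (void)p; (void)q; return 0; }
```

With a uniform selector and an acceptance probability of \( S/C \), fuller queues survive the accept phase proportionally more often, which is exactly the size-proportional victim distribution claimed below.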
Therefore, given a pool of \( N \) queues, each with the same capacity, and a selector in \( P_{select} \) that selects each queue with equal probability, \( P_{accept} \) guarantees that the probability of a thief stealing from a queue is proportional to its size. To minimize the impact on the owner’s performance, instead of measuring \( S \), we estimate \( S/C \) directly by sampling. The thief chooses a random block from all blocks of the queue and checks if it has data available for stealing; the probability of this check returning true is close to \( S/C \). As the thief reads only one block’s metadata, its interference with the owner is minimal (cf. §2.2). For FIFO BWoS, the above approach can achieve zero overhead for steals: after the estimation returns true, we can steal from the block used for estimation directly, as block-local metadata enables thieves to steal from any block which has been granted to them. We call this instance of applying our probabilistic stealing policy to FIFO BWoS a randomized stealing procedure. For LIFO BWoS, stealing still happens from the bottom block (Fig. 4). Thieves advance to the following block when they finish the current one. For FIFO BWoS, thieves do not advance blocks when randomized stealing is enabled, and instead fall back to the stealing policy for selection of the new queue and block (§3.4). In this case, the operation to advance to the next block on stealing (Fig. 3 line 36) becomes a no-op. Moreover, we can further combine the probabilistic stealing policy with a variety of selectors for the \( P_{select} \) phase (e.g., from the NUMA-aware policy), to benefit from both better workload balance and reduced stealing cost. Results show that the hybrid probabilistic NUMA-aware policy brings the best performance to BWoS (§6).

## 4 Implementation

### 4.1 Single-Block Operations (Fast Path)

Let’s consider how the put, get, and steal operations inside the block are implemented (lines 4, 15, and 28 in Fig. 3).
Because get and steal always happen on different blocks, we only need to consider two cases of multiple operations in a block: producer-consumer and producer-thieves (Fig. 6). To support these cases, each block has 4 metadata variables: entries which are ready for the consumer in the block are between the front position (\( f\_pos \)) and the back position (\( b\_pos \)), while thieves use the stealing position (\( s\_pos \)) and a counter of finished steals in the block (\( s\_cnt \)) for coordinating among themselves and with the producer, respectively. To produce a value, put first checks whether it has reached the block margin \( NE \) (number of entries); if not, it writes the data at the producer position (\( b\_pos \)) and advances it to point to the next entry. To consume a value, there are two get operations, \( get^f \) and \( get^b \), which correspond to the FIFO and LIFO BWoS respectively. get checks whether the block margin has been reached, or whether the block has run out of data (\( f\_pos \) has reached \( b\_pos \)); if not, it reads the data and updates the consumer position variable in the block metadata. The two variants of get differ in which position variables and boundaries they use: \( get^f \) uses \( f\_pos \) as the consumer position variable, \( NE \) as the block margin, and \( b\_pos \) as the boundary of valid data; \( get^b \) uses \( b\_pos \), the zero position of the block, and \( f\_pos \) for the same purposes, respectively. Thieves follow a similar pattern: steal first checks whether it has reached the block margin, or whether the block has run out of data (\( s\_pos \) has reached \( b\_pos \)). Then, it updates \( s\_pos \) using an atomic compare-and-swap (CAS) to point to the next entry, reads the data, and finally updates \( s\_cnt \) with an atomic increment. If the CAS fails, steal returns conflict. (CAS is used because multiple thieves can operate in the same block.) All of these operations return block_done when they reach a block margin.
Otherwise, if the block runs out of data, get and steal return empty.

### 4.2 Block Advancement

In case a block margin is reached, put, get, and steal move to the next block: they first check whether advancing is permitted by the round control, and if so, they call the takeover (by get) or grant (by put) procedure and reset block-level metadata.

#### 4.2.1 Takeover and Grant Procedures

We explain the takeover and grant procedures using a queue with 4-entry blocks as an example (Fig. 7).

**LIFO.** Let us assume that 6 elements (a-f) were put into the queue. Thus, the owner is in the block b1: b_pos in b0 and b1 becomes 4 and 2 respectively, while f_pos and s_pos remain at the initial value (0) (state ①). Then, two actions happen concurrently: two thieves try to steal entries, updating s_pos in b0 to 2, and start to copy out the data (steal on Fig. 7), while the owner gets 3 values, consuming f and e (state ②), and advancing to b0, thus starting the takeover. To perform the takeover, the owner atomically exchanges s_pos with the block margin (4), and then sets f_pos to the previous s_pos value (2) (state ③). After the takeover, the owner gets d and puts g. Meanwhile, one ongoing steal completes (steal’ on Fig. 7), increasing s_cnt by 1 (state ④). It does not matter which of the two completes first. When the owner puts new items h and i, it grants b0 to thieves and advances to b1. To perform the grant, it sets s_pos to the f_pos value (2), indicating to thieves that the block is available (state ⑤). After thieves steal all entries in b0, s_cnt reaches the block margin (state ⑥). Thus, b0 can be reused in the next round.

**FIFO.** First, the producer puts 7 elements (a-g) into the queue. The producer and the consumer are in b1 and b0 respectively, and thieves can steal from b1 (state ①). Then, the consumer gets all elements in b0, and advances to b1 (state ②).
This requires taking over b1 from thieves: for this purpose, it updates s_pos and f_pos in the same way as the LIFO BWoS, but also adds the difference between the new f_pos (2) and the block margin (i.e., the length of the block) to s_cnt (state ③). This way, when all thieves finish their operation in b1, its s_cnt will be equal to the block margin. After that, the producer puts a new item h, and advances to b2, granting it to thieves (state ④). Finally, once both thieves and the consumer have read all entries from b1, its f_pos and s_cnt are equal to the block margin (state ⑤). The producer uses this condition to check if the block can be reused for producing new values into it.

#### 4.2.2 Round Control and Reset Procedure

To implement round control (§3.3), the position variables in the block metadata (f_pos, b_pos, s_pos, s_cnt) contain both the index or counter (idx field) as described in §4.2.1 and the round number (rnd field). We fit both components into a 64-bit variable that can be updated atomically. Consider, for example, the put operation of FIFO BWoS (Fig. 8). In put, when the producer idx reaches the block margin NE of the block blk (step ①), the new round x of the next block nblk is calculated as described in §3.3 (step ②). When advancing to the block nblk with the producer round x, the producer checks that the consumer and the thieves have finished reading all data from the previous round in nblk, by checking that their idx fields are equal to NE and their rnd fields are x-1 (step ③). When the check succeeds, a new value with index 0 and round x is written into the producer position variable (step ④), thus resetting the block for producing in the next round. Otherwise, a “queue full” condition is reported. The get operation of the FIFO BWoS is similar. To decide whether get can use the next block, it checks whether that block’s producer round is equal to the new consumer round (step ③), and resets the round and index fields if the check succeeds.
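The packed position variables can be sketched as follows. The 32/32 bit split and the helper names are assumptions; the paper only requires that index and round together fit into one atomically updatable 64-bit word.

```c
#include <stdint.h>

#define NE 64  /* block margin (entries per block), illustrative */

/* idx in the low 32 bits, rnd in the high 32 bits (assumed split). */
static inline uint64_t pack(uint32_t rnd, uint32_t idx) {
    return ((uint64_t)rnd << 32) | idx;
}
static inline uint32_t rnd_of(uint64_t v) { return (uint32_t)(v >> 32); }
static inline uint32_t idx_of(uint64_t v) { return (uint32_t)v; }

/* Producer-side advancement check in the spirit of Fig. 8 (step 3):
 * the producer may enter the next block with round x only if consumer
 * and thieves have fully finished round x-1 there. */
static inline int can_produce(uint64_t f_pos, uint64_t s_cnt, uint32_t x) {
    return idx_of(f_pos) == NE && rnd_of(f_pos) == x - 1u
        && idx_of(s_cnt) == NE && rnd_of(s_cnt) == x - 1u;
}
```

Because index and round live in one word, a single atomic store of `pack(x, 0)` both resets the block and publishes the new round, with no window in which another thread could observe a fresh index with a stale round.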
Each operation resets only a subset of the position variables (b_pos, f_pos, s_pos, s_cnt). We carefully select which variables each operation resets, so that the takeover and grant procedures by the owner have no write conflict with the reset done by thieves.

## 5 Verification and Optimization

The complexity of the BWoS algorithm necessitates the use of formal verification techniques to ensure that there are no lurking design or implementation bugs, and to optimize its use of memory barriers on weak memory models (WMMs). One can easily imagine several tricky cases with block advancements. For example, for LIFO BWoS, when the owner calls put and get and advances to the next block, it may easily trigger ABA [67] bugs during the round control and takeover. Unlike with simpler algorithms such as ABP [79], it is virtually impossible to justify the correctness of an optimal memory barrier placement by inspection. Luckily, model checking tools [62, 74, 84, 92] are widely used to check the correctness of concurrent algorithms and to optimize memory barriers under WMMs automatically, improving both performance and developer confidence. For example, the Tokio library uses the model checker Loom [27, 91], which has helped them find more than 10 bugs [28].

### 5.1 Verification Client

A model checker takes as input a small verification client program that invokes queue operations. It verifies that all possible executions of the input program satisfy some generic correctness properties, such as memory safety and termination [45], as well as any algorithm-specific properties that are included in the verification client as assertions. Whenever verification fails, the model checker returns a concrete erroneous execution as a counterexample. To be able to generalize the verification result beyond the specific client program verified, the client program must trigger all possible contending scenarios and cover all desired properties.
Because of the symmetry of BWoS (each owner operates on its own queue and steals from others), it suffices to verify the use of one queue owned by one thread and contended by several thief threads.

**Verified properties.** We have verified the following properties with the GenMC model checker [74, 75]:

- Memory safety: the program does not access uninitialized, unallocated, or deallocated memory.
- Data race freedom: there are no data races on variables that are marked non-atomic.
- Consistency: each element written by the producer is read only once, by either the consumer or a thief. No data corruption or loss occurs.
- Loop termination: every unbounded spinloop and bounded fail-retry loop in the program eventually terminates, even under weak memory models.

### 5.2 Results

We have optimized and verified the C code of LIFO and FIFO BWoS with the VSync framework and the GenMC model checker. We have also verified the ABP queue using our verification client as a baseline. The statistics are shown in Table 2, broken down by memory barrier type: sequentially consistent (SEQ), acquire (ACQ), release (REL), and relaxed (RLX, i.e., plain memory accesses). For BWoS, barrier optimization and verification finished in about an hour on a 6-core workstation [59], with over 1 million execution explorations. For ABP, the checking finishes in 16 minutes. More executions are explored for ABP since thieves and owner synchronize for every operation, which creates more interleaving cases.

**Verification confidence.** By adding one thread and discovering that no further barriers were required, we conclude that further increasing the thread count is unlikely to reveal a missing barrier. Hence, we can avoid the state space explosion that happens with larger thread counts. On the other hand, discovering that an existing barrier had to be stronger would have forced us to review the algorithm in general.

**Experience.** Model checking proved itself to be invaluable during BWoS's development.
For example, an early version of LIFO BWoS had a bug where thieves would reset the s_pos variable when advancing to their following block (blk). In the case when the owner is advancing to its preceding block, which also happens to be blk, it would update s_pos in the takeover procedure, which conflicts with the thieves’ reset procedure, resulting in data loss. This data loss was detected by GenMC with the verification client assertion (lines 30-31). We have fixed it by delegating the thieves’ s_pos reset procedure to the owner, thus removing this conflict.

**Optimization.** For BWoS, most concurrent accesses are converted to relaxed barriers, with the few remaining cases being release or acquire barriers. For the owner’s fast path, which determines the performance, we have only one release barrier in the FIFO BWoS. In contrast, the highly optimized ABP [79] contains many barriers. In particular, owner operations contain 2 sequentially consistent, 1 acquire, and 1 release barriers, which significantly degrade its performance. We note that these optimization results are optimal: relaxing any of these barriers produces a counterexample. To further increase our confidence in the verification result, we added another thief thread stealing one entry, and checked the optimized BWoS with GenMC. BWoS passes the check in 3 days with around 200 million execution explorations.

**Barrier analysis.** LIFO BWoS does not contain any barriers in the fast path because the owner and the thieves do not synchronize within the same block. An acquire-release pair related to s_pos in the owner’s slow path and the thieves’ fast path ensures the correctness of the takeover procedure. Another acquire-release pair, related to s_cnt, ensures that the owner does not overwrite ongoing reads when it catches up with a thieves’ block (the wraparound case).
For FIFO BWoS, besides the above barriers, an additional acquire-release pair in the fast path is required, since the producer and thieves need to synchronize within a block.

## 6 Evaluation

**Experimental setup.** We perform all experiments on two x86 machines connected via a 10Gbps Ethernet link, each with 88 hyperthreads (x86) [71], and one Arm machine with 96 cores (arm) [70]. The operating system is Ubuntu 20.04.4 LTS with Linux kernel version 5.7.0.

### 6.1 Block Size and Memory Overhead

In comparison with other queues, BWoS has extra parameters that the user needs to choose when initializing a data structure, namely the block size and the number of blocks. In our experience with both micro- and macro-benchmarks, the system’s throughput remains mostly constant regardless of the block size or the number of blocks, as long as they are above certain minimal values: 8 or more blocks in the queue and 64 or more elements in the block, both for our x86 and arm machines. The reason for this insensitivity to block size changes is twofold. First, since a single thread is responsible for advancing the blocks of its own queue, the block size does not introduce any contention-related overhead. Larger block sizes cause the queue owner to advance the block less often, but after a certain block size, the overhead of advancing the block becomes negligible. Second, since BWoS forbids the owner and thieves from consuming items in the same block via block-level synchronization, their contention on a queue is largely independent of the number of blocks. These insights guide the block size selection for our benchmarks: we set the number of blocks to 8 and calculate the block size based on the queue capacity. Therefore, selecting an appropriate block size is straightforward. Further fine-tuning of these parameters may be beneficial for extreme scenarios where memory-size constraints are present or an overly large block size becomes detrimental to stealing.
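Under the stated constants (8 blocks, at least 64 entries per block), the parameter choice can be sketched as follows; the ceiling-division rounding rule is our own assumption, the paper only fixes the two constants.

```c
#include <stddef.h>

#define NUM_BLOCKS   8    /* fixed block count used in our benchmarks */
#define MIN_BLOCK_SZ 64   /* minimal block size observed to suffice  */

/* Derive the per-block size from a requested total queue capacity,
 * clamped to the minimum; rounding up may slightly grow the capacity. */
static size_t block_size_for(size_t capacity) {
    size_t bs = (capacity + NUM_BLOCKS - 1) / NUM_BLOCKS;  /* ceil div */
    return bs < MIN_BLOCK_SZ ? MIN_BLOCK_SZ : bs;
}
```

For example, a requested capacity of 4096 entries yields blocks of 512 entries, comfortably above the 64-entry threshold.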
BWoS contains three pointers for each queue, and four atomic variables, two pointers, and one boolean variable for each block as its metadata. The actual memory usage also includes cache padding added to prevent false sharing. The memory overhead from this metadata is static and thus negligible for most use-cases.

### 6.2 Microbenchmarks

To verify our claims, we have designed a microbenchmark which supports both LIFO and FIFO work stealing, and compared BWoS with the state-of-the-art algorithms: an off-the-shelf ABP [48] implementation from Taskflow v3.4.0 [35] with barrier optimization [79] (abp), the block-based bounded queue [106] (bbq), work stealing queues from Tokio v1.17.0 [38] (tokioq), and Go’s runtime v1.18 [36] (goroutineq).

#### 6.2.1 Queue without Stealing

**Overall performance.** Figures 10 and 11, for a stolen percentage equal to 0, show the performance of the queue without stealing. BWoS outperforms other algorithms by a significant margin. For example, LIFO BWoS (bwos_opt) has 4.55x higher throughput than ABP (abp_opt) on x86, and FIFO BWoS written in C/C++, Rust, Go, and Kotlin outperforms bbq in C, eigenq in C++, tokioq in Rust, goroutineq in Go, and coroutineq in Kotlin by 8.9x, 10.15x, 3.55x, 1.61x, and 1.82x respectively.

**Impact of the memory barrier optimization.** abp and LIFO BWoS get 1.65x and 5.39x speedups on x86, and 2.03x and 3.38x speedups on arm, respectively, due to the memory barrier optimization. We observe similar results for FIFO work stealing algorithms. The much greater speedup of BWoS compared to ABP is possible in particular due to the separation of fast path and block advancement, where most of the barriers in the fast path become relaxed.

**Effectiveness of the block-level synchronization.** Results show that on x86, LIFO and FIFO BWoS are only 10.7% and 5.4% slower than ideal, respectively. On arm the results are similar. Thus, block-level synchronization allows BWoS to approach the theoretical upper bound by removing the consumer-thief synchronization from the fast path.
#### 6.2.2 Queue with Stealing

**Overall performance.** As the stolen percentage increases, BWoS continues to outperform other work-stealing algorithms. For example, with a 10% stolen percentage, LIFO BWoS outperforms abp by 12.59x, while FIFO BWoS outperforms bbq, eigenq, tokioq, goroutineq, and coroutineq by 11.2x, 30.1x, 9.41x, 2.78x, and 1.64x respectively.

**Effectiveness of the block-based approach.** Unlike other algorithms, BWoS suffers only a minor performance drop as the stolen percentage increases. For example, for a 20% stolen percentage, the throughput of LIFO and FIFO BWoS drops only by 0.53% and 9.35%, while for abp_opt, tokioq, and goroutineq it degrades by 71.9%, 80.2%, and 59.3% respectively. Note that the BBQ concurrent FIFO queue [106], which is also a block-based design, does not reach performance comparable to BWoS, stressing the importance of our design decisions for work stealing workloads.

#### 6.2.3 Pool with Different Stealing Policies

**Stealing policies.** We perform this experiment with 6 stealing policies, namely the random choice policy (rand), a policy that chooses the victim based on a static configuration (seq), a policy that chooses the last selected one as the victim [104] (last), best of two (best_of_two), best of many (best_of_many), and the NUMA-aware policy (numa). For best_of_many we choose best of half (i.e., best of four).

**Overall performance.** In this experiment, we compare BWoS only with the second-best algorithm from the previous experiments: abp and tokioq for LIFO and FIFO work stealing respectively. Fig. 12 shows that BWoS performs consistently better than the other algorithms. When the balancing factor is 0%, BWoS outperforms abp by 4.69x and tokioq by 2.68x. As the balancing factor increases, the throughput of the BWoS variants is 7.90x higher than that of abp and 6.45x higher than that of tokioq.

²The thief thread is located in the same L3 cache group as the owner; the results are similar when putting the thief thread elsewhere.
**Impact of the NUMA-aware policy.** LIFO and FIFO BWoS with the numa policy outperform BWoS with other policies by up to 2.21x and 1.73x respectively. For the other work stealing algorithms, best_of_two brings the best performance. Thus, BWoS benefits from the numa policy while other algorithms do not. On the other hand, in many cases best_of_many brings the worst performance, showing that interference with the owner can outweigh its improvements to the load balance.

**Effectiveness of the probabilistic stealing.** BWoS can additionally benefit from probabilistic stealing. When the balancing factor is 100%, numa with probabilistic stealing (bwos+numa+prob) brings 1.34x and 1.53x performance improvements on average to LIFO and FIFO BWoS.

### 6.3 Macrobenchmarks

#### 6.3.1 Java G1GC

We replace the task queue [24] in Java 19 HotSpot [37] with LIFO BWoS, and run the Renaissance benchmark suite v0.14.0 [33], which consists of 25 modern, real-world, concurrent benchmarks [95] designed for testing and optimizing garbage collectors. Two database benchmarks are omitted since they don’t support JDK 19. The JVM enables the -XX:+DisableExplicitGC [30,68] and -XX:+UseG1GC flags when running the benchmark. All other parameters (e.g., number of GC threads, VM memory limit) are defaults. We run 10 iterations for each benchmark with the modified and the original JVM, and measure the end-to-end program run time via the Renaissance testing framework. Figure 13 shows the speedup of all 23 benchmarks on x86. When BWoS is enabled, 17 of them get a performance improvement. The average speedup over all benchmarks is 3.55% and the maximum speedup is 25.3%. The applications that benefit more from concurrent GC also get greater speedups from BWoS. Results on arm are similar: the average speedup is 5.20%, 18 benchmarks are improved, and the maximum speedup is 17.2%. On the other hand, several Renaissance benchmarks did not get any performance improvement from using BWoS.
We have investigated this issue by running the JVM with the flags -Xlog:gc+cpu and -Xlog:gc+heap+exit to collect GC-related statistics. These experiments have shown that applications that trigger GC often demonstrate improvement from BWoS, while applications that don’t trigger GC or trigger it only rarely (e.g., at JVM exit) see no speedup. For the benchmarks which never or seldom trigger the GC, the slowdown is most likely due to the longer queue initialization.

#### 6.3.2 Rust Tokio Runtime

We replace the run queue [39] in Tokio v1.17.0 [38] with FIFO BWoS, and run the Hyper HTTP server v0.14.18 [20] and the Tonic gRPC server v0.6.2 [40] with the modified runtime. The Tokio runtime (like the Go runtime) provides a batch stealing interface. Based on observations from benchmarks similar to Fig. 2d, we configured the thief of BWoS to steal all available entries from its block at once. Benchmarks are performed on two x86 machines, one running the server, the other running the HTTP benchmarking tool wrk v4.2.0 [43] or the gRPC benchmarking and load testing tool ghz v0.017 [14]. All parameters of Hyper and Tonic are defaults. Each benchmark runs 100 seconds and has 10 iterations. The latency and throughput are measured by wrk or ghz, while the CPU utilization of the server is collected through the Python psutil library [32]. wrk and ghz run the echo workload and the SayHello protocol respectively, and are configured to utilize all hyperthreads of their machine. Figure 14 shows the throughput-latency and throughput-CPU utilization results of Hyper with different connection numbers (100, 200, 500, 1k, 2k, 5k, and 10k).

Figure 16: Request throughput, average latency, and task stolen percentage comparison results of 5 Rust web frameworks with BWoS (normalized to the original algorithm results), with the rust-web-benchmarks workload on x86.
Before the system is overloaded, BWoS provides 1.14 × 10^6 op/s throughput while dropping 60.4% CPU usage with similar latency, whereas the original algorithm provides only 9.44 × 10^5 op/s. With 1k connections, BWoS increases throughput by 12.3% with 6.74% lower latency and 60.9% lower CPU utilization. Figure 15 shows the throughput and latency results of Tonic. Using BWoS increases throughput by 32.9%, with 32.8% lower average latency and 36.6% lower P95 latency.

To demonstrate the generality of BWoS when applied to web frameworks, we also benchmark another 5 popular Rust web frameworks [4, 31, 34, 41, 42] that use the Tokio runtime with the rust-web-benchmarks [5] workload on x86 (Fig. 16). Results show that BWoS increases throughput by 82.7% while dropping 45.1% of average latency. In addition, the task stolen percentage drops from 69.0% to 49.2%. We have made our implementation for the Tokio runtime available to the open-source community [3].

#### 6.3.3 Go Runtime

We replace the runqueue [17] in the Go programming language [36] v1.18.0 runtime with BWoS and benchmark 9 JSON libraries [1, 9–12, 15, 18, 19, 25]. The benchmark suite [16] comes from the go-json library and runs 3 iterations with default parameters. We record the latency of each operation (e.g., encoding/decoding small/medium/large JSON objects) reported by the benchmark suite, and calculate the speedup. As shown in Fig. 17, when BWoS is enabled, operations get a 25.8% average performance improvement on x86. arm produces similar results with a 28.2% speedup on average. In general, encoding operations have better speedup compared to decoding operations. We observe no improvement for encoding booleans and integers.

### 7 Related Work

**Block-based queues.** Wang et al. proposed a block-based bounded queue (BBQ) [106] that splits the buffer into multiple blocks, thus reducing producer-consumer interference.
BWoS differs from BBQ in the following ways: (1) Although BBQ also applies metadata separation, the producer-consumer interference it reduces is not an issue for work stealing, as producer and consumer always execute on the same core. By introducing block-level synchronization, the steal-from-middle property, and randomized stealing, FIFO BWoS outperforms BBQ by a large margin (§6). (2) For the round control in BWoS, the new round of a block is determined only by the round of its adjacent block instead of relying on global metadata, as the version mechanism in BBQ does. This design simplifies the round updating and reduces its overhead.

**Owner-thief interference and synchronization costs.** Attiya et al. proved that work stealing in general requires strong synchronization between the owner and thieves [49]. BWoS overcomes this issue by delegating this synchronization to the block advancement, thus removing it from the fast path. Acar et al. used a sequential deque with message passing to remove the owner's barrier overhead [44]. However, this design relies on explicit owner-thief communication, so the steal operation cannot run to completion in parallel with the owner's operations. van Dijk et al. proposed a deque-based LIFO work-stealing algorithm which splits the deque into owner and thief parts, thus avoiding the owner's memory fences as long as its operations do not reach the queue split point [61]. However, the entries read by thieves cannot be reused until the whole deque is empty. Horie et al. proposed a similar idea, where each owner has a public queue that is accessible from other threads and a private queue that is only accessible by itself [68]. However, it requires more effort to deal with load balancing, e.g., introducing global statistics metadata which causes more cache misses for the owner. In contrast, BWoS reduces the interference using block-level synchronization, and probabilistic and randomized stealing. Morrison et al.
introduced work-stealing algorithms which rely on the bounded TSO microarchitectural model, which x86 and SPARC CPUs were shown to possess [89]. Michael et al. reduced the thief-owner synchronization by allowing them to read the same task [87], which requires reengineering tasks to be idempotent. BWoS exhibits correct and efficient execution on a wide range of CPU architectures without any additional requirements.

**Stealing policies.** Yang et al. gave a survey of scheduling parallel computations by work stealing [107]. Kumar et al. benchmarked and analyzed variations of stealing policies [76]. Mitzenmacher proposed giving the thief two choices for selecting the victim to achieve better load balancing [88]. Most of the analyzed policies are size-based, and thus aim at the same goal as our probabilistic stealing policy, namely better load balance. Hendler et al. allow thieves to steal half of the items in a given queue at once to reduce interference [66]. BWoS supports batched stealing, but the maximum amount of data that can be stolen atomically is a block. However, the stealing policy can be configured to steal more than one block. Kumar et al. proposed a NUMA-aware policy for work stealing [77]. This policy is fully orthogonal to BWoS and can be combined with its probabilistic stealing policy.

**Formally verified work stealing.** Lê et al. [79] manually verified and optimized the memory barriers of the Chase-Lev deque [53] on WMMs. Unlike the verification of BWoS, which relies on model checking, manual verification is a high-effort undertaking. In the context of concurrent queues, Meta's FollyQ was verified using an interactive theorem prover [105]. While this approach provides the highest level of confidence in the design, it works only with a sequentially consistent memory model and is also a high-effort endeavor. Recently, the GenMC authors have verified the ABP queue as part of the evaluation of their model checker [74].
The authors of BBQ have relied on VSync to simultaneously verify and optimize the barriers for weak memory models [106]. BWoS also uses VSync for this purpose, but instead of many hand-crafted tests, which exercise individual corner cases as in BBQ, we create one comprehensive client that covers several corner cases and their interactions at once. We further verify the optimization results by adding one more thief to the verification client and checking it with GenMC.

### 8 Conclusion

To conclude, we explore two of our learnings from this work.

**The benefit of the block-based design is manifold.** First, by replacing the global mutable metadata with block-level metadata, it is possible to eliminate the interference between the owner and the thieves that operate on different blocks. Second, by ensuring exclusive access to a block for the owner's get operation through block-level synchronization, it is possible to relax most of the barriers on the operation's fast path, increasing its performance up to the theoretical upper bound. Although unnecessary in our current algorithm, a third benefit is the verification modularity given by the block-based design, e.g., allowing the verification of blocks and their composition in separate steps. Finally, the block-based design opens possibilities for holistic optimization of the data structure's use, as we do with our probabilistic stealing policy.

BWoS can also be applied to GPU and hybrid CPU-GPU computations, as well as in HPC schedulers, where work stealing is common. We plan to explore this direction in the future. More generally, the BWoS design can be applied to other use cases where the data structure is mostly accessed by a single thread, and only rarely by multiple. In this case, the decisions demonstrated in BWoS can act as design and implementation guidelines.
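The core of the first learning, replacing global producer/consumer metadata with block-local counters plus a block-advancement step, can be shown with a minimal single-threaded Python sketch. The names (`Block`, `BlockQueue`) and the structure are ours for illustration only; the real BWoS additionally uses per-block atomic metadata, rounds, barriers, and stealing, all of which are omitted here.

```python
# Single-threaded sketch of the block-based idea: the buffer is split into
# fixed-size blocks, and put/get touch only block-local counters.  Advancing
# to the next block is the only point where producer and consumer interact.

class Block:
    def __init__(self, size):
        self.entries = [None] * size
        self.produced = 0   # block-local producer index
        self.consumed = 0   # block-local consumer index

class BlockQueue:
    def __init__(self, num_blocks=4, block_size=4):
        self.blocks = [Block(block_size) for _ in range(num_blocks)]
        self.pblk = 0   # index of the producer's current block
        self.cblk = 0   # index of the consumer's current block

    def put(self, item):
        blk = self.blocks[self.pblk]
        if blk.produced == len(blk.entries):        # block full: try to advance
            nxt = (self.pblk + 1) % len(self.blocks)
            if nxt == self.cblk:                    # consumer still on next block
                return False                        # queue is full
            self.pblk = nxt
            blk = self.blocks[nxt]
            blk.produced = blk.consumed = 0         # recycle the drained block
        blk.entries[blk.produced] = item
        blk.produced += 1
        return True

    def get(self):
        blk = self.blocks[self.cblk]
        while blk.consumed == blk.produced:         # block drained: try to advance
            if self.cblk == self.pblk:              # caught up with the producer
                return None                         # queue is empty
            self.cblk = (self.cblk + 1) % len(self.blocks)
            blk = self.blocks[self.cblk]
        item = blk.entries[blk.consumed]
        blk.consumed += 1
        return item
```

Even in this toy form, the fast paths of `put` and `get` read and write only the current block's counters; all cross-party coordination is confined to the comparatively rare block advancement.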
**Verified software can be faster than unverified software.** The more hardware details and tweaks are mirrored in the software, the more complex and opaque that piece of code becomes. The interaction of this complexity with concurrency and weak memory consistency is a major challenge. We believe that practical verification tools (i.e., tools applied to increase confidence in correctness) are a key enabler in the development of efficient, and inevitably complex, concurrent software such as BWoS.

**Future Work** There are several directions for further work. We plan to contribute BWoS to more open-source projects, e.g., OpenJDK [23, 29] and Golang, as well as investigate how to use BWoS in HPC runtimes. We also plan to better explore the performance trade-offs of BWoS: if the number of outstanding work items is smaller than the block size, BWoS can prevent stealing and thus limit the achieved parallelism. Furthermore, if the queue capacity has to be very small (due to space requirements), it may be necessary to reduce the block size and thus incur more block advancement, which leads to a performance drop. These situations would benefit from more exploration in the system design. In other cases, BWoS is expected to outperform existing state-of-the-art work-stealing algorithms due to its implementation of several performance-enhancing techniques.

**Acknowledgments** We thank our shepherd Phillip Gibbons and the anonymous reviewers for their insightful comments.

References

[27] Loom: Permutation testing for concurrent code. https://docs.rs/crate/loom/0.2.4.
[29] OpenJDK. https://openjdk.org/.
[43] wrk: Modern HTTP benchmarking tool. https://github.com/wg/wrk.
[102] Toss, J. Work stealing inside GPUs.
[103] Tzeng, S., Patney, A., and Owens, J. D. Task management for irregular-parallel workloads on the GPU.
Bimodal Modelling of Source Code and Natural Language

Abstract We consider the problem of building probabilistic models that jointly model short natural language utterances and source code snippets. The aim is to bring together recent work on statistical modelling of source code and work on bimodal models of images and natural language. The resulting models are useful for a variety of tasks that involve natural language and source code. We demonstrate their performance on two retrieval tasks: retrieving source code snippets given a natural language query, and retrieving natural language descriptions given a source code query (i.e., source code captioning). Experiments show there to be promise in this direction, and that modelling the structure of source code improves performance.

1. Introduction Software plays a central role in society, touching billions of lives on a daily basis. Writing and maintaining software, in the form of source code, is a core activity of software engineers, who aim to provide reliable and functional software.
However, writing and maintaining source code is a costly business; software developers need to constantly consult documentation and online resources, and they need to make sense of large existing code bases. Both of these can be challenging and slow down the development process. This need motivates our work here, where we seek to build joint models of natural language and snippets of source code. Advances in joint models of these two modalities could lead to tools that make writing and understanding software significantly faster and easier.

Our approach combines two lines of work. First, Hindle et al. (2012); Maddison & Tarlow (2014); Raychev et al. (2015); Tu et al. (2014) have built increasingly sophisticated statistical models of source code. Second, in machine learning and computer vision there have been rapid recent advances in bimodal models that map between images and natural language (Srivastava & Salakhutdinov, 2012; Kiros et al., 2013; Socher et al., 2014; Fang et al., 2014; Vinyals et al., 2014; Karpathy & Fei-Fei, 2014). Can we take inspiration from these recent works and build models that map from natural language to source code, and from source code to natural language? We explore this problem in this work, building a model that allows mapping in both directions. We leverage data that pairs short natural language utterances with source code snippets, like titles of questions along with source code found in answers from StackOverflow.com.

We make three main contributions: (1) combining ideas from the two lines of work mentioned above, showing that this direction has promise going forward; (2) describing modelling and learning challenges that arose in the process of building these models and giving solutions that allowed us to overcome them; and (3) showing how the performance of the models is affected by increasingly difficult instances of the problem.
Results on the retrieval task show that we can often discern the proper natural language description for previously unseen source code snippets from a reasonably sized set of candidate descriptions.

2. Preliminaries Let $L$ be a sequence of words in natural language and $C$ be a source code snippet. The first high-level choice is whether to formulate a model where code is conditional upon language (i.e., $P(C \mid L)$), language is conditional upon code (i.e., $P(L \mid C)$), or perhaps use an undirected model. While any of these would be possible, we decided to define the model in terms of $P(C \mid L)$ because it leads to the more natural way of encoding known structure of source code.

The next high-level decision is how to represent source code. In its most basic form, source code is simply a string. However, programming languages are designed such that two transformations can be done unambiguously: converting an unstructured string into a sequence of tokens (lexing), and converting the sequence of tokens into a parse tree (parsing). Moreover, these operations can be performed easily using modern compilers. We choose to take advantage of the parse tree structure and follow previous works like Maddison & Tarlow (2014) in formalizing the model over source code. From here on, when we refer to a source code snippet \( \mathcal{C} \), we mean its parse tree structure.

As an example, a source code snippet and its associated parse tree are shown in Figure 1 and Figure 2 respectively. The leaf nodes (gray) of the tree are tokens (i.e., strings that appeared in the original source code text). The internal nodes are specific to the programming language and correspond to expressions, statements or other high-level syntactic elements such as ForStatement and Expression. These node types are specified by the programming language designers, and parsers allow us to map from a raw string to these trees.
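The parse-tree view of a snippet, language-defined internal node types over leaf tokens, can be illustrated with Python's built-in `ast` module. This is only an analogy for the paper's Roslyn/C# toolchain: the snippet and the node-type names (`For`, `AugAssign`, ...) are Python's, not C#'s.

```python
# Illustration of the parse-tree view of source code: internal nodes carry
# language-defined node types, leaves correspond to tokens such as
# identifiers.  (Python's ast is used here in place of Roslyn.)
import ast

tree = ast.parse("for i in range(n):\n    total += i")

def show(node, depth=0):
    print("  " * depth + type(node).__name__)   # internal node type, e.g. For
    for child in ast.iter_child_nodes(node):
        show(child, depth + 1)

show(tree)   # prints the nested node types: Module, For, Name, Call, ...
```

A depth-first, left-to-right walk of this tree visits nodes in exactly the order used by the generation model described next.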
**Notation.** We let \( \mathcal{I} \) be the set of internal nodetypes (nonterminals) and \( \mathcal{K} \) be the set of tokens (terminals) that appear across all snippets in our dataset. A parse tree \( \mathcal{C} = (\mathcal{N}, \text{ch}, \text{val}) \) is a triple made up of nodes \( \mathcal{N} = \{1, \ldots, N\} \), a children function \( \text{ch} : \mathcal{N} \to \mathcal{N}^* \) that maps parent nodes to tuples of children nodes, and a value function \( \text{val} : \mathcal{N} \to \mathcal{I} \cup \mathcal{K} \) that maps nodes to an internal node type or a token. We use the convention that \( \text{ch}(n) = \emptyset \) means \( n \) is a leaf node. For convenience, we will also overload notation and define tuple operations \( \text{ch}((n_1, \ldots, n_K)) = (\text{ch}(n_1), \ldots, \text{ch}(n_K)) \) and \( \mathbf{v} = \text{val}((n_1, \ldots, n_K)) = (\text{val}(n_1), \ldots, \text{val}(n_K)) \). Nodes are indexed according to when they would be instantiated during a left-to-right depth first traversal of the tree. For example, if \( a \) is the root, \( \text{ch}(a) = (b, c) \), \( \text{ch}(b) = (d) \), and \( \text{ch}(c) = (e) \), then the nodes would be labeled as \( a = 1, b = 2, c = 3, d = 4, e = 5 \). Finally, we also define partial parse trees \( \mathcal{C}_{\leq n} \) to be equal to \( \mathcal{C} \) but with the nodes restricted to be \( \mathcal{N} = \{1, \ldots, n\} \), and for any node \( n' \) such that \( \text{ch}(n') \) contains a node with index \( > n \), \( \text{ch}(n') \) is set to \( \emptyset \). **Model Overview.** We model a parse tree with a directed model that sequentially generates a child tuple for node \( n \) conditional upon the natural language input \( \mathcal{L} \) and the partial tree \( \mathcal{C}_{\leq n} \): \[ P(\mathcal{C} | \mathcal{L}) = \prod_{n \in \mathcal{N}:\text{ch}(n) \neq \emptyset} P(\text{val}(\text{ch}(n)) | \mathcal{L}, \mathcal{C}_{\leq n}). 
\] In a bit more detail, we define \( \text{supp}(i) = \{v : v = \text{val}(\text{ch}(n)) \land \text{val}(n) = i \text{ for some } n \text{ in dataset} \} \) to be the set of all children tuples that appear as the children of a node of type \( i \) in our dataset (the “empirical support”). To define our models, we will construct scoring functions \( s_\theta(\mathbf{v}, \mathcal{L}, \mathcal{C}_{\leq n}) \) that can be converted to probabilities by exponentiating and normalizing over the support of the parent node type: \[ P(\mathbf{v} | \mathcal{L}, \mathcal{C}_{\leq n}) = \frac{\exp s_\theta(\mathbf{v}, \mathcal{L}, \mathcal{C}_{\leq n})}{\sum_{\mathbf{v}' \in \text{supp}(\text{val}(n))} \exp s_\theta(\mathbf{v}', \mathcal{L}, \mathcal{C}_{\leq n})}. \] where \( \theta \) are parameters to be learned from training data. This formulation gives quite a bit of flexibility over the range of models depending on how exactly the generation of the next children tuple depends on the natural language and previous partial tree. For example, if we were to define \( s_\theta(\mathbf{v}, \mathcal{L}, \mathcal{C}_{\leq n}) = \log \text{count}(\mathbf{v}, \text{val}(n)) \), where \( \text{count} \) is the number of times that \( \mathbf{v} \) has appeared as a child of a node of type \( \text{val}(n) \), then this is a probabilistic context free grammar (PCFG). Following previous work on modelling source code, we explore models with richer dependency structure. To sample from these models at test time, we can incrementally construct a tree by fixing the root node, sampling a children tuple conditional upon the initial partial tree (just the root node), then recursing to the left-most child node such that \( \text{val}(n) \in \mathcal{I} \), updating the partial tree, and repeating until children tuples have been chosen for all nonterminals. ### 3. Joint Code and Natural Language Models We now turn our attention to the modelling specifics. 
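Before introducing the learned scoring functions, the incremental generation procedure just described can be sketched with plain count-based (PCFG) scores. The toy grammar, symbol names, and counts below are our own illustration, not the paper's model; children tuples are sampled with probability proportional to their empirical counts, expanding the left-most nonterminal first.

```python
# Toy PCFG sampler: repeatedly expand the left-most unexpanded internal node
# by sampling a children tuple from supp(val(n)) with p proportional to count.
import random

# empirical support with counts: parent type -> [(children tuple, count)]
SUPP = {
    "Expr": [(("Expr", "+", "Expr"), 1), (("Term",), 3)],
    "Term": [(("x",), 2), (("y",), 2)],
}
NONTERMINALS = set(SUPP)

def sample_children(parent):
    tuples, counts = zip(*SUPP[parent])
    return random.choices(tuples, weights=counts)[0]   # PCFG: p ∝ count

def generate(symbol="Expr", depth=0, max_depth=8):
    if symbol not in NONTERMINALS:
        return [symbol]                       # leaf token
    if depth >= max_depth:
        return ["x"]                          # crude depth cap for the sketch
    tokens = []
    for child in sample_children(symbol):     # left-to-right, depth-first
        tokens += generate(child, depth + 1, max_depth)
    return tokens

random.seed(0)
print(" ".join(generate()))   # e.g. a small expression over x, y, +
```

The paper's models keep this control flow but replace the count-based scores with the learned, language-conditioned scores of Equation (2).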
Within all models we adhere to the structure described in the previous section. The variation in models comes from three choices: how to represent the natural language \( \mathcal{L} \); how to represent the partial trees \( \mathcal{C}_{\leq n} \); and how to combine the above representations. As is now common (e.g., Kiros et al. (2013)), we focus on learning fixed-length real-valued vector representations for each component of the model. There are three classes of representation vector: a natural language vector \( l \in \mathbb{R}^D \) computed from a given \( \mathcal{L} \), a partial tree vector \( c \in \mathbb{R}^D \) computed from a given \( \mathcal{C}_{\leq n} \), and a production vector \( r \in \mathbb{R}^D \) that is unique to each parent-children \( (i, \mathbf{v}) \) pair. Finally, there are production-specific biases \( b_{i,\mathbf{v}} \).

3.1. Combining Representations

We experimented with two ways of mapping from representation vectors to scoring functions. The first is an additive model where \( s(\mathbf{v}, \mathcal{L}, \mathcal{C}_{\leq n}) = (l + c)^\top r + b \), and the second is a multiplicative model where \( s(\mathbf{v}, \mathcal{L}, \mathcal{C}_{\leq n}) = (l \odot c)^\top r + b \), where \( \odot \) denotes elementwise multiplication. The multiplicative model is similar to the multiplicative compositional models of Mitchell & Lapata (2010) and a special case of the factored model of Kiros et al. (2013). We have found reason to prefer the multiplicative interactions in the natural language-to-source code domain. A detailed explanation appears in Sec. 4.2 when we discuss results on a simple synthetic data set.

3.2. Natural Language Representations

Here we focus on how to map \( \mathcal{L} \) to \( l \). Our approach is to use a bag of words (BoW) assumption, learning a distributed representation for each word, then combining them as a simple average.
More specifically, in the BoW model, we let each natural language word \( w \) have a \( D \)-dimensional representation vector \( l_w \) that is shared across all appearances of \( w \), then we let \( l \) be the average of the representation vectors for words in \( \mathcal{L} \); i.e., \( l = \frac{1}{|\mathcal{L}|} \sum_{w \in \mathcal{L}} l_w \). While this representation may appear simplistic, we believe it to be reasonable when natural language takes the form of short search query-like utterances, which is the case in much of our data. In future work, as we turn towards learning from longer natural language utterances, we would like to experiment with alternative models like LSTMs (Hochreiter & Schmidhuber, 1997; Sutskever et al., 2014).

3.3. Partial Tree Representations

The final component of the model is how to represent decisions that have been made so far in constructing the tree. This involves extracting features of the partial tree that we believe to be relevant to the prediction of the next children tuple. The two main features that we use are based on the 10 previous tokens that have been generated (note that due to the definition of \( \mathcal{C}_{\leq n} \), tokens are generated in the order that they appear in the input source code text), and the 10 previous internal node types that are encountered by following the path from node \( n \) to the root. In both cases, padding symbols are added as necessary. Feature values (e.g., \( \text{int} \), \( \text{i} \), \( = \)) are shared across different feature positions (e.g., previous token, two tokens back, three tokens back). For each feature value \( \phi \), there is a representation vector \( c_{\phi} \). The \( c_{\phi} \) vectors depend only on the feature value, so in order to preserve the (position, value) pairs and not just the set of values, a different context matrix \( H_j \) is needed for each feature position \( j \).
We then modulate the feature vectors by a position-specific diagonal context matrix to get the \( c \) vector: \( c = \sum_{j=1}^{J} H_j c_{\phi_j} \).

3.4. Learning

At training time, we observe \( (\mathcal{L}, \mathcal{C}) \) pairs. Our goal is to learn parameters (representation vectors and context matrices) so as to maximize the probability \( P(\mathcal{C} \mid \mathcal{L}) \). All partial trees of \( \mathcal{C} \) can be constructed easily, so training amounts to maximizing the sum of log probabilities of each production given the associated natural language and partial tree up to the current prediction point. We approximate the objective using noise contrastive estimation (NCE) (Gutmann & Hyvärinen, 2012; Mnih & Teh, 2012), which eliminates the need to compute expensive normalizing constants. Letting \( k \) be a parameter of the NCE training and \( \Delta s(\mathbf{v}, \mathcal{L}, \mathcal{C}_{\leq n}) = s_\theta(\mathbf{v}, \mathcal{L}, \mathcal{C}_{\leq n}) - \log(k P_{\text{noise}}(\mathbf{v} \mid \text{val}(n))) \), the objective can be written as in Mnih & Kavukcuoglu (2013):
\begin{align}
E_{(\mathcal{L},\mathcal{C}_{\leq n}, \mathbf{v}) \sim D} \left[ \log \sigma(\Delta s(\mathbf{v}, \mathcal{L}, \mathcal{C}_{\leq n})) \right] + k \, E_{(\mathcal{L},\mathcal{C}_{\leq n}, \mathbf{v}') \sim \text{noise}} \left[ \log(1 - \sigma(\Delta s(\mathbf{v}', \mathcal{L}, \mathcal{C}_{\leq n}))) \right], \tag{3}
\end{align}
where \( \sigma \) is the logistic function, \( D \) is the data distribution, and \( \text{noise} \) is the distribution where \( (\mathcal{L}, \mathcal{C}_{\leq n}) \) pairs are sampled from the data distribution and then the children tuple \( \mathbf{v}' \) is drawn from the noise distribution, which can be conditional upon \( \mathcal{L} \) and \( \mathcal{C}_{\leq n} \). Our noise distribution \( P_{\text{noise}}(\mathbf{v} \mid i) \) is the posterior PCFG of the training data with a simple Dirichlet prior (so it only depends on the partial tree \( \mathcal{C}_{\leq n} \)).

For optimization, we use AdaGrad (Duchi et al., 2011). We initialize the biases \( b_{i,\mathbf{v}} \) to the noise PCFG distribution such that \( b_{i,\mathbf{v}} = \log P_{\text{noise}}(\mathbf{v} \mid i) \). The rest of the representations are initialized randomly around a central number with some small additive noise.
The \( l \) components are initialized with center 0; the \( c_{\phi} \) components are centered at 1 when using the multiplicative model, or at 0 for the additive model; and the diagonals of \( H_j \) are centered at \( \frac{1}{2} \).

4. Evaluation

In this section, we evaluate the bimodal source code language model, using natural language descriptions and C# code. We use Roslyn (.NET Compiler Platform) to parse C# source code snippets into parse trees. For each of the evaluation datasets, we create three distinct sets: the train set that contains 70% of the code snippets, the test1 set that contains the same snippets as the train set but novel natural language queries (if any), and the test2 set that contains the remaining 30% of the snippets with their associated natural language queries. Each snippet is described by a constant number of queries, uniformly sampled with replacement from all the queries describing it. The rationale for this choice is to avoid biasing training and evaluation towards code snippets that are expressed with disproportionately more natural language queries than other snippets.

Experimental Setup. The goal of our current work is to use the code language model to assist two retrieval tasks. The Snippet Retrieval Task ($L \rightarrow C$) refers to the problem of retrieving the most relevant snippet $C$, given a natural language query $L$. The reverse problem, i.e., the Query Retrieval Task ($C \rightarrow L$), aims to retrieve a natural language query $L$, given a specific snippet $C$. To evaluate the performance of the models, we sample uniformly 100 retrieval targets (i.e., snippet and query pairs). Then, for each target pair, we sample 49 distractor targets. We then compute the probability of each of the query-snippet pairs and rank them according to the per-production cross-entropy. Based on this ranking, we compute the mean reciprocal rank (MRR).
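The ranking protocol above can be sketched as follows. This is a minimal illustration, assuming scores where higher is better (e.g., negative per-production cross-entropies); the function name and the toy values are ours.

```python
# MRR sketch: each true target is ranked against its distractors by model
# score, and we average the reciprocal ranks over all targets.

def mean_reciprocal_rank(true_scores, distractor_scores):
    """true_scores[i]: score of the correct pair i (higher = better);
    distractor_scores[i]: scores of that target's distractor pairs."""
    reciprocal_ranks = []
    for s, ds in zip(true_scores, distractor_scores):
        rank = 1 + sum(1 for d in ds if d > s)   # 1-based rank of true pair
        reciprocal_ranks.append(1.0 / rank)
    return sum(reciprocal_ranks) / len(reciprocal_ranks)

# Two targets: one ranked 1st among its candidates, one ranked 3rd.
print(mean_reciprocal_rank([0.9, 0.4], [[0.5, 0.2], [0.7, 0.6]]))  # (1 + 1/3)/2
```

In the paper's setup each target has 49 distractors, so a perfect model achieves an MRR of 1.0 and random ranking about 0.09.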
As a baseline, we include a natural language only (NL-only) model that does not take into account the tree representation \( c \) and thus has \( s(\mathbf{v}, \mathcal{L}, \mathcal{C}_{\leq n}) = l^\top r + b \). This model can be interpreted as a PCFG conditioned on natural language.

4.1. Synthetic Data

We generate synthetic data on a limited domain to test the ability of the model to learn in a controlled environment. The Text dataset is concerned with simple text operations that may be performed on strings. String operations include splitting delimited strings (i.e., delimited with comma, tab, space or new line), uppercasing or lowercasing the letters of a word, counting the number of characters, retrieving a substring of a string, getting the length of a string, and parsing a string to a double. An aggregation operation may also be applied; such operations include concatenating strings, finding the minimum or maximum of some (numeric) elements, summing, counting, finding distinct elements, averaging numerical elements, and retrieving the first or last element of a list. We synthesize LINQ queries (MSDN) that correspond to all the type-correct operations and a large number of synthetic language queries for each snippet. The resulting dataset contains 163 code snippets with 27 natural language queries in each of the train, test1 and test2 datasets.

We then train ($D = 20$, 100 iterations) and evaluate the log-bilinear models on the synthetic data. Results are shown in Table 1. The multiplicative model performs the best, while the additive and the natural language models have inferior performance. Since this is a synthetic dataset in a limited domain, we can achieve very high MRR. By using the model to generate snippets given natural language queries, we observe that it can correctly generate previously unseen snippets from new natural language queries.
For example, the test natural language query "each element parse double separated by a tab and get max" returns the snippet

```csharp
var res = input_string.Split('\t')
    .Select((x) => Double.Parse(x))
    .Max();
```

The model was able to generate this snippet although it never saw it before. From training snippets such as

```csharp
var res = input_string.Split(' ')
    .Select((x) => Double.Parse(x))
    .Min();
```

the model had learned the correct mapping between the natural language and source code, generalizing successfully.

### 4.2. The Importance of Multiplicative Combination

Looking at Table 1, the multiplicative model has a clear performance advantage. Indeed, the multiplicative and the additive models have different representational capacities. While the gating behavior of multiplicative models has been discussed previously (Memisevic & Hinton, 2007; Taylor & Hinton, 2009; Kiros et al., 2013), our aim here is to explain the importance of these multiplicative interactions in the context of source code modelling, and to point out a concrete difference in their representational abilities.

Table 1. Mean Reciprocal Rank for the Text synthetic dataset for the two retrieval problems.

<table>
<thead>
<tr> <th>Task</th> <th>Model</th> <th>Train</th> <th>Test 1</th> <th>Test 2</th> </tr>
</thead>
<tbody>
<tr> <td rowspan="3">$L \rightarrow C$</td> <td>multiplicative</td> <td>0.986</td> <td>0.988</td> <td>0.921</td> </tr>
<tr> <td>additive</td> <td>0.890</td> <td>0.805</td> <td>0.919</td> </tr>
<tr> <td>NL-only</td> <td>0.876</td> <td>0.817</td> <td>0.803</td> </tr>
<tr> <td rowspan="3">$C \rightarrow L$</td> <td>multiplicative</td> <td>0.995</td> <td>0.995</td> <td>1.000</td> </tr>
<tr> <td>additive</td> <td>0.860</td> <td>0.883</td> <td>0.892</td> </tr>
<tr> <td>NL-only</td> <td>0.917</td> <td>0.895</td> <td>0.845</td> </tr>
</tbody>
</table>

Table 2. Synthetic example: Modality $L \in \{1, 2\}$ modulates modality $C \in \{a, b\}$ for the target space $\{p, q\}$. The composition operation $\triangleright$ can be either an addition or a multiplication.
$$
\begin{array}{ccl}
\text{Modality } L & \text{Modality } C & \text{Required Relationship} \\
\hline
1 & a \rightarrow p & (l_1 \triangleright c_a)^\top r_p \gg (l_1 \triangleright c_b)^\top r_p \\
1 & b \rightarrow q & (l_1 \triangleright c_b)^\top r_q \gg (l_1 \triangleright c_a)^\top r_q \\
2 & a \rightarrow q & (l_2 \triangleright c_a)^\top r_q \gg (l_2 \triangleright c_b)^\top r_q \\
2 & b \rightarrow p & (l_2 \triangleright c_b)^\top r_p \gg (l_2 \triangleright c_a)^\top r_p
\end{array}
$$

As a concrete example, suppose the natural language representation $l$ takes one of two values, denoting a query asking to iterate in row major order ($l_1$) or column major order ($l_2$). Similarly, the context representation $c$ has two possible values, denoting the context of being about to declare an identifier within an outer for loop ($c_a$) or an inner for loop ($c_b$). Finally, assume our data always has $i$ as the row index and $j$ as the column index, and our goal is to choose whether to use $i$ ($r_p$) or $j$ ($r_q$). In this example, the natural language modality $L$ needs to act as a switch that inverts the meaning of the context. This can be done by the multiplicative model, but cannot be done by the additive model. Table 2 formalizes this claim. The required relationships come from writing down the constraints implied by the above situation. When substituting the composition operation $\triangleright$ with $+$, the $l^\top r$ terms cancel and we are left with ($L = 1$ case)

$$c_a^\top r_p \gg c_b^\top r_p \;\land\; c_b^\top r_q \gg c_a^\top r_q \quad (4)$$

and ($L = 2$ case)

$$c_a^\top r_q \gg c_b^\top r_q \;\land\; c_b^\top r_p \gg c_a^\top r_p, \quad (5)$$

which are contradictory (the first inequality of (4) cannot hold together with the second inequality of (5)) and thus impossible to satisfy. The inadequacy of the additive model resembles the inability of a single perceptron to learn an XOR relationship. In contrast, it is easy to show that the multiplicative model (i.e. substituting $\triangleright$ with $\odot$) is able to satisfy the inequalities in Table 2.
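To make the claim concrete, one-dimensional embeddings already suffice for the multiplicative model. The following Python sketch is our own illustration (the specific values are arbitrary choices, not from the paper); it checks all four inequalities of Table 2 under elementwise multiplication, while the derivation above shows that no additive choice can satisfy them:

```python
# Illustrative one-dimensional embeddings; values are our own choices.
l = {1: 1.0, 2: -1.0}       # natural-language representations l_1, l_2
c = {"a": 1.0, "b": -1.0}   # context representations c_a, c_b
r = {"p": 1.0, "q": -1.0}   # target representations r_p, r_q

def mul_score(li, ci, ri):
    # (l ⊙ c)^T r reduces to a product of scalars in one dimension.
    return l[li] * c[ci] * r[ri]

# The four requirements of Table 2: under L=1, a→p and b→q;
# under L=2 the mapping flips to a→q and b→p.
assert mul_score(1, "a", "p") > mul_score(1, "b", "p")
assert mul_score(1, "b", "q") > mul_score(1, "a", "q")
assert mul_score(2, "a", "q") > mul_score(2, "b", "q")
assert mul_score(2, "b", "p") > mul_score(2, "a", "p")

# For the additive composition (l + c)^T r, requirements (4) and (5)
# demand both c_a^T r_p >> c_b^T r_p and c_b^T r_p >> c_a^T r_p,
# a contradiction, so no additive embeddings can pass these checks.
```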
In the early development of these models, we encountered situations where this issue manifested itself, and we believe it (perhaps in softer versions) to be an important property of multimodal models of source code.

### 4.3. Real-World Datasets

We now create two real-world datasets to evaluate the model's ability to achieve good performance in the retrieval tasks. We mine C# snippets from two sources. First, we use StackOverflow\(^1\), a question-and-answer site that is commonly used by developers. StackOverflow contains questions in natural language and answers that may contain snippets of code. StackOverflow data is freely available online through the StackExchange Data Explorer. We extract all questions and answers tagged with the C# tag and use the title of the question as the natural language query and the code snippets in the answers as the target source code. We filter out any questions that have fewer than 2 votes, answers that have fewer than 3 votes, posts that have been viewed fewer than 1000 times, answers that have no code snippets, and snippets that cannot be parsed by Roslyn. We also remove snippets that contain more than 300 characters, assuming that longer snippets will be less informative. The goal of this filtering is to create a high-quality corpus with snippets that have been deemed useful by other developers. We use the abbreviation SO to refer to this dataset. Similarly, Dot Net Perls\(^2\) is a popular site with C# tutorials. We scraped the site for code snippets along with the natural language captions they are associated with. We refer to this dataset as perls-all. For both datasets, to increase the natural language data, we additionally use data from a large online general-purpose search engine, adding search-engine queries that produced clicks leading to any of the StackOverflow or Dot Net Perls web pages in the original dataset. We remove any queries that mapped to more than 4 different snippets, since they are probably vague.
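The SO filtering rules listed above can be collected into a single predicate. The sketch below is our own illustration; `parses` stands in for a call to a real parser (the paper uses Roslyn), and the function signature is an assumption, not the authors' pipeline:

```python
def keep_post(question_votes, answer_votes, views, snippet, parses):
    """Return True if a StackOverflow question/answer pair survives the
    filtering described in the text: at least 2 question votes, at least
    3 answer votes, at least 1000 views, a snippet that is present,
    parseable, and at most 300 characters long. Illustrative only.
    """
    return (question_votes >= 2
            and answer_votes >= 3
            and views >= 1000
            and snippet is not None
            and len(snippet) <= 300
            and parses(snippet))
```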
For each of the two datasets we create the three sets, train, test1 and test2, as explained at the beginning of this section. The sizes of the resulting datasets are shown in Table 3, and samples of the datasets are shown in Table 4 and in the supplemental materials of this paper. For Dot Net Perls, we also create the perls-captions dataset that contains only one natural language description for each snippet (the caption), excluding the queries from the general-purpose search engine. perls-captions does not have a test1 set, since we have no alternative natural language descriptions. We randomly sample five full datasets (train, test1, test2) from the original data and train our models. The evaluation results are reported in Table 5. The multiplicative model achieves the highest MRR for both retrieval problems, and overall the performance on the $C \rightarrow L$ task is significantly better than the performance on $L \rightarrow C$. Samples of retrieved queries ($C \rightarrow L$) are also shown in Table 4. Figure 3 shows how the performance of the models changes for different values of $D$. As expected, as $D$ increases, MRR improves with diminishing returns. Also, the perls-all and perls-captions datasets, which are smaller and sparser, show only minor improvements for $D$ larger than 50 and are more prone to overfitting. Finally, the additive model fails to improve significantly as $D$ increases.

**Qualitative Analysis.** To qualitatively analyse the performance of the models, we use the trained models as conditional generative models of snippets given an input test query.
<table>
<thead>
<tr> <th>Snippets</th> <th>SO</th> <th>perls-captions</th> <th>perls-all</th> </tr>
</thead>
<tbody>
<tr> <td>Train</td> <td>24812</td> <td>1467</td> <td>1467</td> </tr>
<tr> <td>Test1</td> <td>17462</td> <td>-</td> <td>1467</td> </tr>
<tr> <td>Test2</td> <td>11469</td> <td>328</td> <td>328</td> </tr>
</tbody>
</table>

**Table 3. Size of the real-world datasets**

\(^{1}\)http://stackoverflow.com
\(^{2}\)http://dotnetperls.com

*Bimodal Modelling of Source Code and Natural Language*

| $\mathcal{C}$ | $\mathcal{L}$ | Retrieval Results |
| --- | --- | --- |
| `while (number >= 10) number /= 10;` | how to get ones digit, first digit of int, get first two digits of a number, get ones place of int, get specific digit, how to get the first number in a integer, get specific digit, how to get a digit in int, get the first 3 digits of a number | 1. **how to get ones digit** 2. string generate with number of spaces 3. check digit in string 4. number within certain range of another 5. integer between 3 and 4 |
| `string SearchText = "7,true,NA,\false:67,\false,\false:5"; string Regex = @"\btrue\b"; int NumberOfTrues = Regex.Matches(SearchText, Regex).Count;` | count how many times same string appears, how to search a character maximum no in a file, how to count the number of times a string appears in a list, determine how many times a character appears in a string, how to search a character maximum no in a file, count how many times a string appears in another string | 1. string [] 2. setvalue letters 3. truncate string to a length 4. replace multiple groups 5. efficient way to count zeros in byte array |
| `using (var cc = new ConsoleCopy("mylogfile.txt")) { Console.WriteLine("testing 1-2"); Console.WriteLine("testing 3-4"); Console.ReadKey();}` | write to file or console, copy all console output to file, console output to a file, console app output log, write console, send output to console window and a file, add log file to console app | 1. do not overwrite file 2. **copy all console output** 3. open file path starting with 4. copy file in addition to file 5. hashing table |
| `path=Path.GetFullPathInternal(path); new FileIOPermission(FileIOPermissionAccess.Read, new string[] { path }, false, false).Demand(); flag = InternalExists(path);` | check for file extension, how to tell if a directory exists, exist, determine a file exist on shared folder, check if list of files exists, how to tell if a directory exists createifexist file, excel file does not exist | 1. wpf get directory name from path 2. **determine a file exist on shared folder** 3. open file dialog class 4. create directory pathname 5. load binary file to variable |
| `using System; class Program { static void Main() { string input = "Dot Net Perls"; char[] array = input.ToCharArray(); for (int i = 0; i < array.Length; i++) { char let = array[i]; if (char.IsUpper(let)) array[i] = char.ToLower(let); else if (let == ' ') array[i] = '-'; else if (let == 'e') array[i] = 'u'; } string result = new string(array); Console.WriteLine(result); } }` | how do i replace asingle string character, single character in array, modify string at, char to caps, single character in array, change string to, replace a string of characters, replace character in string position, change one char in a string, how to replace a character in a string at a position, how do i replace asingle string character, how to modify a char in string, replace at position string | 1. get number of character in a string 2. remove a value from a list 3. check if selected tab is null or not 4. **convert string to** 5. modify string at |

**Table 4.** Examples from datasets and natural language query retrieval examples. To retrieve the queries we restrict the method by removing all natural language descriptions that are assigned to the target snippet, except for one. All samples are from the test2 datasets.

Table 5. Macro-averaged Mean Reciprocal Rank (MRR) and standard error for 5 random splits and for the two retrieval problems. $D = 50$ for all models.

<table>
<thead>
<tr> <th>Task</th> <th>Model</th> <th>SO Test 1</th> <th>SO Test 2</th> <th>perls-captions Test 2</th> <th>perls-all Test 1</th> <th>perls-all Test 2</th> </tr>
</thead>
<tbody>
<tr> <td rowspan="3">$L \rightarrow C$</td> <td>multiplicative</td> <td>0.182 ± 0.009</td> <td>0.170 ± 0.012</td> <td>0.254 ± 0.017</td> <td>0.441 ± 0.036</td> <td>0.270 ± 0.016</td> </tr>
<tr> <td>additive</td> <td>0.099 ± 0.008</td> <td>0.105 ± 0.005</td> <td>0.152 ± 0.057</td> <td>0.078 ± 0.003</td> <td>0.106 ± 0.010</td> </tr>
<tr> <td>NL-only</td> <td>0.120 ± 0.008</td> <td>0.125 ± 0.005</td> <td>0.090 ± 0.002</td> <td>0.239 ± 0.024</td> <td>0.205 ± 0.013</td> </tr>
<tr> <td rowspan="3">$C \rightarrow L$</td> <td>multiplicative</td> <td>0.434 ± 0.003</td> <td>0.413 ± 0.018</td> <td>0.650 ± 0.012</td> <td>0.716 ± 0.007</td> <td>0.517 ± 0.012</td> </tr>
<tr> <td>additive</td> <td>0.218 ± 0.011</td> <td>0.211 ± 0.013</td> <td>0.356 ± 0.017</td> <td>0.426 ± 0.041</td> <td>0.309 ± 0.011</td> </tr>
<tr> <td>NL-only</td> <td>0.248 ± 0.008</td> <td>0.261 ± 0.008</td> <td>0.145 ± 0.013</td> <td>0.599 ± 0.018</td> <td>0.453 ± 0.015</td> </tr>
</tbody>
</table>

Figure 3. Performance on the datasets for different values of $D$. The graph shows the relative performance compared to the multiplicative model at $D = 50$ as shown in Table 5.

We do not expect the model to generate perfect snippets that match exactly the target snippet, but we hope to see the correlations that have been learned from the data. Two random examples from the datasets follow.
**perls-all Query:** dictionary lookup

**Generated:**

```csharp
using System;
using Generic;
class Generic {
    static void dictionary() {
        dictionary.ContainsKey(ContainsKey);
    }
}
```

**SO Query:** comma delimited list trailing comma

**Generated:**

```csharp
foreach (string Split in Join) { return string.Join(',','
```

## 5. Related Work

In recent years, the use of probabilistic models for software engineering applications has grown. Hindle et al. (2012); Nguyen et al. (2013); Allamanis & Sutton (2013); Tu et al. (2014) have argued that even simple \( n \)-gram-based models can improve over traditional code autocompletion systems. Maddison & Tarlow (2014) built a more sophisticated generative model of source code that is closely related to the source code model used in this work. Other applications include extracting code idioms (Allamanis & Sutton, 2014), code migration (Karaivanov et al., 2014), inferring coding conventions (Allamanis et al., 2014) and type inference (Raychev et al., 2015). Movshovitz-Attias et al. (2013) use simple statistical models like \( n \)-grams for predicting comments given a piece of source code. It is worth noting that comments are often focused on "why?" rather than "what?", which makes the task rather different. Gulwani & Marron (2014) synthesize a restricted domain-specific language for spreadsheets given a natural language query, using translation-inspired algorithms. Searching source code is still an active research area, but most work (Bajracharya et al., 2014; Keivanloo et al., 2014) has focused on retrieving snippets given code tokens, or on retrieving snippets of code via text available in some surrounding context. Another related area is that of semantic parsing, where the goal is to map a natural language utterance to a logical description of its meaning (Zelle & Mooney, 1996; Zettlemoyer & Collins, 2005; Liang et al., 2013).
The difference between our setting and these is that the connection between the natural language and the target code is much looser. We do not expect the natural language to describe step-by-step how to construct the code; a semantic parse of the natural language would bear little resemblance to the target source code. These systems also often require additional hand-specified knowledge about the mapping between natural language and the logical forms, for example, to create a lexicon. We note that Kushman & Barzilay (2013) weaken the assumption that language and target are well-aligned in a natural-language-to-regular-expression application, but the method is specific to regular expressions, and the mapping is still more aligned than in our setting.

## 6. Discussion

While the task considered in this work is very hard, the results are reasonably good. On the perls-captions data, we are able to rank the true caption for a previously unseen piece of code amongst the top few when presented against the distractor captions. As we move to noisier natural language (perls-all) and noisier code snippets (SO), performance degrades, but only moderately. Interestingly, it appears easier to pick the proper natural language for a given piece of code than it is to pick the proper code for a given piece of natural language. We think this is because there is less variability in the code, so picking apart subtle differences is more difficult. Another interesting observation is that the models incorporating the structure of the code consistently and soundly outperform the models that do not. Our interpretation is that ignoring code structure forces the model to account for correlations in the source code via the natural language, which makes the task harder while also increasing the risk of overfitting. While the dataset sizes we have used are moderate, we think a promising path forward is to find larger datasets.
Some ideas for where to find these include generalizing the model to handle a larger set of programming languages, and/or learning from longer text snippets, such as those found in a programming language specification document. This will likely require more sophisticated representations in the natural language component. Finally, we think there is value in establishing the analogy between multimodal source code and natural language models, and multimodal image and natural language models. As these models improve, more applications will become possible, and we are particularly excited by the potential for cross-fertilization between these two application areas.
The Horus and Ensemble Projects: Accomplishments and Limitations

Ken Birman, Bob Constable, Mark Hayden, Jason Hickey, Christoph Kreitz, Robbert van Renesse, Ohad Rodeh and Werner Vogels

Abstract– The Horus and Ensemble efforts culminated a multi-year Cornell research program in process group communication used for fault-tolerance, security and adaptation. Our intent was to understand the degree to which a single system could offer flexibility and yet maintain high performance, to explore the integration of fault-tolerance with security and real-time mechanisms, and to increase trustworthiness of our solutions by applying formal methods. Here, we summarize the accomplishments of the effort and evaluate the successes and failures of the approach.

Index Terms– Reliable multicast, fault tolerance, distributed systems security, distributed computing, automated verification, real-time cluster computing.

I. A BRIEF HISTORY OF FAULT-TOLERANT PROCESS GROUP MECHANISMS FOR RELIABLE DISTRIBUTED COMPUTING

We begin by reviewing the historical trends in distributed computing leading to the present research effort. Brevity prevents us from including a comprehensive bibliography, hence we focus on aspects most directly tied to our work. Readers seeking additional background are referred to [Bir97, Bir99a, Kes97]. Prior to 1985, distributed computing was dominated by the Internet, and the Internet (in turn) was dominated by point-to-point communication mechanisms providing best-effort reliability. Although certain services provided forms of replication (for example, the network news program, the DNS, "yellow pages" (later renamed NIS) and the Xerox Clearinghouse system), application-level support for replicated data was lacking, and even replicated services were typically hardwired to support predetermined sets of clients.
Applications built over the Internet employed TCP, UDP and FTP, treating the implementations of these protocols and the mechanisms supporting Internet routing and DNS address resolution as opaque components of the network itself. During the period from 1983 to 1985, Cheriton and Zwaenepoel at Stanford extended the basic IP communication suite to support what they called distributed process groups [CZ85]. They used these groups in support of application-initiated broadcast, and proposed a number of group-based services which used broadcast to provide some form of parallelism or accelerated response. Inspired by their work, our fault-tolerant communications effort at Cornell proposed a distributed computing model based on process groups, but extended by the introduction of formal semantics for error handling [BJ87]. The approach that we proposed (soon adopted by several other research groups) became known as reliable process group computing, or virtually synchronous process group computing. In essence, a virtually synchronous process group provides automatically managed membership for application programs, which are permitted to join and leave groups and are informed of membership changes by upcalls. Also provided are multicast interfaces supporting ordered message delivery, and a means of initializing new members when they join the group - we call this the state transfer problem. To ensure that the solution is powerful enough to permit replication of data, membership changes are synchronized with respect to multicast sending and delivery, and state transfer is implemented to appear atomic with respect to membership changes. Figure 1 illustrates this model graphically, where time advances from left to right. We see a process group within which multicasts are being used to update the states of members. When new members join, state transfer is shown by a thick arrow.
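As a rough illustration (our own sketch, not code from Isis or its successors), the membership, multicast and state-transfer guarantees just described can be mimicked in a toy, single-process simulation; all names are invented:

```python
class ProcessGroup:
    """Toy single-node model of a virtually synchronous group: view
    (membership) changes are delivered as events totally ordered with
    respect to multicasts, and a joiner receives a state transfer that is
    atomic with respect to its view change. Real systems achieve this
    with distributed protocols; this is purely illustrative.
    """
    def __init__(self):
        self.members = {}    # member name -> replicated state (delivered messages)
        self.events = []     # totally ordered log of group events

    def join(self, name):
        # State transfer: copy an existing member's state, then deliver
        # the view change; no multicast can interleave in between.
        template = next(iter(self.members.values()), [])
        self.members[name] = list(template)
        self.events.append(("view", sorted(self.members)))

    def multicast(self, msg):
        # Every member of the current view delivers the message.
        for state in self.members.values():
            state.append(msg)
        self.events.append(("mcast", msg))
```

After `join("p")`, `multicast("m1")`, `join("q")`, `multicast("m2")`, members `p` and `q` hold identical state: `q` received `m1` via the state transfer and `m2` as a regular delivery.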
Notice that arrows in the figure are drawn to suggest that events occur synchronously - as if all members that experience the same event experience it at the same moment in time. Virtual synchrony is "virtual" in the sense that without using a synchronized distributed real-time clock, a running program will be unable to distinguish an actual execution from some closely synchronous one, such as the one in the figure. In reality, however, the events in a virtually synchronous execution may be highly asynchronous. The major benefit of this approach is that by relaxing synchronization in ways that participating processes can't detect, we are able to provide very high performance. Yet the execution model is intuitively simple and makes it easy for application developers to implement very complex, fault-tolerant, distributed services.

The virtual synchrony model was rapidly adopted by research and industry groups world-wide [Bir99a]. Some successes associated with our work on the Isis Toolkit include the overhead display systems that show stock price quotes and transactions on the floor of the New York Stock Exchange [Gla98], the entire communications architecture of the Swiss Exchange [PS97], the console clustering architecture used in a new generation of air traffic control technology recently rolled out in France [Bir99a], the control subsystem of several major VLSI fabrication plants (AMD, Siemens, Texas Instruments), and a number of mobile telephony products. Military uses of the technology included an intelligence monitoring and reporting technology implemented by NSA, a prototype for the next generation of the AEGIS Naval radar and communications system (called HiperD, this was the basis of the SC-21 standard, which in turn is the basis for DD-21, an important military communications standard expected to have wide-ranging impact during the coming decades) and certain applications associated with Ballistic Missile Command and Control.

(Report dated January 2000; Cornell University, Department of Computer Science, Ithaca, NY 14853. Published in Proc. of the DARPA Information Survivability Conference & Exposition (DISCEX '00), January 25-27, 2000, Hilton Head, SC. Approved for public release; distribution unlimited.)
These successes can be traced to the ability of the model to support replicated data, to provide high availability (in contrast to database replication methods, which guarantee recoverability but at the cost of sometimes needing to wait for a failed system to recover before services can resume - a source of potentially long outages), and to support load-balancing within small cluster-styled servers. Yet while these were important accomplishments, virtual synchrony also presented many drawbacks [Bir99a]. Early implementations of the model were monolithic and relatively inflexible: a system like Isis was built from floor to ceiling with one form of communication in mind, and to the degree that one wished to turn features on or off, the technology tended to come with huge numbers of specialized interfaces that the programmer needed to learn and use selectively. For example:

- Isis supported several forms of message ordering. The more costly forms of ordering were also the easiest to use, but the performance hit was considerable, so developers were often forced to use the cheaper orderings selectively.
- When an application supports multiple, overlapping process groups, there are many options for the way that events should occur within processes belonging to more than one group. Control over these options was required because no simple default emerged.
- Some applications required that messages be encrypted, but this carried a significant cost in Isis, so the feature was only enabled as needed.

Moreover, at the time Isis was developed, the state of the art for object-oriented design was very primitive. CORBA and DCOM/OLE had yet to be introduced, and even RPC had yet to be standardized. As a consequence, early systems like Isis, which used object-oriented designs and concurrent styles of programming, were forced to introduce their own solutions.
For example, Isis had a widely used threads implementation at the time that CMU first began work on pthreads, the threads library that ultimately became standard in Linux. Over time, developments in these areas made Isis less and less compatible with commercial trends. An additional side-effect of having large numbers of very demanding users was that Isis became more and more complex over time, and it became harder and harder to convince ourselves that the technology itself was free of bugs. Such developments created the concern that process group communication merely invites users to place all their eggs in one basket, and that the basket itself could break, exposing the entire application to mishap. Isis was relatively robust, but it took years to achieve continuous availability with the technology, and when an Isis protocol error surfaced, it could easily bring down an entire distributed system. II. GOALS OF THE HORUS AND ENSEMBLE PROJECTS Starting in 1990, we began work on the Horus system [RBM96] to try and overcome these first-generation considerations; Ensemble was subsequently started as a sibling of Horus in 1996. The basic idea underlying both projects is to support group communication using a single generic architectural framework within which the basic group communication interfaces are treated separately from their implementation. One can then plug in an implementation matching the specific needs of the application. To maximize flexibility, each group end-point instantiates a stack of what we call micro-protocols. The developer arranges for the stack used in support of a given group to provide precisely the properties desired from the group. Each micro-protocol layer handles some small aspect of these guarantees. Figure 2 illustrates this idea. 
Each process in a process group is supported by an underlying protocol stack; the stacks for the various members are identical, but the stacks used in different groups might be very different from one another.

Figure 1: Virtual synchrony model, showing executions of five processes (time advances left to right)

For example, one layer might overcome message loss by numbering each message and retransmitting lost messages in response to Negative Acknowledgement messages (NAKs). The layer would have an outgoing and an incoming side. Outgoing messages would have a header added to them containing a sequence number, and would be stored pending garbage collection. Incoming messages would be examined to make sure they are in sequence. If not, a NAK message is sent back to the original sender of the message soliciting a retransmission. A separate layer detects when all copies have been delivered (a property called stability) and triggers garbage collection for stable messages. A trivial example of the opportunity afforded by such an architecture is that the NAK layer might not be needed in some situations - namely, those in which the network doesn't lose messages. At runtime, based on the environment, the Horus system was capable of assisting the user in configuring a stack to provide reliability, inserting a NAK layer if necessary and omitting it if not. Of course, useful protocol stacks often contain many layers, and the kinds of layers one might selectively omit tend to be much more costly and less often needed than NAK. For example, security is often enabled selectively in Horus, the choice of protocol suite implementing virtual synchrony is often of importance (different protocols can give very different performance, and some are much better than others on specific hardware platforms), and different ways of doing multicast ordering are favored under different conditions.
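A NAK layer of the kind just described can be sketched in a few lines of OCaml (a toy illustration; the types and function names are ours, not the actual Horus layer interfaces):

```ocaml
(* Minimal sketch of a NAK-style reliability layer: the sender numbers
   outgoing messages and retains them for retransmission; the receiver
   delivers in-sequence messages and emits a NAK naming the first gap.
   Purely illustrative - not Horus code. *)

type msg = { seq : int; body : string }

type sender = { mutable next_seq : int; mutable pending : msg list }

let send s body =
  let m = { seq = s.next_seq; body } in
  s.next_seq <- s.next_seq + 1;
  s.pending <- m :: s.pending;            (* kept until declared stable *)
  m

let retransmit s seq =
  List.find_opt (fun m -> m.seq = seq) s.pending

type receiver = { mutable expected : int }

(* Returns [Ok body] for the in-sequence message, or [Error nak_seq]
   naming the first missing sequence number, to be sent back as a NAK. *)
let deliver r m =
  if m.seq = r.expected then begin
    r.expected <- r.expected + 1;
    Ok m.body
  end else
    Error r.expected

(* Garbage-collect messages once every member has them ("stability"). *)
let stabilize s stable_seq =
  s.pending <- List.filter (fun m -> m.seq > stable_seq) s.pending
```

Here `stabilize` stands in for the separate stability layer mentioned above, which detects full delivery and triggers garbage collection of the retained copies.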
Without belaboring the point, the approach provides enough flexibility so that application designers with very different goals can potentially agree on the sharing of a common infrastructure, within which their commonality is captured by layers that they share, and their differences reflected by layers built specifically for their special needs.

Figure 2: Layered Microprotocols in Ensemble

Moreover, Horus can potentially support execution models very different from virtual synchrony. Our early hope was that the protocol interfaces could be offered as a standard, and that implementations of such protocols as SRM [Flo95] and RMTP [Pau97], two widely popular scalable protocols with weaker reliability models, might be developed to run on the same platform. Unfortunately, in 1999 as this paper was being written, discussions of possible standards along these lines were still advancing very slowly within the IETF and OMG, two organizations that have shown an interest in developing such standards. (Readers interested in a more detailed discussion of the SRM and RMTP reliability model and a comparison with virtual synchrony are referred to [Bir97, Bir99b].) The Horus effort did more than simply support a layered stackable architecture. We also wanted to demonstrate that the performance of our architecture could be as good or better than that of a conventional monolithic architecture. We sought to provide real-time features, in response to a requirement coming from the Naval AEGIS application, where there was a need for cluster-style servers able to guarantee real-time event response even under stress (the application involved weapons targeting using radar tracks and had very tight time constants associated with acceptable responses). And whereas Isis was initially focused on securing its own abstractions, Horus was designed to offer security services on behalf of application developers who needed security key infrastructures for purposes of their own.
One can easily imagine that in responding to such varied needs, Horus could become very complicated. Although we managed to control complexity, we did find that the types of transformations we wanted to do on layered protocol stacks exceeded the capabilities of the available C compiler, and hence that quite a bit of hand-coding was needed to obtain high performance and to maximize flexibility. It was in response to these considerations that Ensemble was developed, starting in the Spring of 1996. As a system, Ensemble is rather similar to Horus, although rewritten using high level programming languages and tools [Hay97]. Our insight was that much of the complexity of Horus came from overcoming inefficiencies associated with stackable protocol layers coded in the C programming language. In contrast, Ensemble’s protocol suite is implemented using the O’Caml variant of the ML programming language [Ler97, Mac93]. This language is mathematical in appearance and there are powerful theorem proving tools, notably a system called NuPRL [Con86], available for expressing transformations and other types of operations on programs coded using O’Caml. In our case, we were successful in using O’Caml to code a basic set of protocol layers for Ensemble and then using NuPRL to produce optimized and transformed versions that, in Horus, would have required hand coding and hand optimization. The NuPRL approach is automated and provably correct while the manual Horus approach was a source of bugs. Moreover, we discovered that NuPRL can potentially do quite a bit more for us. The remainder of this paper focuses on the successes and limitations of Horus and Ensemble. Both projects are largely at an end now - Ensemble and Horus are both used by modest communities and a number of technology transition efforts should lead to their emergence in products for the mass market within the next few years. 
Meanwhile, our own effort at Cornell now focuses on what might be seen as third-generation issues that work to move beyond the limitations of the entire process group approach. For example, we are increasingly convinced that virtual synchrony has some basic scalability limitations that emerge from the model itself, and our new Spinglass project was born out of an insight into a new way to develop a scalable reliability model to overcome these limits. Virtual synchrony, we now believe, is simply better suited to "close-grained" cooperation on a scale of tens of members (certainly, less than one hundred group members), while the protocols we are using in Spinglass provide high reliability and steady data delivery to potentially thousands or millions of recipients. We imagine Spinglass as a technology one might use side by side with Ensemble or Horus, because it offers reliability guarantees that are provably weaker than those of the virtual synchrony model, and the virtual synchrony model remains necessary in many situations. An illustration of this arises in our discussion of the Ensemble security work, which combines Ensemble groups with Spinglass protocols. Similarly, we are continuing to work with NuPRL as a program verification and automated protocol transformation tool of unique power and flexibility. Whereas our initial work focused on using NuPRL to automate some of the protocol stack transformations needed to achieve high performance in stackable architectures, we are now pursuing a more ambitious goal: proving the correctness of the virtual synchrony implementation used in Ensemble, and perhaps of the security key management architecture running over this implementation. But this work, in turn, has revealed yet a third possible goal: the automated generation of provably-correct protocol stacks from relatively high level descriptions of goals.
It thus may be possible to see Ensemble much as a compiler used to bootstrap a compilation process - one builds a basic compiler in a first language for a new language, but then implements a second compiler directly in the new language and discards the original one. Our work could yield, within a few years, a completely new and self-supporting infrastructure for building correct group-communication protocols and for optimizing them to achieve extremely high performance. The remainder of this paper focuses on the accomplishments and limitations of Horus and Ensemble and the major technical challenges we face in transitioning the technology into major commercial product platforms, such as CORBA and COM/OLE.

III. HORUS EFFORT

Our work on the Horus system can be understood in terms of several distinct threads of activity, which were all conducted within the same framework and to a large degree interoperate: layering and its consequences, real-time issues, security mechanisms, and work on protocol performance and scaling. We consider these in turn.

A. Layered Protocol Architectures in Horus, Performance Issues

The initial focus of our work on Horus concerned its use of layering to simplify the design of the virtual synchrony protocols. When this work was begun, we looked closely at the x-Kernel architecture [PHO89], developed at the University of Arizona by Larry Peterson with similar goals. We found, however, that the x-Kernel was designed with point-to-point TCP-style protocols in mind. For our work on group communications, a more flexible and more standardized interface to each layer was needed. Accordingly, we developed what we now call the Horus Common Protocol Interface, or HCPI [vR95], as a standard interface to and between protocol layers.
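As a flavor of what a standardized layer interface looks like, here is a toy OCaml analogue (our invention, not the actual HCPI): each layer meets a common signature with handlers for events moving down toward the wire and up toward the user, illustrated by a "encryption" layer whose cipher is deliberately trivial.

```ocaml
(* Sketch of an HCPI-like layer interface: events travel "down" from the
   user toward the network and "up" from the network toward the user.
   The signature and the Crypt layer are illustrative inventions. *)

type event = Send of string | Recv of string | Control of string

module type LAYER = sig
  type state
  val init : unit -> state
  val down : state -> event -> event    (* toward the wire *)
  val up   : state -> event -> event    (* toward the user *)
end

(* A toy "encryption" layer: a byte shift stands in for a real cipher. *)
module Crypt : LAYER = struct
  type state = int                      (* the "key" *)
  let init () = 3
  let shift k c = Char.chr ((Char.code c + k + 256) mod 256)
  let apply k s = String.map (shift k) s
  let down k = function
    | Send body -> Send (apply k body)      (* encrypt outgoing bodies *)
    | e -> e
  let up k = function
    | Recv body -> Recv (apply (-k) body)   (* decrypt incoming bodies *)
    | e -> e
end
```

Because every layer meets the same signature, stacks can be assembled by composing such modules, which is exactly the property a standard inter-layer interface is meant to provide.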
The interface provides "up", "down" and "control" APIs, and operates under a model in which messages and other events travel from the user down the stack to the I/O interface, or from the I/O interface up to the user. For example, an encryption layer might receive outgoing messages from higher layers, use a key to encrypt the body of those messages, and then pass the message to the outgoing message interface of the next layer below. Incoming messages would, similarly, be decoded on arrival and discarded if corruption or tampering was detected. Over the 7-year period since this interface was first proposed, other groups including the OMG fault-tolerance standards group and the IETF reliable multicast research task-force have proposed creating standard architectural slots similar to the ones occupied by Horus protocol layers. Our effort has offered an updated HCPI interface to these organizations, but until the present, it seems that the aggressive use of layering adopted in Horus remains more advanced than what these organizations might consider. Layering gives rise to several forms of overhead. A message, traveling down the stack, may be examined by a whole series of layers, most of which do nothing at all to the message. When a layer does add a header to a message, it may need to assume that it is the only layer in the stack; hence, to add even a single bit, a layer may need to create a header large enough to hold an integer. By the time a message reaches the wire, it may have many bytes of largely empty headers on it, and may have skipped through as many as 20 or 30 layers that basically took no action.

During 1995, one of us tackled this issue, and developed a methodology for optimizing layers to avoid both forms of overhead [vR96]. Although this work was done separately from Horus, we considered it to be part of the overall technology base.
In essence, the approach involves compressing the headers by eliminating wasted space, and also separating headers into different types of data. Header information that remains constant after a stack is established is only transmitted once, and a typical message only carries headers that actually contain changing values - potentially a very small amount of data. Messages are aggregated (packed) to make optimal use of network packets. And, through a decomposition of each layer into data-touching and non-data-touching parts, it proves possible to short-circuit the path a message takes through the stack, reducing the critical path between the application and the wire to just a few instructions even for a very complex protocol stack. With these optimizations in place, the Horus Protocol Accelerator set a number of performance records. Running over a zero-copy communications architecture called U-Net [Von95] (similar to the Virtual Interface Architecture promoted by the VIA consortium), this version of Horus introduced only a few microseconds of overhead beyond the overhead of the network adaptor and drivers.

B. Real-Time Cluster Computing

Unfortunately, the ability to demonstrate high performance is not enough to achieve real-time responsiveness in some critical applications. Earlier, we noted that our work on Isis was adopted by the Navy for use in its AEGIS architecture. This system includes a number of cluster-style computer systems that are used to compute tracks for airborne objects detected by the AEGIS radar, and serve as the basis of weapons targeting applications. Since threats may be moving at very high speed, real-time response is vital even when failures occur within the cluster. A similar need arises in telecommunications switching architectures, where a co-processor may be asked how a call-establishment request should be routed; the SS7 architecture used in such settings requires 100ms response times even while a failure is handled.
Working with our group, Roy Friedman explored the use of Horus as a technology for cluster control in real-time applications of this sort [FB96]. He considered two styles of solution. In the first, Horus was used to implement small process-groups of two or three processes each, using load-balancing and fault-tolerant RPC mechanisms within these to guarantee that each request would be handled even if one or more failures occurred while the cluster was heavily loaded. With this approach, Friedman was only able to achieve a throughput of a few hundred requests per second, and the Horus failure detection timer (six seconds) emerged as a performance limit: a request might potentially be delayed, if a failure occurred under heavy load, until the detection timer was triggered. For the sorts of applications just mentioned, such delays are totally unacceptable. Friedman then developed a different solution in which Horus operates as a side-band mechanism for cluster control and data replication, but "offline" from the basic request loop. In this approach, Friedman was able to aggregate batches of requests and used hand-coded, highly optimized protocols for the basic request dispatch and handling communication paths. During the period before Horus discovered a failure, data might pile up, but Friedman used a number of compression schemes to minimize the amount and avoid overloading available buffering. The approach was a dramatic success: for the SS7 telephone architecture, Friedman now achieved 20,000 requests per second on a 64-node cluster, demonstrated that performance improvements were possible when the cluster size was increased, and was able to sustain 100ms response times even as nodes were taken offline, crashed, or restarted while the switch was under load [FB96]. Friedman's work illustrates, for us, both the power of Horus and a limitation. 
The benefit of this work was that for the first time, a way to use a cluster of computers in a time-critical fault-tolerance application was demonstrated. Yet the work was technically complex and suggests that unless these methods can be embedded into a very low level of the operating system (for example, into the clustering technology of the NT Clusters system), application developers will have great difficulty exploiting the approach. Horus, viewed from the perspective of this type of real-time application, is a necessary tool, but not sufficient. On the other hand, for high-value applications such as the AEGIS tracking service, it does seem clear that Friedman's work points to a methodology for achieving very high degrees of scalability and real-time responsiveness while tolerating faults.

C. Security in Group Communication Systems

The Horus system was also the setting for our initial foray into security for groups of participants in large networks. Working with Mike Reiter [RBvR94, RB94], we developed a means of securing the virtual synchrony model itself, so that only trusted processes would be allowed to join a process group, and so that group members could obtain a shared group key. This problem involves authentication at the time of the group join, and rekeying when a member joins or leaves (so that prior communication in the group, or subsequent communication, would not be accessible to the new member). Our work on security can be seen as complementary to work on group security arising directly from the Internet community. Recall that the DNS and routing services of the Internet replicate various forms of data. During the early 1990's it became important to secure the protocols used to update these, and the resulting key distribution and management problem became a classical topic for the security research community.
Here, the notion of group membership is much weaker than the one used in the virtual synchrony community, and there is no formal semantics for the execution model. Yet the superficial aspects of the security problem are very similar: we have a group of members, we wish to authenticate joining and leaving, and we plan to use the security keys to encrypt communication within the group. The Horus security mechanisms have advantages and disadvantages when compared to this more network-oriented form of security. The strong semantics of virtual synchrony groups certainly offers security benefits; within this model, one actually can formalize the question of which processes legitimately belong to a group and which ones do not, and when a process does belong to a group, there are strong guarantees about the state of the data it manages. But there are also disadvantages to the model, notably that it scales poorly beyond about 100 processes. Most experience with Isis was limited to groups of five to ten processes at a time [Bir99a], and it was only with great care that Isis applications spanning more than about 250 processes were developed successfully. In the Internet, 10,000 members of a DNS service might not be at all unreasonable, and one can imagine services containing millions of members. Yet scaling virtual synchrony to this degree seems not to be practical. (Our new project, Spinglass, might well provide this degree of scalability, but it uses a somewhat different execution model.)

IV. ENSEMBLE EFFORT

Earlier, we cited the "eggs in one basket" concern in regard to distributed systems models such as virtual synchrony, or process group security. While such approaches are beneficial to the application designer, whose task is greatly simplified by the strong guarantees of the system, if the model itself is violated as a consequence of a coding error or some unanticipated bug in the protocol itself, the application's correctness or security might be compromised.
One can reduce such concerns by exhaustive testing, simulation, or by writing papers in which the protocols employed by the system are presented rigorously and a formal proof of correctness is offered. Yet none of these options yields more than a modest degree of confidence in the ultimate correctness of the running code itself. Even now, more than a decade after the development of Isis, Isis applications that seek to provide continuous availability still exist, and occasionally, one of them reports a bug never before encountered. Such bugs can easily cause the entire distributed application to crash. As noted earlier, Horus was developed as a partial response to this concern: the technology sought to simplify the monolithic structure of systems like Isis by showing how complex protocols could be broken into simple microprotocols and stacked to match the needs of an environment. Yet Horus offers little to increase the confidence of a skeptic in the ultimate correctness of the protocols and of their implementations. Ensemble was developed primarily as a response to these concerns. Our fundamental idea was to begin using a new and extremely powerful generation of mathematically rigorous programming and verification tools as a means of moving beyond the hand-coded optimization schemes employed when performing inter-layer optimizations in Horus, and of actually proving the correctness of key components of the system. The technology evolved in several new directions, however, as time passed: we used Ensemble as the basis of initial work on a new protocol suite, and pursued a number of topics involving dynamic adaptation using Ensemble as the base. We also developed a new security architecture within Ensemble, moving well beyond the initial Horus version. This section summarizes each of these threads of research.

A. Formal Transformation of Protocol Stacks

Code transformation of the Horus system was impractical in part because of the choice of programming language: by coding Horus in C, we were able to achieve extremely high performance, but this language has limited capabilities for type checking and other types of correctness checking, and such weak mathematical semantics that formally expressed code transformations are largely impossible. Accordingly, a primary reason for building a new system - Ensemble - was to create a version of Horus coded in the O'Caml programming language, a dialect of ML having strong semantics and consequently suitable for analysis and transformation using formal programming tools. This decision was informed by previous success in using NuPRL to reason about a large ML system [AL92] and to reason about hardware [LLHA94]. We were also encouraged by the work at CMU by Harper and Lee on the FOX project to code protocols in SML [HL94]. Our decision to implement Ensemble in O'Caml compelled us to confront an initial challenge of a different nature. The ML family of languages is not traditionally known for high performance, and while O'Caml is compiled, we were concerned that it might not be possible to achieve performance comparable to that of Horus. Yet the verification of a system incapable of the desired level of performance would have been much less satisfying, since we hoped to demonstrate that production-quality distributed software can actually be proved correct. Mark Hayden, who coded the system, undertook a detailed study of this issue, and ultimately developed a methodology for protocol development in O'Caml that overcomes the most common efficiency issues encountered by users of the language. His accomplishment, which involved taking control of garbage collection and using O'Caml's language features very carefully, was reported in [KHH98].
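As a flavor of the style such a methodology relies on (our illustration, not Hayden's actual code), hot paths can avoid per-message allocation by reusing mutable records, keeping the garbage collector quiet:

```ocaml
(* Illustration of one allocation-avoidance idiom of the kind a
   GC-conscious OCaml methodology uses (our sketch, not Ensemble code):
   the hot path reuses a single mutable event record instead of
   allocating a fresh one per message, so the minor GC rarely runs. *)

type event = { mutable kind : int; mutable len : int; buf : Bytes.t }

(* One preallocated record, reused for every message on the fast path. *)
let scratch = { kind = 0; len = 0; buf = Bytes.create 1024 }

(* Fill the reused record in place; no heap allocation on this path. *)
let fill kind (payload : string) =
  scratch.kind <- kind;
  scratch.len <- String.length payload;
  Bytes.blit_string payload 0 scratch.buf 0 scratch.len;
  scratch

let payload e = Bytes.sub_string e.buf 0 e.len
```

The trade-off, of course, is that the caller must finish with one event before the next arrives; real protocol code manages such buffer lifetimes explicitly.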
Given an initial version of Ensemble, Hayden, Kreitz and Hickey set out to use a formal mathematical tool called NuPRL ("new pearl") to automate the sorts of optimizations that Van Renesse did by hand in developing Horus [vR96]. They approached this by teaching NuPRL to read Ensemble layers - in effect, NuPRL understands each layer as the "proof" of some property, namely the protocol guarantee implemented by that layer. NuPRL was then able to do several kinds of protocol transformations. For example, because a protocol stack appears as a nested function call to NuPRL, it was possible to request that NuPRL perform an inline function expansion of the code. The basis for all formal code manipulation is a formal semantics for a large subset of the O'Caml programming language in the logical language of NuPRL. Not long ago, the formalization of such a subset would have been cutting-edge research worthy of separate funding and a PhD thesis, but in this case, we were able to build on advances in the understanding of formal semantics and on the richness of the NuPRL type theory. Basically, the core of O'Caml is a subset of the NuPRL term language, and therefore type theory almost immediately provides a semantics for O'Caml [Kre97]. The method is now called a "shallow embedding". This method has been used to provide a formal semantics for significant extensions of ML in the direction of object orientation; see the work of Crary for example [Cra98]. NuPRL can perform partial evaluation of functions, and this opened the door to a category of optimizations similar to those used by Van Renesse. The approach begins by recognizing that as messages traverse a stack, the code path used may be a very small percentage of the code in the stack as a whole. For example, the virtual synchrony stack treats membership change events very differently from multicasts. If a message is a multicast and no membership change is occurring, the message may be nearly untouched within the stack.
A protocol stack in Ensemble looks like a set of nested function calls. Suppose that x is some form of outgoing event, such as a message to send or a membership change request. Then, Ensemble’s job is to evaluate $f_0(f_1(\ldots f_n(x)))$, where each of the $f_i$ is the code implementing some micro-protocol within the stack ($f_0$ is at the bottom and $f_n$ is at the top). Similarly, for an incoming event, Ensemble can be understood as evaluating the function $f_n(f_{n-1}(\ldots f_1(x)))$. Now, focus on the outgoing case, and suppose we call this entire nested function $f$. Imagine that we place an if statement in front of it, as follows: "if(is_a_msg(x)) f(x) else f(x)," where the predicate is_a_msg is true for messages and false for other types of events, such as group membership changes. (Not shown is an additional, implicit argument: the “state” of the protocol stack, which is updated when the stack executes and hence is shared by both function invocations). Viewing NuPRL as a form of optimizing compiler, the system can be asked to partially evaluate the function under the two cases: "is_a_msg(x)" is true, and "is_a_msg(x)" is false. Consider the first case: the predicate is true. Under the circumstances just described, very little code needs to be executed for messages, hence the function will collapse to just a few lines of code. "Dead" code branches (those NuPRL can recognize as never being executed) are deleted during the partial evaluation. In effect, we've produced an extremely optimized code path for the common case where we sent a multicast. Yet since the event either is or is not a message, the behavior of the original stack is unchanged! To generate the optimized code while guaranteeing its correctness NuPRL uses two levels of formal optimizations. - On the first, or static level, symbolic evaluation and logical simplification techniques are applied separately to the code of each micro-protocol. 
They result in formally proven layer optimization theorems, which show that the effect of passing an event x through the respective protocol layer, while assuming that a common case predicate (CCP) like "is_a_msg(x)" (or some other property of common events) holds, can be expressed by two or three lines of code. These optimizations are executed independently from the application protocol stacks and need to be redone only when the code of a micro-protocol is modified or when new micro-protocols are added to the Ensemble toolkit.
- The second, or dynamic level, depends on the particular protocol stacks designed by the application developers. Given the names of the individual micro-protocols occurring in this stack, it composes the corresponding layer optimization theorems into a formal stack optimization theorem that describes the effect of passing an event x through the whole stack while assuming that all individual CCPs hold. This is not trivial, because Ensemble's composition mechanism allows events to bounce between layers before leaving the stack instead of passing straight through it. Therefore the technique is based on composition theorems, which abstractly describe the effects of composing common combinations of optimized micro-protocols.

A stack optimization theorem not only describes a fast-path through a protocol stack but also the headers that the stack adds to a message. Since typical messages should only carry headers that actually contain changing values, all constant headers are eliminated before sending the message. For this purpose, the protocol stack is wrapped with code for compressing and expanding headers and then optimized again. In a final step the stack optimization theorems are converted into O'Caml code, which uses the CCPs as conditionals that select the bypass path in the common case and otherwise the normal stack, as illustrated in Figure 3 (the Transport module below the stack provides marshaling of messages).
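To make the shape of this transformation concrete, here is a toy OCaml sketch (all layers, event types, and predicates below are our invention; in Ensemble the bypass is generated and proven by NuPRL, not written by hand): a stack as nested function calls, a common-case predicate, and a hand-collapsed fast path that behaves identically.

```ocaml
(* Toy sketch of a protocol stack as nested function calls f0 (f1 (... x))
   and of the common-case bypass described in the text. Illustrative only. *)

type ev = Msg of string | ViewChange of string list

(* Outgoing direction: with stack = [f0; f1], compute f0 (f1 x). *)
let compose_down layers x =
  List.fold_left (fun e f -> f e) x (List.rev layers)

let seq_layer = function
  | Msg s -> Msg ("seq|" ^ s)           (* only messages get a header *)
  | e -> e

let memb_layer = function
  | ViewChange v -> ViewChange (List.sort compare v)  (* no-op for Msg *)
  | e -> e

let stack = [seq_layer; memb_layer]     (* f0 = seq_layer, f1 = memb_layer *)

let is_a_msg = function Msg _ -> true | ViewChange _ -> false

(* The collapsed common case: for plain messages the membership layer is
   a no-op, so only the sequencing code survives partial evaluation. *)
let fast_path = function
  | Msg s -> Msg ("seq|" ^ s)
  | e -> e

(* "if (is_a_msg x) f x else f x", with the true branch specialized. *)
let run x =
  if is_a_msg x then fast_path x        (* bypass path, common case *)
  else compose_down stack x             (* full stack for rare events *)
```

Here `run` plays the role of the generated conditional: the CCP selects the collapsed bypass for plain multicasts and falls back to the full stack for rare events such as view changes, so the observable behavior is unchanged.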
The resulting bypass program is proven to be equivalent to the original protocol in all cases, but generally more efficient in the common case. Using this methodology [Kre99], Kreitz, Hayden, Hickey, van Renesse and graduate student Xiaoming Liu showed that NuPRL can automatically achieve protocol speedups comparable to the ones that Van Renesse achieved by hand in Horus. Moreover, Liu and Van Renesse developed a version of the method for use in adapting Ensemble to match the protocol stack to the environment. With their work, one can dynamically pick a protocol stack with just the properties needed for a given setting, obtain an optimized version of the stack from NuPRL, compile the resulting O'Caml byte code into machine code, ship this code to a version of Ensemble, and run it as the protocol stack supporting some group. At present, we do the optimization offline, but the methodology could be extended to work on the fly. This work is reported in [Liu99].

Figure 3: Optimization in the Ensemble Architecture

B. Verification of Ensemble Stacks

Formal verification of a system such as Ensemble presents a major challenge because its many micro-protocols (currently over fifty) can be combined in literally thousands of different ways to provide a great variety of services. To verify Ensemble would mean to be able to treat the reasonable combinations of stacks and their services. Moreover, many of the individual protocols are quite sophisticated. Additionally, it is a major challenge to verify the actual system code - indeed, most verification efforts operate by verifying the correctness of abstracted descriptions of protocol or system components. While verifying these abstractions is an interesting first step, the goal that appealed to us was to work directly with production-quality protocol implementations, and ultimately to prove that the code actually running in the system has the properties required by the user.
This goal was a natural extension of our success in transforming the actual O’Caml code of Ensemble stacks. A key step in verification is the specification of properties. This has been a lively topic in the Horus and Ensemble research. Various temporal logics were tried, including TLA [Kar97], and an axiomatic approach was considered [CHTCB96]. In the end, the ground work for our approach was laid by Hickey and Van Renesse in collaboration with Nancy Lynch, whose research effort at MIT has developed a mathematical programming language called I/O Automata, or IOA [Lyn96]. Jointly with Lynch, they found a way to express virtual synchrony as an IOA, adapted the IOA language itself so that NuPRL could understand such a specification, and extended IOA so that it could be used in compositional settings, such as the Ensemble protocol stacks [HLV99]. We have found a particularly elegant way to formalize IOA using NuPRL class theory [BH98]. We can formalize both services and their implementations in the same style. Moreover the inheritance mechanisms of the formal class theory make it possible to inherit proofs of safety properties of a stack when new layers are added to it. This leads to a methodology that will make it easier to prove properties of stacks from proofs of components. The method seems to scale to stacks composed from many layers from a large base of micro-protocols. When this work has been completed, we will be able to produce highly optimized executable code from provably correct protocol stacks, dynamically adapting the stack to match the environment where the protocol will be used, and providing guarantees of reliability and security formally verified by a mathematical tool and characterized in a high level notation suitable for use in reasoning about applications built over the resulting process group. C. 
Synthesis of layers Once we know how to prove the correspondence of code fragments to IOA statements, we can also understand this as a means for compiling from IOA into protocols (which we can optimize using our trace driven optimization methodology). So the potential exists to avoid using hand-coded Ensemble protocol stacks in favor of these more automated protocol stacks produced using NuPRL as a compiler. The basis of this synthesis capability is the fact that global services, abstract protocol specifications and code layers are treated as modules in a common formal language. We can use our formal framework not only for verification purposes but also for the synthesis of protocols from specifications. Because the generated code is correct by construction, our framework supports the development of new communication protocols, which is a notoriously difficult task. A synthesis of protocol layers that implement a global service will be supported by a small collection of generic methods for transforming specifications of distributed systems. This methodology is not entirely new. Synthesizing algorithms from specifications by applying specification transformations is a well-known principle in program synthesis, and synthesis from proofs has been explored for many years as well [Wal69, BC85, Kre98]. The novelty of our approach lies in providing methods for the synthesis of distributed systems, which is by far more difficult than synthesizing serial algorithms. We describe four of the most common generic methods. 1) Replication of Global Values Applying this method will allow us to represent values locally. It creates a local copy of each value, ensures consistency by making each process broadcast these values, and introduces a unique token. Virtual synchrony (as specified in IOA, Extended Virtual Synchrony, or EVS) is a prerequisite for this step; lacking this property, consistency has to be ensured by more complex means. 
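The Replicate generic method can be pictured with a toy model. The sketch below is a hypothetical Python illustration (invented names; Ensemble itself is written in O’Caml): each process keeps a local copy of a formerly global queue, a unique token serializes updates, and every update is broadcast so all copies stay consistent, with virtual synchrony assumed.

```python
# Illustrative sketch of the "Replication of Global Values" step.
# A formerly global queue is replicated at each process; a unique
# token ensures only one process at a time issues an update, and
# the update is "broadcast" to every local copy.

class Process:
    def __init__(self, pid):
        self.pid = pid
        self.queue = []          # local copy of the global queue
        self.has_token = False

class ReplicatedQueue:
    def __init__(self, pids):
        self.procs = {p: Process(p) for p in pids}
        self.procs[pids[0]].has_token = True   # the unique token

    def append(self, pid, msg):
        if not self.procs[pid].has_token:
            raise RuntimeError("only the token holder may update")
        # broadcast: every copy applies the same update, so all
        # copies stay consistent (virtual synchrony assumed)
        for p in self.procs.values():
            p.queue.append(msg)

    def pass_token(self, src, dst):
        self.procs[src].has_token = False
        self.procs[dst].has_token = True

group = ReplicatedQueue(["a", "b", "c"])
group.append("a", "m1")
group.pass_token("a", "b")
group.append("b", "m2")
assert all(p.queue == ["m1", "m2"] for p in group.procs.values())
```

As the text notes, this step presupposes virtual synchrony; the copies are consistent but not yet robust to failures, which is what the Fault Tolerance method adds.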
2) Adding Fault Tolerance This method, which requires local values as prerequisite, makes the local values fault tolerant. It forbids access to values during a view change and implements merging of multiple copies after a view change. 3) Conversion to Message Passing This method assumes fault tolerance and generates the typical event-handler interface of Ensemble by converting each operation to a message. 4) Code Generation This method runs as the last step of a synthesis. It takes a specification that provides the event-handler interface and converts the abstract specification into executable code. Because of the close relation between abstract specifications and the representation of Ensemble's code in NuPRL, this step is straightforward. As an example, we describe the synthesis of a total order layer from the specification of the total order service. The total order service has a simple specification. In addition to the properties of EVS, it ensures that all processes in a view receive messages in the same order, and is called ETO. The total order specification represents this requirement with a global queue that orders all the messages sent in the view. A message can be received only if it is the next message in the global queue. The ETO specification is not directly implementable because its global queue contains global information, so the first step is to replicate the global value so that each process contains a local copy of the queue. The Replicate generic method introduces a token to ensure atomic access and consistency of the local copies, but the copies are not robust to failures. The next step is to apply the Fault Tolerance generic method, which prohibits access to the queue during a view change, and creates a fresh queue once the view change is completed. 
At this step, we have partitioned the service into local protocol layers, and in the final step we apply the Convert to Message Passing generic method to convert the layer actions (which refer to semantic events like view changes) to explicit message passing style. This final step can be completely automated. D. Scalable Security Architectures in Ensemble Ensemble has also been used as a base for an expanded effort to provide security for process group applications and systems [RBD99, RBH98]. As noted earlier, our work on the Horus system resulted in an initial mechanism for securing process group communication and the group abstraction. In our work on Ensemble, we have focused on expanding the capabilities of this basic idea and using group security keys in innovative ways. With respect to the basic security architecture, Ensemble implements a scheme (developed primarily by Ohad Rodeh, a researcher in Dolev's Transis group at the Hebrew University in Jerusalem) whereby asymmetric public keys can be "traded" for symmetric point-to-point keys and group keys. These symmetric keys provide a high speed path for signing messages and encrypting their contents, and can also be used by secured applications for application-specific security purposes. Rodeh's solutions are innovative in two ways. First, he employs a hierarchy of protocols for key management and key refresh, and has a particularly fast solution to the key refresh problem when processes join or leave a group. His algorithm rekeys within milliseconds, permitting a group to (potentially) change keys so rapidly that even if a key were broken, the adversary would gain access to just one or two multicasts before a new key was substituted for the compromised one [RBH98]. Additionally, Rodeh developed a fault-tolerant extension of the well-known Wong-Gouda-Lam tree-based key management architecture [Won98], a protocol that in its original form was centralized and not fault-tolerant. 
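The appeal of a tree-based scheme can be seen with a little arithmetic. The Python sketch below is illustrative only (not Rodeh's actual protocol): in a Wong-Gouda-Lam style key tree, each member holds the keys on its leaf-to-root path, so when a member leaves only those O(log n) keys change, and each replacement key can be distributed encrypted under the unchanged keys of the sibling subtrees.

```python
import math

# Back-of-the-envelope cost of rekeying a balanced binary key tree
# after one member leaves, versus a flat (one-key-per-member) scheme.

def keys_to_replace(n_members):
    # depth of a balanced binary key tree over n leaves: every key on
    # the departing member's leaf-to-root path must be replaced
    return math.ceil(math.log2(n_members)) if n_members > 1 else 1

def rekey_messages(n_members):
    # roughly two encryptions per replaced key, one under each
    # child subtree's (unchanged) key
    return 2 * keys_to_replace(n_members)

# a flat scheme would need n-1 messages; the tree needs O(log n)
print(keys_to_replace(128))   # 7 path keys to replace
print(rekey_messages(128))    # ~14 encrypted rekey messages
```

This logarithmic cost is what makes millisecond-scale rekeying plausible even as groups grow, which is the property the text attributes to Rodeh's algorithm.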
Rodeh's version scales easily within Ensemble's process groups, although these remain limited to perhaps one hundred or two hundred members [RBD99]. The scalability limits of Ensemble prevent Rodeh from using this technique to secure extremely large groups, which might have tens of thousands of members. For this purpose, however, Rodeh is exploring a very simple combination of Ensemble with our new Spinglass system. As mentioned earlier, Spinglass introduces a new and extremely scalable data point in the reliable group communications spectrum, offering a suite of probabilistic protocols that can be used to communicate reliably with huge numbers of processes even in networks subject to the most extreme forms of disruption. Rodeh is linking the secured Ensemble group mechanisms to the Spinglass mechanisms so that Ensemble can control a core group and this, in turn, can manage security keys on behalf of a very large group of leaf nodes. Inspired by the DNS architecture for the Internet, this approach allows typical users to talk to a local representative of the security hierarchy at low cost, while drawing on the strong properties of Rodeh’s Ensemble solution to guarantee rapid key refresh and other aspects of a strong security model. E. Dynamic Adaptation The third major research topic that has been explored primarily within the context of our work on Ensemble is concerned with dynamic adaptation. As described previously, the adaptation problem arises when an application expresses requirements for a process group that can be satisfied in more than one way, depending upon the environment within which the group runs [Liu99]. Two examples will illustrate this idea. The first is concerned with multicast ordering protocols. Over the past two decades, a great number of ordered, reliable multicast protocols have been developed. Abstractly, it is common to view such protocols as representing optimizations of total ordering. 
For example, in a process group where we happen to know that only one member will initiate multicasts, a protocol providing fifo ordering would actually be "strong enough" to satisfy a total ordering requirement. Even in a group where there are multiple senders, one faces such a choice. If the senders are willing to use a token passing or locking scheme, the most appropriate total ordering protocol would be one that exploits the mutual exclusion property - these include causal ordering protocols and token-based total ordering protocols. Lacking a locking scheme, one might still use a token based protocol, but other protocols such as time-stamped ordering protocols now have potential advantages. Broadly, the best choice depends upon the nature of the application, and the nature of the environment within which we run it. Similarly, the most appropriate security mechanism depends on the setting. Behind a firewall, one may face only a very benign security requirement, and limit "security" to some form of join authentication. Yet when the same application includes a group member running outside the firewall, it may be important to encrypt all group communication, and to authenticate even the messages used within the join protocol itself. Virtual synchrony offers an appealing way to think about adaptation. Each time the membership of a group changes, the virtual synchrony model announces this through a new process group view, synchronized with respect to ongoing communication among group members. Normally, the view is just a list of group members, ranked in some canonical manner. Our work on adaptation extends the same idea: each time the view changes the new view also includes a new protocol stack for each of the members. Within Ensemble, we’ve used this mechanism to change protocol stacks on the fly, with roughly the same (very small) overhead incurred when group membership changes because a process joins, leaves or crashes. 
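A toy model may make this view-carried adaptation concrete. The hypothetical Python sketch below uses invented names (the real mechanism lives in Ensemble's O’Caml layers): a view carries a protocol stack as well as a canonically ranked member list, and the appearance of a second sender triggers installation of a new view carrying a total-order stack.

```python
# Illustrative model of view-carried adaptation: each view names both
# the members and the protocol stack they should run; switching stacks
# reuses the same (cheap) new-view machinery as a membership change.

class View:
    def __init__(self, members, stack):
        self.members = sorted(members)   # canonical ranking
        self.stack = stack               # e.g. ["fifo", "frag", "udp"]

class Group:
    def __init__(self, members, stack):
        self.view = View(members, stack)
        self.senders = set()

    def install_view(self, stack):
        # in the real system: flush ongoing traffic, then install the
        # new view (and its stack) at all members virtually synchronously
        self.view = View(self.view.members, stack)

    def send(self, member, msg):
        self.senders.add(member)
        if len(self.senders) > 1 and "total" not in self.view.stack:
            # a second sender appeared: a single-sender fifo stack is
            # no longer "strong enough", so adapt to total ordering
            self.install_view(["total", "frag", "udp"])

g = Group(["a", "b"], ["fifo", "frag", "udp"])
g.send("a", "m1")
g.send("b", "m2")     # triggers the stack switch
assert g.view.stack[0] == "total"
```

The design point being modeled is that the stack switch piggybacks on the view-change protocol, so it carries roughly the same small overhead as a join, leave or crash.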
What we do is to associate with each stack a small software module responsible for monitoring the environment, watching for conditions under which some other stack would be more desirable than the one currently in use. When conditions change, this module triggers the new-view algorithm, arranging that the new stack be installed at all members in a virtually synchronous manner. For example, if we are using a total ordering protocol that is appropriate only with a single sender within a group, adaptation might be triggered when a second process first attempts to send: the attempt would cause the group to switch to a new total ordering protocol, at which point the interrupted send request would be allowed to complete. If we are using a security mechanism appropriate only within a firewall, adaptation might occur when a member from outside the firewall joins the group, and so forth. V. TECHNICAL CHALLENGES ASSOCIATED WITH TRANSITION One of the most difficult problems we’ve encountered in our research has been the challenge of technology transition. Broadly, it has been our belief that unless these technologies make the transition from academic demonstrations into broad commercial availability, they will not have the degree of impact that we seek and that DARPA hopes for in projects such as this. Yet in the area of networking and distributed computing, there is tremendous resistance to doing anything other than what the basic Internet supports. As noted earlier, our first project, the Isis Toolkit, achieved some degree of commercial uptake and had some notable successes. Through this process, Isis became a viable option for military and government projects and had an important influence on technology planning within the services, reflected in the DD-21 architecture. However, starting a company to commercialize Horus and Ensemble was unappealing to us. Our effort has made both systems available to "all comers" with very few restrictions of any kind, and at no fee. 
Cornell and the original Ensemble developers - Hayden and Vaysburd - provide some support, also for free. This has resulted in some uptake of the technology, and we have a small user community that includes some large companies (notably, Nortel and BBN) and many small ones. An exciting recent opportunity involves technology transfer into the restructured electric power grid. We are pursuing this topic jointly with a consortium organized by the Electric Power Research Institute, EPRI. To see a broad-based transition occur, companies of the size of Microsoft and Sun need to become interested in this technology area. There is some good news on this front: the industry as a whole is looking at group communication tools closely, and several standards organizations have been exploring possible standards for reliable multicast and object replication. As these trends advance, one can anticipate a first-class role for reliable multicast in standard operating systems, such as Solaris, Linux and NT, and with that development, technologies such as ours would find a natural home. Overall, we are encouraged by these developments, but we also see an argument for winding down the effort on Horus and Ensemble in favor of new directions. Accordingly, while continuing work on the NuPRL verification methodology, our overall project has shifted attention to a new technology based on a suite of highly scalable “gossip-based” protocols with probabilistic properties [Bir99b]. The protocols and their behavior take us into a domain rather different from virtual synchrony, while we wait and watch to see what the ultimate impact of our work over the past few years will be. VI. CONCLUSIONS Cornell research has placed process group computing, especially with virtual synchrony, on a firm footing. Over the course of the three projects we've conducted in this area, we've contributed techniques for achieving high performance, security, real-time guarantees and provable correctness. 
Although we were not able to scale virtual synchrony to very large settings, the gossip-based protocols developed in our work on Ensemble have taken on a life of their own, and form the basis of a new project (Spinglass) that promises to contribute a new and extremely scalable data point for the spectrum of reliable multicast protocols and applications. Over the years, the Cornell effort has helped enable some very high profile applications, and we've played an instrumental role in showing that these solutions really work. Down the road we expect that the technologies we pioneered will have a good likelihood of becoming standard in products from major vendors, that IETF standards will emerge for the area, and that CORBA will provide solutions - all traceable to our work and that of related projects. Our vision of the future is one in which reliable process group computing will reside side-by-side with scalable probabilistic technologies, of the sort now under development in our Spinglass system. Through this approach, we believe that reliable, secure distributed computing can become a commonplace reality on a wide range of systems, spanning large-scale WAN applications based on conventional workstations down to massive networks of very small devices such as might arise in future miniaturized control settings or future sensor networks. VII. REFERENCES
B1700 DESIGN AND IMPLEMENTATION by Wayne T. Wilner, Ph.D Burroughs Corporation Santa Barbara Plant Goleta, California May 1972 ABSTRACT Burroughs B1700 embodies a unique design tenet: the work done to accommodate definable machine structure from instruction to instruction is less than the work wasted from instruction to instruction when one machine structure is used for all applications. In other words, execution of machine language using procrustean hardware causes more inefficiencies than soft interpretation of arbitrary machine language on protean hardware. The programs on the B1700 are not represented in B1700 machine language but in "S-language", that is, some other computer's machine language or a machine language invented (by Burroughs or by the user) expressly for the program's application area. Interpretation of S-language is aided by: (a) bit-addressable memory, (b) memory access via address fields, (c) equal memory speed on all bit locations and bit string lengths, (d) clock-by-clock control over the effective width of processor data paths, registers, and structured logic elements, (e) soft microprograms, (f) English language microprogramming, (g) execution of microcode independently from main memory or control memory, (h) enough control memory to hold four interpreters and no limit on the number of interpreters that may be active at any given instant, (i) memory protection, (j) hard and soft interrupts, (k) stack organization, (l) Master Control Program taking full responsibility for efficient system utilization by means of interpreter multiprogramming and multiprocessing, user multiprogramming and multiprocessing, virtual control memory of $2^{24}$ (over 16 million) bits, virtual main memory of $2^{44}$ (over 17 trillion) bits, soft interpretation of I/O commands, and (m) automatic program profile statistics. 
Programs on the B1700 are independent of: location, storage organization, I/O organization, processor organization, peripheral idiosyncrasies, mix size, and system configuration (except for unique devices). Priced in the lowest system range, the B1700 offers time- and money-saving features not found on systems 100 times more expensive. The net result of this advanced system design is ease of use, simple programming, lack of conversion costs, improved utilization of system components, and better price/performance. Keywords: computer architecture, microprogramming, definable structure, B1700, S-language, interpretation 1. INTRODUCTION Procrustes was the ancient Attican malefactor who forced wayfarers to lie on an iron bed. He either stretched or cut short each person's legs to fit the bed's length. Finally, Procrustes was forced onto his own bed by Theseus. Today the story is being reenacted. Von Neumann-derived machines are automated malefactors who force programmers to lie on many procrustean beds. Memory cells and processor registers are rigid containers which contort data and instructions into unnatural fields. As we have painfully learned, contemporary representations of numbers introduce serious difficulties for numerical processing. Manipulation of variable-length information is excruciating. Another procrustean bed is machine instructions, which provide a small number of elementary operations, compared to the gamut of algorithmic procedures. Although each set is universal, in that it can compute any function, the scope of applications for which each is efficient is far smaller than the scope of applications for which each is used. Configuration limits, too, restrict information processing tasks to sizes which are often inadequate. Worst of all, even when a program and its data agreeably fit a particular machine, they are confined to that machine; few, if any, other computers can process them. 
In von Neumann's design for primordial EDVAC, rigidity of structure was more beneficial than detrimental. It simplified expensive hardware and bought precious speed. Since then, declining hardware costs and advanced software techniques have shifted the optimum blend of rigid versus variable structures toward variability. As long ago as 1961, hardware of Burroughs B5000 implemented limitless main memory using variable-length segments. Operands have proceeded from single words, to bytes, to strings of four-bit digits, as on the B3500. The demand for instruction variability has increased as well. The semantics of the growing number of programming languages are not converging to a small set of primitive operations. Each new language adds to our supply of fundamental data structures and basic operations. This shifting milieu has altered the premises from which new system designs are derived. To increase throughput on an expanding range of applications, general-purpose computers need to be adaptable more specifically to the tasks they try to perform. For example, if COBOL programs make up the daily workload, one's computer had better acquire a "Move" instruction whose function is similar to the semantics of the COBOL verb MOVE. To accommodate future applications, the variability of computer structures must increase, in yet unknown directions. Such flexibility is reminiscent of Proteus, the mythological being who could change his shape to that of any creature. 2. DESIGN OBJECTIVE Burroughs B1700 is a protean attempt to completely vanquish procrustean structures, to give 100% variability, or the appearance of no inherent structure. 
Without inherent structure, there are no word sizes or data formats—operands may be any shape or size, without loss of efficiency; there are no a priori instructions—machine operations may be any function, in any form, without loss of efficiency; configuration limits, while not totally removable, can be beyond state-of-the-art extremes; modularity may be increased, to allow miniconfigurations and supercomputers using the same components. 2.1 Design Rationale The B1700's premise is that the effort needed to accommodate definability from instruction to instruction is less than the effort wasted from instruction to instruction when one system design is used for all applications. With definable structure, information is able to be represented according to its own inherent structure. Manipulations are able to be defined according to algorithms' own inherent processes. As long as one can define a machine environment which is more efficient for solving one's problems than a contemporary machine design, one can attain more throughput per dollar. As we shall see, there are novel machine designs which are 10 to 50 times more powerful than contemporary designs, and which can be interpreted by the B1700's variable-image processor using less than 10 to 50 times the effort, resulting in faster running times, smaller resource demands, and lower computation costs. 3. GENERAL DESIGN To accomplish definable structure, one may observe that during the next decade, something less than infinite variability is required. As long as control information and data are communicated to machines through programming languages, the variability with which machines must cope is limited to that which the languages exhibit. Therefore, it is sufficient to anticipate a unique machine environment for each programming language. In this context, absolute binary decks, console switches, assembly languages, etc., are included as programming language forms of communication. 
Let us call all such languages "S-language" ("S" as in "simulated", or equally "semi", "sourced", "specialized", or "emulated"). Machines which execute S-language directly are called "S-machines". The B1700's objective, consequently, is to emulate existing and future S-machines, whether these are 360's, FORTRAN machines, or whatever. Rather than pretend to be good at all applications, the B1700 strives only to interpret arbitrary S-language superbly. The burden of performing well in particular applications is shifted to specific S-machines. Throughput measurements, reported below, show that the tandem system of:

```
application program
  interpreted by
S-machine
  interpreted by
B1700
```

is more efficient than a single system when more than one application area is considered. It is even more efficient than conventional design for many individual application areas, such as sorting. To visualize the architectural advantage of implementing the S-machine concept, imagine a two-dimensional continuum of machine designs, as in Figures 1 and 2. Designs which are optimally suited to specific applications are represented by bullets (•) beside the application's name. The goodness-of-fit of a particular machine design, which is represented as a point (X) in the continuum, to various applications is given by its distance from the optimum for each application; the shorter the distance, the better the fit and the more efficient the machine is. Figure 1 dramatizes the disadvantage of using one design for COBOL, FORTRAN, Emulation, and Operating System applications. Figure 2 pictures the advantage of Emulating/Interpreting many S-machines, each designed for a specific application. Note that Emulation inefficiencies must be counted once for each S-machine, since they are all interpreted. Figure 1. Typical machine design positioned by goodness-of-fit to application areas. Figure 2. Typical B1700 S-machines positioned by goodness-of-fit to application areas. 4. 
HARDWARE REQUIREMENTS Varying the processor's image for each application area implies very specific hardware requirements. 4.1 Defined-Field Capability All information is represented by fields, which are (recursively) defined to be either bit strings or strings of fields; i.e., bytes and words do not exist. (a) All memory is addressable to the bit. (b) All field lengths are expressible to the bit. (c) Memory access hardware must fetch and store one or more bits from any location with equal facility. That is, there must be no penalty and no premium attached to location or length. (d) All structured logic elements in the processor can be used iteratively and fractionally under microprogram control, thus effectively concealing their structure from the user. Iterative use is required for operands which contain more bits than a functional unit can hold; fractional use is required for smaller operands. 4.2 Generalized Language Interpretation (a) The system should be capable of efficient interpretation of a variety of instruction formats. (b) Format of interpreted instructions should not be predetermined or limited. The structure of the system should not cause any significant difference in efficiency due to selection of format. (c) Interpretation should be by soft microprogram. (d) Microprograms should be changeable, stored and executed in main memory. Switching through external memory should be invisible to the microprogrammer. (e) Hardware must assist with the concurrent execution of many interpreters, to make switching as rapid as possible. Microprograms should be reentrant and recursively usable. Microprogram execution is a critical part of the B1700. Some of the objectives involved in the design include: fast entry and exit of microprocedures, compactness of code and economy of storage, and ease of writing and maintaining microcode. Microprograms must not be limited in size. 
Execution of microprograms should be invisible to the user and should not reflect any variation in microprogram efficiency due to size. Microfunctions must implement all present and foreseeable higher-level language functions efficiently but without prejudicing implementation of languages. Any function included solely for a single language should be in addition to basic microfunctions which could more generally implement the function. While the hardware requirements for defined-field design and generalized language interpretation have been stated so as to allow a varying processor image from microinstruction to microinstruction, they do not preclude taking advantage of a static processor image. For example, the number of bits to be read, written, or swapped between processor and memory can be different in consecutive microinstructions, but if an interpreted S-machine's memory accesses are of uniform length, this length can be factored out of the interpreter, simplifying its code. In other words, S-memory may be addressed by any convenient scheme; bit addresses are available, but not obligatory for the S-machine. With these hardware advances, language-dependent features, such as operand length, are unbound inside the processor and memory bus, except during portions of selected microinstructions. Some of these features have, until now, been bound before manufacture, by machine designers. Language designers and users have been able to influence their binding only indirectly, and only on the next system to be built. On the B1700, the delayed binding of these features, delayed down to the clock pulse level of the machine, gives language designers and users a new degree of flexibility to exploit. 4.3 Advanced Design On each newly designed system, professional responsibility dictates that previously proven advances in system organization be incorporated. 4.3.1 Virtual Memory (a) S-programs should not be limited in size; all address fields should be variable. 
(b) S-programs should not reflect the storage organization of the system. (c) Programmers should be given feedback on the size and make-up of their working-sets. 4.3.2 Stack Organization (a) Programs should be recursive and reentrant. (b) Subprogram entries and exits should be very fast, to encourage decomposition of programming tasks into small, comprehensible units. (c) Compilation and execution efficiency should not be dependent on a manufacturer's ability to solve register allocation problems. 4.3.3 Dynamic System Configuration (a) Code should be independent of system configuration, to allow addition and deletion of processors, memory modules, I/O channels, and peripherals while programs are running. (b) The system itself (not the user) should be responsible for full resource utilization; hence code should not have to change when the system is reconfigured. 4.3.4 Multiprogramming (a) The system should run as many jobs at once as are necessary, subject to working-set limits, to keep each resource fully utilized. (b) Code should be independent of the number of jobs in the mix, to provide equal efficiency when running alone as when running with 100 others. (c) Memory must be protected from all invalid references (read or write). (d) Hard interrupts are needed to manage asynchronous events with minimum overhead. 4.3.5 Multiprocessing (a) The system should allow extra processors to be added to memory-rich or peripheral-rich installations, to balance the system to user workloads. (b) Program state should be maintained outside of processors, to allow execution by any processor without excessive switching time. 4.3.6 Descriptor-organized I/O (a) System I/O is itself a unique application, deserving its own S-language and interpreter. (b) The I/O S-language interpreter may be soft micro-program or a separate, wired processor, incapable of interpreting other S-languages. 
(c) With self-describing I/O requests, processor participation in I/O is reduced, improving the system's ability to keep many peripherals in operation simultaneously.

4.3.7 Soft Interrupts

(a) Asynchronous and infrequent events should not require explicit code for their individual detection.

4.3.8 Profile Statistics

(a) Program behavior should be analyzed and reported back to the programmer, or filed for overall system analysis.
(b) Reports should be in terms of source language.
(c) Reporting which parts consume the most execution time is of primary importance.
(d) Compiler writers ought to be told how their languages are being used.
(e) By instrumenting soft microprogram interpreters for profile statistics gathering, overhead can be kept under 1%.

[Diagram: each processor has either 1-8 I/O channels or 1-4 microprogram memory modules of 16,384 bits each; each system has 2-256 S-memory modules of 65,536 bits each.]

Figure 3. B1700 Organization.

Available peripherals include: Card Readers, 300-1400 cpm models, 80 col.; 300-1000 cpm models, 96 col.; Card Punch, 300 cpm, 80 col.; Card Read-Punch-Print-Sort, 1000/1000/120/170 cpm, 96 col.; Card Record-Read-Punch-Print-Sort, 300/60/60 cpm, 96 col.; Line Printers, 1-900 lpm models, 132 col.; Paper Tape Peripherals; Disk Storage, 5ms-40ms models; Head-Track, 2200-4400 bpi models, movable arm, cartridge; Magnetic Tape, 7/9 Track, NRZI/PE models; Sorter-Reader, 600-1625 dpm models; Data Communications, 150Hz-48KHz+ models; Graphics; Terminal Computers, Teller Machines, etc.

5. SYSTEM ORGANIZATION

Extreme modularity improves the B1700's ability to adapt to an installation's requirements. There may be 1-8 processors connected to one another and to 2-256 65,536-bit main memory modules, interfaced by one or more field-isolation units, described later. Each processor also connects to 1-8 I/O channels or to 1-4 microprogram memory modules. (See Figure 3.)
With only one processor, the port interchange may be eliminated, as in Figure 4.

[Diagram of system organization with labels: 300 cpm 96-col. MFCU; 300 lpm 132-col. printer; dual-spindle, 20 ms. disk; control; channel; processor; FIU; S-memory.]

Figure 4. One of the smallest B1700's.

Rental of the system in Figure 4 is expected to be under $1500/month.

6. EMULATION VEHICLE

Any computer which can handle the B1700's port-to-port message discipline may employ a B1700 for on-line emulation. (See Figure 5.)

[Diagram of Emulation Vehicle.]

Figure 5. B1700 as an Emulation Vehicle.

Programs and data are sent to the B1700 for execution; I/O requests are sent back to the host, which uses its own peripherals for them. Interpreters are loaded via the B1700's console cassette drive. Present interface specifications expect one interpreter to be in M-memory at a time. Rental of the system in Figure 5 is expected to be under $2200/month.

7. DEFINED-FIELD DESIGN

7.1 Characteristics

The mechanisms by which the processor and microinstructions automatically handle variable operand lengths and formats have come to be called "bias" facilities. In addition to a normal complement of registers, functional units, and data paths, each processor has two extra registers: CPL, which specifies operand length, from one bit up to as many as the processor can handle, and CPU, which specifies the unit of information, viz., bit, BCD, USASCII byte, EBCDIC byte, along with some open codes for future use. Length-dependent operations, such as add and subtract, consult CPL to determine when carries occur out of the apparently highest bit position. Memory-accessing microinstructions may choose to access only as many bits as CPL specifies. Format-dependent combinatorial circuits, such as add and subtract, reference CPU to select binary or decimal results.
Thus, microprogram sequences which load CPL and CPU from data descriptors (or even from the computer operator's keyboard) during an instruction interpretation behave correctly whether binary or decimal operands are supplied and whatever size the operands are. The microprograms are invariant to actual operand details, so the B1700 hardware appears not to have an inherent structure.

Iteration over operands which are longer than the processor can handle is accomplished by two unique microinstructions, Bias and Count F. Operands are normally referenced by field-definition registers called F and S. These have 24-bit subregisters, FA and SFA, for the data's bit address, 16-bit subregisters, FL and SFL, for the data's length (bit strings are thus limited to 65,535 bits), and 4-bit subregisters, FU and SFU, for format information. The Bias instruction computes the format and number of bits to bring to the processor for an iteration by setting CPU from FU or SFU and by setting CPL to FL, SFL, itself, a literal value, or the minimum of any of these. The Count F micro uses CPL or a literal value by which to increment FA and decrement FL; this indexes through an operand by defining contiguous subfields on which the processor may operate.

To handle indefinitely long operands, then, one first writes a microprogram which assumes that the processor registers are indefinitely long. One suffixes the program with a Branch micro to repeat the code. One prefaces the program with a Count F and Bias pair: the Count F to define suboperands that are small enough to fit in the physical processor, and the Bias to compute the suboperand length (so that the operand need not be a multiple of the processor width), to load the bias registers, to test for completion, and to bypass the program when the long operand is completely processed. Such a microsequence can handle zero-bit to 65,535-bit operands indiscriminately.
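The Bias / Count F bookkeeping just described can be sketched in Python. This is an illustration, not Burroughs MIL: the processor width, the bit-string stand-in for memory, and all function names are assumptions for the sake of the example.

```python
# Illustrative sketch of iterating over a long operand with F-register
# bookkeeping (Bias computes the suboperand length, Count F advances
# the field definition). PROC_WIDTH is an assumed processor width.

PROC_WIDTH = 24  # bits the physical processor can handle at once

def process_long_operand(memory_bits, fa, fl, op):
    """Apply `op` to successive suboperands of a field FL bits long
    starting at bit address FA, at most PROC_WIDTH bits at a time."""
    while fl > 0:                          # Bias: test for completion
        cpl = min(fl, PROC_WIDTH)          # Bias: suboperand length
        chunk = memory_bits[fa:fa + cpl]   # bring CPL bits to the processor
        op(chunk)
        fa += cpl                          # Count F: increment FA ...
        fl -= cpl                          # ... and decrement FL
```

A 53-bit operand is processed as suboperands of 24, 24, and 5 bits; a zero-bit operand is bypassed entirely, matching the "zero-bit to 65,535-bit operands indiscriminately" claim.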
7.2 Bit-oriented Memory

To implement bit-oriented memory at low cost, one uses conventional memories and a memory-requestor interface which is called a field isolation unit, or FIU (see Figure 6). The FIU's tasks are to convert bit addresses, field lengths, and field directions (i.e., whether the address refers to the most- or least-significant bit) into conventional addresses, to align requestor bit strings with actual memory containers, and to mask off nonparticipating bits. During memory operations, bytes are read out of memory into the Memory Information Register, MIR; a bit string is extracted or inserted as the information passes through the rotator into either buffer; and then the buffer is gated to its destination.

Figure 6. Field Isolation Unit (Memory/Requestor Interface).

Any port on the interchange (see Figure 3) may read bit strings from memory. The port must supply a bit location, a string length (in bits), and a field direction (one bit meaning forward or reverse) to the FIU, which places the information into its Memory Address Register. Because fields may be manipulated in either direction, locations actually refer to between-bit positions, as illustrated in Figure 7. The thirteen bits from location 13 forward are the same thirteen as accessed from location 26 backward. This simplifies microprogramming by naming bit strings in a manner similar to the way in which we think about them.

[Diagram of bit strings and locations.]

Figure 7. Defined-field addresses labelling between-bit positions.

Given a field location, the FIU calculates which byte in the conventional memory contains the leading bit. On the B1700, there are four 9-bit memories (one bit of which is used for parity). Consequently, the low three bits of an address specify a bit position within a word, the next two bits specify one of the four memories, and the high 19 bits constitute a conventional address. The leading bit is in the word specified by the high 21 address bits.
This word, and the corresponding words in the other memories, are brought to the FIU's Read MIR. For example, the accessed words which satisfy a request for the 13-bit field beginning at location 13, forward, are shaded in Figure 8. The field itself is doubly shaded. From the low three bits and direction of a request, the FIU is told how many positions and in what direction to rotate the received field in order to right-justify it. The request length is used to create a mask which zeros the unneeded high bits. The isolated field is then sent to the port interchange, as shown in Figure 9.

Request: Location = 0000000000000000000 01 101 (bit 13); Length = 0000000000001101 (13); Direction = 0 (forward).

[Diagram of the four memories (parity bits not shown), with the words delivered to the FIU shaded and the requested field doubly shaded.]

Figure 8. Words (shaded cells) delivered to FIU satisfying request for 13 bits beginning with location 13, forward.

Figure 9. Read field isolate.

On the B1700, the field is available at a processor's port 668 nsec after the read request is initiated, assuming the port interchange and memory are both free. The port and FIU are involved during the last 500 nsec.

To write bit strings into memory, the FIU accepts a request (i.e., location, length, direction) and performs a read cycle to bring the receiving memory field to the Read MIR, right-justified through the rotator. It then accepts a bit string from the port interchange which goes through the Write MIR and is masked into the memory field at the rotator. Then the rotator returns the information to word-alignment and passes it through to the write buffer, where it returns to memory. Write cycles involve ports for 500 nsec and the FIU for 1169 nsec.
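The FIU's address arithmetic can be mirrored in a few lines of Python. This is a sketch of the address split and of between-bit field addressing as the text describes them, not of actual hardware behavior; the function names and the flat bit-string model of S-memory are assumptions.

```python
# Illustrative sketch of the FIU address arithmetic described above:
# low 3 bits = bit position within an 8-bit byte, next 2 bits = which
# of the four memories, high 19 bits = conventional word address.

def split_bit_address(loc):
    """Decompose a 24-bit bit address into (word, memory, bit)."""
    return loc >> 5, (loc >> 3) & 0b11, loc & 0b111

def read_field(memory_bits, loc, length, forward=True):
    """Return `length` bits at between-bit position `loc`; forward
    reads toward higher addresses, reverse toward lower ones."""
    return memory_bits[loc:loc + length] if forward else memory_bits[loc - length:loc]
```

For the Figure 8 request, `split_bit_address(13)` yields word 0, memory 1, bit 5 (binary 01 101); and, as in Figure 7, the 13 bits from location 13 forward are the same 13 bits as from location 26 backward.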
Since writes are performed as Read-Modify-Write, the addressed field, as it was before modification, is available in the read buffer for the requesting port. Thus, fields may be swapped between memory and a processor by one microinstruction. It is also possible to force two back-to-back requests, which gives any processor the ability to test, set, and restore a bit string in one uninterruptable operation. This is vital for administering multiprocessor, multiprogramming environments because it can be used to prevent system deadlocks.

Because the amount of information which resides with the processor is so large (registers plus M-memory), one processor uses only 20% of the available S-memory cycles, which enables one FIU to support several processors. In addition, almost 80% of processor requests are reads.

Of the 41 bits required to specify a memory request, typically 5-10 come from the S-instruction. The rest come from other S-machine state. They represent the context within which the S-machine is working. The length field, for example, is constantly 16 when the S-machine is the IBM 1130. Consequently, little work is needed to maintain such large address fields.

8. SOFT INTERPRETATION

All Burroughs-supplied interpreters rely on the B1700's Master Control Program (MCP) for all input/output, virtual S-memory, virtual M-memory, multiprogramming and multiprocessing of user programs, multiprogramming and multiprocessing of interpreters, and standard functions. The MCP is written entirely in a higher-level language (a synergism of features from COBOL, PL/I, and XPL) which is interpreted itself. To provide for smooth change of control, a microprogrammed interpreter-interfacing routine, named Gismo, exists in the beginning memory locations. By loading a pointer to another program into a processor register and clearing the microprogram address register, each interpreter transfers to Gismo.
Gismo uses the program pointer to establish processor context within the new interpreter. Subsequent microinstructions are taken from the new interpreter.

8.1 Interpreter-Machine Interface

Given this interpreter interface, simple, hardware-oriented tasks such as interrupt decoding and priority resolution, M-memory overlay, and I/O transfer can be included in the interface routine, simplifying the requirements of an interpreter. An interpreter has control of a processor until interrupted by another element in the system or by programmatic interrupt. Between each S-instruction interpretation (approximately every 35 usec), every interpreter must examine the processor's interrupt register to detect the need for change of control. If an interrupt is present, the interpreter calls (instead of transfers to) Gismo, leaving a constant in a register which directs Gismo to decode the interrupt. For simple functions such as timer interval or I/O transfer, Gismo actually performs the required actions. For other interrupts, Gismo returns an appropriate description to the calling interpreter. If the interpreter can handle the interrupt, it does so, and continues with the next S-instruction if all interrupts are quiet. If it cannot, it moves its S-machine's state outside of the processor, loads a constant which means "call MCP", and transfers to Gismo. Very little state needs to be saved because no S-instruction is in the middle of interpretation.

Real-time interrupts are distinguished by hardware, but are handled in exactly the same way. When Gismo is told to "call MCP", it may select a non-MCP program to process real-time interrupts (such as a COBOL routine which performs a pocket select for a document-sorter). Gismo is also called by the MCP to move interpreter segments into M-memory, by all programs to initiate I/O, and by the computer operator to perform some start-up or post-mortem utility functions.
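The interpreter/Gismo control flow of section 8.1 can be sketched as a loop. All class, method, and flag names here are illustrative stand-ins, not B1700 interfaces; the sketch only shows the order of the checks and hand-offs described above.

```python
# Control-flow sketch of section 8.1 (names are assumptions): between
# S-instructions the interpreter polls the interrupt register; Gismo is
# called to decode, and the MCP is invoked only when the interpreter
# cannot handle the interrupt itself.

def interpret(s_machine, gismo):
    while True:
        s_machine.execute_one_instruction()
        if s_machine.interrupt_register:
            desc = gismo.decode_interrupt(s_machine)   # a call, not a transfer
            if desc is None:          # Gismo performed the action itself
                continue              # (timer interval, I/O transfer)
            if s_machine.can_handle(desc):
                s_machine.handle(desc)
            else:
                s_machine.save_state_to_memory()  # little state to save: no
                gismo.call_mcp(s_machine)         # S-instruction is in flight
                return
```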
8.1.1 S-machine Switching

Note that change of control is between S-machines, which does not necessarily mean a different interpreter is needed. (When all user programs are written in the MCP's language, only one interpreter is active.) Each job is represented in S-memory according to Figure 10. All but one segment is read-only code. The one is called the "run structure" and consists of: I/O buffers; data; descriptions of devices and operands; and the run structure nucleus, which contains the job's S-machine state and MCP-needed control information. All segments (except the run structure nucleus) move into memory under control of the MCP. The data section may or may not be administered internally by a virtual memory discipline. Note that code segments are never written out of memory because they never change. The space they occupy is always available for other uses. The MCP's run structure includes an interpreter dictionary, each entry of which describes an interpreter (either active in S-memory or on disk).

[Diagram of a job's components: overlayable data segments; S-machine state (run structure); data definitions; file definitions; file buffers; overlayable program segments.]

Figure 10. Program components.

To reinstate a user's interpreter, the MCP extracts from the user's S-machine state the name of the interpreter being used. The interpreter name is looked up in the interpreter dictionary to yield a pointer to the interpreter code in S-memory. The MCP's interpreter then saves its S-machine state, loads the pointer into a hard register, and resets the Microprogram Base Register and Microprogram Address Register (to leave the MCP's interpreter's code space). The next micro is brought from Gismo, which uses the hard register to load the Microprogram Base Register, transferring microinstruction fetches to the new interpreter's code space.
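The reinstate sequence above reduces to a dictionary lookup plus two register writes. The following sketch uses hypothetical structures (the dictionary layout and field names are assumptions, not the MCP's actual tables):

```python
# Sketch of the reinstate sequence of 8.1.1 (data layout is assumed):
# look the interpreter name up in the MCP's interpreter dictionary,
# then re-base microinstruction fetches at the code found there.

def reinstate(processor, mcp, user_run_structure):
    name = user_run_structure["s_machine_state"]["interpreter_name"]
    entry = mcp.interpreter_dictionary[name]     # active in S-memory (or on disk)
    mcp.save_s_machine_state(processor)          # MCP interpreter saves its state
    processor.microprogram_base_register = entry["address"]
    processor.microprogram_address_register = 0  # next fetch: new code space
```

Because the association is symbolic, several interpreters for the same S-language can coexist under different names, as the text notes next.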
Associating S-machines and interpreters symbolically allows such things as several COBOL interpreters active in one mix (one designed for speed, another for code compaction, etc.), all employing the same S-language expressly designed for COBOL (that is, a COBOL-machine definition). To switch back to the MCP interpreter, a user interpreter obtains a pointer to the MCP's run structure from the user program's run structure and performs the identical procedure.

Interpreter switching is independent of any execution considerations. It may be performed between any two S-instructions, even without switching S-instruction streams. That is, an S-program may direct its interpreter to summon another interpreter for itself. This facility is useful for changing between tracing and non-tracing interpreters during debugging.

Interpreter switching is also independent of M-memory. The Microprogram Address Register always actually addresses S-memory. When M is present, special hardware diverts fetches to it whenever the Microprogram Limit Register indicates that M's contents mirror the portion of S-memory being addressed. Without M, no fetches are diverted, and Gismo sits in low S-memory.

8.2 Interpreter Management

Entries in the interpreter dictionary are added whenever a job is initiated which requests a new interpreter. Interpreters usually reside on disk, but may be read in from tape, cards, cassettes, data comm, or other media. They have the same status in the system that object code files, source language files, data files, compiler files, and MCP files all share: symbolically-named, media-independent bit strings. While active, a copy is brought from disk, to be available in main memory for direct execution. The location may change during interpretation due to virtual S-memory management, so microinstructions are location-independent.
At each job initiation and termination, the MCP rearranges M-memory for the processor being readied according to five strategies:

(a) Abundant M. Condition: all active interpreters can fit in M. Action: place all interpreters into M.

(b) Ample M. Condition: all active interpreters can be granted at least their minimum M request (usually 1000 words = 1000 16-bit micros). Action: divide M evenly and place part of each interpreter in M.

(c) Adequate M. Condition: several interpreters can be given their minimums, but not all. Action: give the MCP's interpreter about 1000 words; divide the rest into 1000-word blocks and swap all user interpreters in and out during reinstate operations.

(d) Precious M. Condition: only two interpreters can be given their minimums. Action: give the MCP its minimum; swap all users in and out of the rest.

(e) Bare M. Condition: only one interpreter can be given a minimum demand. Action: at each interpreter switch, place one interpreter into all of M.

Interpreter profile statistics show that 1000 micros (1000 words) account for over 99% of all instructions executed, even though most interpreters are 2000 words long. If a microprogrammer is prudent enough to rearrange his code according to usage, then an interpreter requesting 1000 words of M as a minimum may be as efficient as one requesting 2000 words.

8.3 Ease of Microprogramming

Writing microprograms for the B1700 is as simple as, and in some ways simpler than, writing FORTRAN subroutines:

(a) Microprograms consist of short, imperative English sentences and narrative comments.
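The five-way decision can be sketched as a single function. The thresholds below are one plausible reading of the conditions (a)-(e); the function and its labels are illustrative, not MCP code.

```python
# Decision sketch for the five M-memory strategies (a)-(e); thresholds
# are an assumed reading of the conditions, not the MCP's actual tests.

def m_strategy(m_words, interpreter_sizes, minimum=1000):
    """interpreter_sizes: full sizes (in words) of all active
    interpreters, the MCP's interpreter included."""
    n = len(interpreter_sizes)
    if sum(interpreter_sizes) <= m_words:
        return "abundant"   # (a) everything fits
    if n * minimum <= m_words:
        return "ample"      # (b) everyone gets at least a minimum
    if m_words >= 3 * minimum:
        return "adequate"   # (c) MCP resident; users swap 1000-word blocks
    if m_words >= 2 * minimum:
        return "precious"   # (d) MCP resident; users share the rest
    return "bare"           # (e) one interpreter at a time fills M
```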
For example, one subroutine in the FORTRAN interpreter reads as follows:

    * Decimal to binary conversion
    * Source: addressed by F; 1-13 digits
    * Destination: L Y, initially zero
    Decimal-to-binary
        Read 8 bits to T counting FA up and FL down   * obtain a char and address the next one
        * strip off zone bits
        Move T to X
        Call Add-X-to-LY
        If FL = 0 then exit
        Move L to X
        Call Multiply-XY-by-10
        Move L to Y
        Move T to L
        Go to Decimal-to-binary

(b) Knowledge of microinstruction forms is not beneficial. Although microprogrammers on other machines need to know which bits do what, on the B1700 there is no way to use that information. Once the function is given in English, its representation is immaterial. The B1700 microprogrammer has only one set of formats to worry about: those belonging to the S-language which he is interpreting.

(c) Multiprogramming of microprograms is purely an MCP function, carried out without the microprogrammer's knowledge or assistance. Actually, there is nothing one would do differently depending on whether or not other interpreters are running simultaneously.

(d) Use of M-memory is purely an MCP function; the resident interpreter interface alone can move information in and out of M. Other than rearranging one's interpreter according to usage, there is nothing one should microprogram differently depending on whether microinstructions are executing out of M-memory or S-memory. Maximizing use of system resources is beyond the scope of any individual program; responsibility lies solely with the MCP and the machine designers.

(e) Since all references are made symbolically, protection is easy to assure. Microprograms can reference only what they can name, and they can only name quantities belonging to themselves and their S-machines. Moreover, names cannot be artificially generated, as they can in FORTRAN (e.g., by negative subscripts, or by call-by-value parameters used in call-by-reference constructs).
(f) Calling out interpreters is simplified by the continuation of Burroughs' "one-card-of-free-form-English" philosophy of job control language. Figure 11 shows the control information which creates a new interpreter (1) from cards, and (2) from a disk file named XCOBOL/SOURCE.

(1) COMPILE XCOBOL/INTERP WITH MIL; DATA CARD
(2) COMPILE XCOBOL/INTERP WITH MIL; MIL FILE CARD = XCOBOL/SOURCE

Figure 11. Typical MCP control information for creating interpreters.

(g) Association of interpreters and S-language files occurs at run-time. Figure 12 shows the control information which executes a COBOL program named FILE/UPDATE with (1) the usual COBOL interpreter, and (2) another interpreter named XCOBOL/INTERPRETER.

(1) EXECUTE FILE/UPDATE
(2) EXECUTE FILE/UPDATE; INTERP = XCOBOL/INTERPRETER

Figure 12. Typical MCP control information for executing programs.

(h) There is no limit to the number of interpreters that may be in the system (except that no more than 2^44 bits are capable of being managed by the B1700's present virtual memory property, so a 28,000-bit average interpreter length means there is a practical limit of 628,292,362 interpreters... many more than the number of S-languages in the world).

9. VIRTUAL MEMORY

On the B1700, S-language addresses may be 44 bits long (disregarding the possibility of using magnetic tape as a backup for main memory), even though each system will have 16 million bits of S-memory or less. M-addresses are 24 bits long, even though each system will have 65,536 bits of M-memory or less. Virtual memory is the mechanism by which the B1700 accommodates memory references to more bits than are physically attachable.
9.1 Virtual M-memory

As control is switched to an interpreter, two registers are set by the M-memory manager: the Microprogram Base Register, which contains the memory address of the first interpreter microinstruction, and the Microprogram Limit Register, which contains the position, relative to the start of the interpreter, of the end of the portion held in M-memory. Micros are always addressed relative to the start of an interpreter. When a reference is made which exceeds the Limit Register, the relative address is added to the Base Register and an S-memory fetch is initiated.

This is not a true virtual memory scheme, since early interpreter locations are always in M-memory and later locations never are. It does have the necessary property, however, that the actual amount of M-memory never impacts the program. Interpreters execute identical sequences of instructions for a given task as the amount of M-memory varies. The microprogrammer never takes cognizance of the actual amount of M-memory that is present. (He should, of course, arrange his interpreter with the most often used parts first.)

9.2 Virtual S-memory

The B1700 MCP maintains a large disk area for program pieces that are not in use and can be kept outside of main memory. These pieces may be any number of bits long (up to 2^24). Whether they are segments, pages, arrays, or otherwise depends on the S-machine from whose environment they came. All Burroughs S-machines designed for specific programming languages (e.g., COBOL, FORTRAN, RPG, BASIC, ALGOL) make references through descriptors, or interpretable pointers. These descriptors define areas within an S-machine's data or code space; references are relative to the start of the described area. The descriptors themselves are relative to the start of the S-machine's space, or to an MCP back-up area. When a reference selects an area which is not in memory, the MCP initiates a disk request and temporarily runs another job.
When the absent area is brought in, its descriptor is changed to indicate the new location. The reference is retried when the job is selected again. S-machines are free to manage their own data and code spaces, with or without the MCP's assistance. Within an S-machine, only its own data and code spaces are accessible. Each machine environment is represented as if it began in location zero and extended throughout all memory, possibly up to 2^44 (over 17 trillion) bits' worth. The B1700 monitors all references to main memory to verify that they lie between locations contained in the hardware's Base and Limit registers, which are set by the MCP during reinstating. Illegal references cause a hard interrupt which transfers control from a user interpreter back to the MCP's interpreter, preventing meddling in other S-machines' areas.

10. STACK ORGANIZATION

Other hardware makes S-machine stacks easy to implement. The ability to read and write in both directions in S-memory, and to count the field-definition registers' subfields up and down independently, gives microprograms read-and-pop and write-and-push operations which can be carried out by one microinstruction.

11. DYNAMIC SYSTEM CONFIGURATION

Since no information (other than a few MCP tables) contains absolute memory locations, the presence or absence of particular memory modules is irrelevant. Should one fail, it may be taken off-line and repaired without disturbing the rest of the system. Conversely, when memory modules are added to the system, the B1700's location-independent segments can be placed in them as soon as they are brought on-line. No code in the MCP nor in any user program need ever be revised due to memory reconfiguration. Likewise, the identity of any processor, I/O channel, or peripheral is not represented in any user program or MCP routine. When particular devices are needed, their identity is looked up using a symbolic reference each time a device is accessed.
Consequently, devices may leave the system without inhibiting any program from running (unless the MCP is not able to create a pseudodevice to hold the I/O requests until the real device is available again). Further, devices may enter the system and be fully utilized without changing any user or MCP code.

12. MULTIPROGRAMMING

A misunderstood concept, multiprogramming refers to the interlaced processing of independent programs using as much of an entire system's resources as are required. It is usually confused with partitioning, which is the interlaced execution of independent programs using part of a system's resources. Under multiprogramming, two three-tape sorts which use 24K of core can be run together on a three-tape, 24K system (the MCP must utilize three pseudo-tapes); a partitioned system needs six tapes or 48K or both.

A simple way to implement multiprogramming is to represent each program in main memory exclusively; that is, no state information or temporary results are kept in a processor... everything is available in memory. On the B1700, this is true of each S-machine. To interlace processing of each program, imagine directing the processor to execute exactly one instruction from each program, in round-robin fashion. Since everything necessary for any instruction's execution is represented in memory, no difficulties are encountered in passing from one program to the next. Such an approach, however, denies the efficiencies which can be obtained by successive instruction executions in one processor.

The B1700 takes an intermediate approach which is made feasible by its descriptor-organized virtual memory scheme. Each program executes until it needs some MCP assistance, either for I/O or for virtual memory management; at that point its state is recorded in memory and another task is taken up. It is a happy discovery that such breakpoints occur often enough to keep the MCP attentive to the needs of many jobs.
This scheme is further refined by allowing programs to become dormant, which permits their state information to be taken out of main memory. Subsequent I/O operations on which dormant programs may be waiting carry some form of identification which ties them to the dormant program and causes program state information to be brought back.

13. MULTIPROCESSING

Multiprocessing is the concurrent execution of more than one processor on independent programs. The processor-independent program state, which is used to keep multiprogramming simple, automatically allows any program to be resumed by any processor, as long as each processor can address all of memory, communicate with the entire I/O system, and has access to the interpreter named in the program's run structure (see Figure 10). All of these conditions are always true of B1700 processors.

14. DESCRIPTOR-ORGANIZED I/O

Because I/O processing is often not directly dependent on subsequent program steps, greater throughput can be achieved by overlapping I/O processing with other processor activity. So, to reduce B1700 processor involvement with I/O, requests take the form of descriptors (interpretable programs) whose effect, when interpreted, is the I/O function. To activate a request, a processor sends a port-to-port message across the port interchange. The message locates an I/O descriptor in main memory which an I/O processor will interpret. Non-I/O processors are thus relieved of the intricacies of device and channel communication.

On one-processor B1700 systems, the only I/O descriptors which are fabricated are those which are self-evident to the device controls themselves. Each contains a literal device name and an opcode for the device, as well as the addresses to be used for information transfer in and out of memory. On multi-processor B1700 systems, the opportunity to create arbitrary descriptors is present, enabling file-oriented activities, such as record accessing, searching, and sorting, to be carried out in the I/O realm.
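The interlacing scheme of section 12, run-until-MCP-assistance-is-needed with all state kept in memory, can be sketched with Python generators standing in for jobs. This is an illustration of the breakpoint idea only; the scheduler, the generator protocol, and the trace format are assumptions.

```python
# Sketch of section 12's interlacing: each job runs to its next
# "breakpoint" (a point where it needs MCP assistance), its state stays
# entirely in memory, and the MCP takes up another task.

def run_mix(jobs):
    """jobs: list of (job_id, generator); each generator yields when it
    needs MCP assistance and returns when the job is done."""
    trace = []
    while jobs:
        job_id, gen = jobs.pop(0)
        try:
            next(gen)                   # run to the next breakpoint
            trace.append(job_id)
            jobs.append((job_id, gen))  # state stays in memory; requeue
        except StopIteration:
            trace.append((job_id, "done"))
    return trace
```

Because everything a job needs is in memory at each breakpoint, any job can be taken up next, which is also why the same state model supports multiprocessing in section 13.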
In addition, new device disciplines can be accommodated by new microcode for the I/O processors.

15. SYSTEM PERFORMANCE MONITORING

15.1 Profile Statistics

Whereas hardware receives extensive and penetrating scrutiny while it is being designed, software is normally constructed with only the programmer's intuitions about its efficiency to guide its design. The performance measurement technique of profile statistics, the association of code usage with a program's source language, has been reported to help improve a program's running time by a factor of two to ten. (See Darden and Heller, or Knuth, "Profile".)

To help B1700 users obtain the greatest throughput per dollar, each Burroughs interpreter can gather profile statistics about a program which it is interpreting and present them at the end of a run. At compile time, a user may indicate which portions of his program are to receive more or less scrutiny. These indications define a set of program segments whose usage is to be recorded by means of an inserted S-language monitoring command. The compiled program consists of: the code segments, expanded with monitors (by less than 1%); textual information which will be used to describe the participating program segments in terms of source language; and an array of cells for the frequency counts. Interpretation of the monitors appears to extend execution times by less than ½%.

After execution, the weighted frequency counts show which program segments account for most of the running time. Reprogramming these critical segments for efficiency will reduce running times the most. Microprogramming can easily allow dynamic measurements of other properties as well, with similarly small overhead.

15.2 "Monitor" Microinstruction

One microinstruction, Monitor, simply presents a user-specified bit pattern at designated pins in the processor backplane and frontplane.
This allows unique software "events" to be identified by external hardware, which greatly simplifies the task of knowing what the system is doing. Each higher-level language has been extended to include a construct which generates a Monitor S-instruction, which each interpreter carries into an identical microinstruction. Event flagging is thus available to all programmers.

16. EVALUATION

The B1700's ability to provide profile statistics at negligible cost voids all known system performance measures. Consider benchmarks, which measure more system parameters than any other technique. Any benchmark program which runs on the B1700 develops not only an observed running time, but also an indication of how to reduce that time (often by more than 50%). What, then, is the true performance of the system? Not the observed time, because inefficiencies are pin-pointed. Half the time? Not until the benchmark has been changed. The point of benchmarks is to have a standard reference which allows the customer to characterize his work and obtain a cost/performance measure. What customer would be satisfied with an inefficient characterization? If the B1700 can show that a program is not using the system well, what good is it as a benchmark? If we change the program to remove the inefficiencies, it is no longer standard. This is a pernicious dilemma. Even the simplest measure, add time (still published as if it had not been a misleading and unreliable indicator for the past 15 years), is void. What is the relative performance of two machines, one of which can do an almost infinite variety of additions and the other of which can do only one or two? The B1700 can add two 0-24 bit binary or decimal numbers in 187 nsec; how fast must a 16-bit binary machine be in order to have an equivalent add time?
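One way to make that closing question concrete, under the simplifying assumption that a fixed 16-bit machine needs one add plus one add-with-carry to sum 24-bit operands (ignoring the extra instruction fetches and carry bookkeeping it also pays):

```python
import math

B1700_ADD_NS = 187   # one defined-field add, any width up to 24 bits
WORD_BITS = 16       # fixed word size of the comparison machine

def equivalent_add_ns(operand_bits):
    """Per-instruction add time a fixed-word machine would need to
    match the B1700, assuming one add per word of the operand."""
    adds_needed = math.ceil(operand_bits / WORD_BITS)
    return B1700_ADD_NS / adds_needed

# A 24-bit sum takes two 16-bit adds (add + add-with-carry) ...
assert math.ceil(24 / WORD_BITS) == 2
# ... so each one must complete in about 93.5 nsec to break even.
assert abs(equivalent_add_ns(24) - 93.5) < 0.1
```

Under this sketch the 16-bit machine must add in roughly half the B1700's 187 nsec just to tie, which is the text's point: a single published "add time" cannot compare machines with such different operand models.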
Assuming reasonable benchmark figures are obtainable, they would say nothing about the intrinsic value of a machine which can execute another machine's operators, for both existing and imaginary computers; which can interpret any current and presently conceivable programming language; which can always accept one more job into the mix; which can add on one more peripheral and one more memory module, to grow with the user; which can interpret one more application-tailored S-machine; which can tell a programmer where his program is least efficient; which can continue operation in spite of failures in processing, memory, and I/O modules. These characteristics of the B1700, shared by few other machines--no machine shares them all--save time and money, but are not yet part of any performance measurement. Despite the nullification of measures with which we are familiar and the gargantuan challenge of measuring the B1700's advancements of the state-of-the-art, there are, nevertheless, some quantifiable signs that the system gives more performance than comparably-priced and higher-priced equipment.

16.1 Utilization of Memory

Defined-field design's major benefit is that information can be represented in natural containers and formats. Applied to language interpretation, defined-field architecture allows S-language definitions which are more efficient in terms of memory utilization than word- or byte-oriented machine architectures. For example, short addresses may be encoded in short fields, and long addresses in long fields (assuming the interpreter for the language is programmed to decode the different sizes). Alternatively, address field size may be a run-time parameter determined during compilation. That is, programs with fewer than 256 variables may be encoded into an S-language that uses eight-bit data address fields.
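The eight-bit address-field scheme can be sketched as a toy decoder. The bit-string encoding and function names here are illustrative only, not the actual S-language format; the point is that the field width is a parameter fixed at compile time, so a small program pays only as many address bits as it needs:

```python
def decode_addresses(bits, field_width, count):
    """Pull `count` consecutive address fields, each `field_width`
    bits wide, out of a packed bit string."""
    addresses = []
    for i in range(count):
        chunk = bits[i * field_width:(i + 1) * field_width]
        addresses.append(int(chunk, 2))
    return addresses

# Two 8-bit address fields packed back to back (addresses 2 and 255);
# 8 bits suffice because the program has fewer than 256 variables:
packed = "0000001011111111"
assert decode_addresses(packed, 8, 2) == [2, 255]

# The same program recompiled with fewer than 16 variables could use
# 4-bit fields, halving the space its address fields occupy:
assert decode_addresses("00100111", 4, 2) == [2, 7]
```

A word- or byte-oriented machine would round each of these fields up to a full word or byte; defined-field addressing is where the memory-utilization figures in the next section come from.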
Even the fastest microcode that can be written to interpret address fields is able to use a dynamic variable to determine the size of the field to be interpreted. Just how efficient this makes S-languages is difficult to say because no standard exists. What criterion will tell us how well a given computer represents programs? What "standard" size does any particular program have? We would like a measure that takes a program’s semantics into account, not just a statistical measure such as entropy. If we simply ask how much memory is devoted to representing the object code for a set of programs, we find the following statistics: <table> <thead> <tr> <th>Language of Sample</th> <th>Aggregate Size on B1700</th> <th>Aggregate Size on Other System</th> <th>Other System</th> <th>% Improved B1700 Utilization</th> </tr> </thead> <tbody> <tr> <td>FORTRAN</td> <td>280KB</td> <td>560KB</td> <td>System/360</td> <td>50%</td> </tr> <tr> <td>FORTRAN</td> <td>280KB</td> <td>450KB</td> <td>B3500</td> <td>40%</td> </tr> <tr> <td>COBOL</td> <td>450KB</td> <td>1200KB</td> <td>B3500</td> <td>60%</td> </tr> <tr> <td>COBOL</td> <td>450KB</td> <td>1490KB</td> <td>System/360</td> <td>70%</td> </tr> <tr> <td>RPG II</td> <td>150KB</td> <td>310KB</td> <td>System/3</td> <td>50%</td> </tr> </tbody> </table> In short, the B1700 appears to require less than half the memory needed by byte-oriented systems to represent programs. Comparisons with word-oriented systems are even more favorable. As to memory utilization, the advantage of the B1700 is even more apparent. Consider two systems with 32K bytes of main memory, one a System/3, the other a B1700. Suppose a 4KB RPG II program is running on each.
If we ask how much main memory is in use, we find: <table> <thead> <tr> <th>System</th> <th>Bytes in Use</th> <th>%</th> <th>Comment</th> </tr> </thead> <tbody> <tr> <td>System/3</td> <td>32K</td> <td>100</td> <td>28K is idle without multi-programming and virtual memory.</td> </tr> <tr> <td>B1700</td> <td>1K</td> <td>3</td> <td>Assumes 500B run structure and 500B of program and data segments.</td> </tr> </tbody> </table> In other words, the utilization at any given moment may be 30 times better on the B1700 than on the System/3. At least, with all program segments in core, it is seven times better (4.5KB vs. 32KB). Even if we assume that the RPG interpreter is in main memory and is not shared by other RPG jobs in the mix, the comparison varies from 6:1 to 4:1, 5KB to 8KB (vs. 32KB), 84% to 75% better utilization. As more and more RPG jobs become active in the mix, the effect of the interpreter diminishes, but then comparison becomes meaningless, because other low-cost systems cannot handle so large a mix. (Note that these figures change when a different main memory size is considered, so the comparison is more an illustration of the advantage of the B1700's variable-length segments and virtual memory than of its memory utilization.)

16.2 Running Time

Although program running time is said to involve less annual cost at installations than the unquantifiable parameter which we may call "ease of use", let us mention some current observations. When the B1700 interprets an RPG II program, the average S-instruction time is about 35 microseconds, compared to System/3's 6 microsecond average instruction time. On a processor-limited application (specifically, calculating prime numbers), the identical RPG program runs in 25 seconds on a B1700 and 208 seconds on a System/3 model 10. Both systems had enough main memory to contain the complete program; only the memory and processor were used. The particular configurations leased for $3500 (B1700) vs. $2000 (System/3).
In terms of cost, the B1700 run consumed 30¢ while the System/3 run took $1.60. In terms of instruction executions, the B1700 was 50 times faster. That is, each individual interpreted RPG instruction, on the average, contributed as much to the final solution as 50 System/3 machine instructions. When one considers that RPG is the only programming language on the System/3, it is incredible that the System/3 seems so poorly equipped to run RPG programs. It is even more incredible because the B1700 really has no S-language expressly for RPG; it uses the COBOL S-language instead. It is almost certain that an S-machine designed expressly for RPG would be more than 50 times more efficient than the System/3. This seems to support the B1700 philosophy, that interpretation of S-machines for each application is more efficient than using a general-purpose architecture. Using another set of benchmark programs (for banking applications), and another B1700 which leases for $2000, throughput comparisons are again astounding. On the one hand, we are comparing a defined-field, soft-interpreting, soft-I/O-processing machine using pre-release compilers, interpreters, and MCP routines, under multiprogramming, multiprocessing, virtual memory systems design, against, on the other hand, a byte-oriented, hard-wired system with two years' field testing, five software releases, batch-processing, one cpu, and 32K maximum main memory. Despite all of the B1700 features, which supposedly trade speed for flexibility, the B1700 executes RPG programs in 50% to 75% of the System/3 time, and compiles them in 110% of the System/3 time, for the same monthly rental. In applications of this type, compilation is expected annually (monthly at worst) while execution is expected daily. (Systems used for this comparison included a multi-function card unit to read, print, and punch 96-column cards, a 132-position 300 lpm printer, a dual spindle 4400 bpi disk cartridge drive, and operator keyboard.
The System/3 could read cards at 500 cpm, while the B1700 could read at 300 cpm.)

17. CONCLUSION

Microprogramming, firmware, user-defined operators, and special-purpose minicomputers are being touted as effective ways to increase throughput on specific applications while decreasing hardware costs. Standard system modules may be tailored to an installation's needs. Effective as these approaches are, they are all held back by procrustean machine architecture. The Burroughs B1700 appears to eliminate inherent structure by its defined-field and soft interpretation implementation. Both are advancements of the state-of-the-art. Now one machine can execute every machine language well, eliminating nearly all conversion costs. One machine can interpret every programming language well, reducing problem-solving time and expense. The B1700 does not waste time or memory overcoming its own physical characteristics; it works directly on the problems. Furthermore, these innovations are available in low-cost systems that yield better price/performance ratios than conventional machinery.

18. ACKNOWLEDGEMENT

Many of the design objectives were first articulated by R. S. Barton [BARTON]. The author wishes to thank Brian Randell, R. R. Johnson, Rod Bunker, Dean Earnest, and Harvey Bingham for their conscientious criticism of various drafts of this article.

19. BIBLIOGRAPHY

BARTON
B5000
EDVAC
PROFILE
Abstract—Efforts to improve diversity in computing have mostly focused on K-12 and university student populations, so there is a lack of research on how to provide these benefits to adults who are not in school. To address this knowledge gap, we present a case study of how a nine-member team of end-user programmers designed an educational program to bring job-relevant computing skills to adult populations that have traditionally not been reached by existing efforts. This team conceived, implemented, and delivered Cloud Based Data Science (CBDS), a data science course designed for adults in their local community in historically marginalized groups that are underrepresented in computing fields. Notably, nobody on the course development team was a full-time educator or software engineer. To reduce the amount of time and cost required to launch their program, they repurposed end-user programming skills and tools from their professions, such as data-analytic programming and reproducible scientific research workflows. This case study demonstrates how the spirit of end-user programming can be a vehicle to drive social change through grassroots efforts.

Index Terms—diversity in computing, end-user programming, data science

I. INTRODUCTION

A widely-acknowledged deficit in computing fields is the lack of historically underrepresented groups on teams that build software and make engineering decisions [1], [2]. This deficit of perspectives is especially impactful considering how algorithmic-driven decision-making has become a fundamental part of modern life. Algorithms determine who is approved for bank loans [3], which job applications are considered for job openings [4], and what plan of care certain patients end up receiving [5]. Data-driven systems have also reinforced racial biases [2] and denied essential social services to members of historically marginalized groups [6].
The reasons for these failings are multifaceted, but they are compounded when contributions from the people who would be most affected are excluded from the system design process [2], [6]. In recent years both researchers and community organizations have addressed these issues by diversifying the population of students who choose to study computing. For instance, academic projects such as Storytelling Alice [7], Scratch [8], and App Inventor [9] have fostered more inclusive programming communities at the K-12 level, especially in middle and high schools. Nonprofits such as Code2040 [10], Girls Who Code [11], and Black Girls Code [12] strive to improve both gender and racial diversity amongst K-12 students interested in computing. At the university level, research-backed curricula such as media computation [13] and diverse paths into computing [14], [15] have made advances in the proportion of computing majors from underrepresented groups. In addition, college scholarships and mentoring programs have helped retain such students as they advance through school. However, the majority of such efforts target K-12 and university students, so there is a lack of knowledge about how to provide these benefits to adults who are not in school. To address this gap, this paper presents a case study of a team of academic research scientists who partnered with a local community organization to teach data science to adults living in a high-poverty area of a large U.S. city. The typical student in this program is an adult member of an underrepresented and marginalized group who did not complete high school; they may have grown up in foster care or may have experienced extended periods of unemployment or homelessness. The program’s goal is to equip these adults with basic data science skills required to get entry-level jobs doing tasks such as spreadsheet data entry, data cleaning, wrangling, and validation. 
These types of data-oriented jobs offer an on-ramp into computing careers while being more within their reach than full-time software developer positions, which require much more extensive training. To implement this grassroots initiative on a short time frame with a small budget, the team had to perform end-user programming [16] to repurpose existing tools from their research workflows and to create new ad-hoc tools to support course development. Specifically, they developed text-based programmatic workflows based on R Markdown computational notebooks [17] that they use in their research lab. Our study is the first, to our knowledge, to analyze how a team of end-user programmers (i.e., academic research scientists) applied the philosophy of end-user programming (i.e., repurposing/building software tools for personal use) to diversify end-user programming (i.e., data science) education.

II. RELATED WORK

Our study extends and brings together prior work in two main areas: end-user programming and diversity in computing.

A. End-User Programming

End-user programming is commonly defined as the act of programming as a means to personal ends rather than for producing software artifacts for widespread public use [16], [18], [19]. This definition encompasses a wide variety of personas, ranging from professional software engineers writing throwaway prototype code to teachers writing spreadsheet macros to track grades. (Many prefer to use the term “end-user programming” to focus on the activity [16], but for notational simplicity we will use “end-user programmer” to refer to people who frequently perform end-user programming.) In this paper we study academic research scientists who perform end-user programming by writing code in the R language. Prior work has studied how scientists code for their research in a range of settings, including high-performance computing [20] and across the physical and social sciences [21–23].
In recent years, groups have studied how researchers use Jupyter notebooks [24–26] in their end-user programming workflows. However, whereas prior work has focused on scientists writing code to support their own research, our study is unique in showing how they repurpose those skills to create an ad-hoc educational platform outside of their core research. Besides being end-user programmers, our study participants are also creating an educational initiative to train future end-user programmers: data scientists who use code as a means to produce data-driven insights. Longstanding efforts such as Software Carpentry [27] and Data Carpentry [28] have trained data scientists in academia through volunteer-run university workshops. We have previously surveyed a broad range of data science teaching programs across universities, coding bootcamps, and industry settings [29]. While many of these efforts strive to provide an inclusive and welcoming environment, they still mainly target graduate students and working professionals who often already have advanced degrees. In contrast, the team that we study is working with a local community organization to provide free data science training to adults in underrepresented groups who often did not even finish high school. We know of no prior academic research on end-user programming education being extended to underserved populations like the one that we study.

B. Diversity in Computing

Diversity in computing has been a longstanding interest in computing education research, and it is also this year’s VL/HCC special emphasis topic [30]. To our knowledge, the majority of efforts around this topic have been for students in K-12 and university settings. In contrast, we study a program to teach computing to adults who are not in school settings.
At the K-12 level (elementary, middle, and high schools), researchers have developed domain-specific programming environments to broaden interest in computing amongst traditionally underrepresented groups. For instance, Storytelling Alice [7] focused on engaging female middle school students, and Scratch [8] was deployed to after-school programs to foster computing interest amongst low-income African American and Latinx youths from 8 to 18 years old [31]. Beyond programming, the Glitch Game Tester project [32–34] hired African American high school students as game testers, which sparked their interest in computing careers. Project Rise Up 4 CS [35] used in-person mentorship and financial incentives to encourage African American high school students to succeed on the annual AP Computer Science exam. At the university level, research-based diversity initiatives have focused on two fronts: curriculum design and activities inside the classroom. On the curricular front, alternative pathways into computer science [14], more flexible threads of courses for different interests [15], and service learning opportunities [36] have improved diversity in computing majors. Within the classroom, pair programming, peer instruction [37], [38], and media computation activities [13], [39] have improved retention for students from underrepresented groups. In contrast to K-12 and university initiatives, we study the development of a free computing education program targeted at adults who do not have access to formal schooling. Prior research on adult learning of computing has studied how working adults take online courses [40], how older adults over 60 years old [41] learn to code on their own, and how end-user programmers learn to code on the job [42–44]. However, these adults often already have higher education and plentiful access to technology. 
Despite these differences in learner demographics, the educational program that we study addresses some of the same challenges of adult education that prior work found, most notably lack of time given other life responsibilities and feelings of isolation due to lack of in-person support. Finally, our study is unique in showing how a team attempted to address diversity in computing by using the tools of end-user programming at their disposal to develop an adult data science education program.

III. METHODS

We performed a case study of the development process of Cloud Based Data Science (CBDS), a free online course described in Table I. The goal of CBDS is to teach basic data science skills using spreadsheets and the R language in order to prepare students to obtain jobs as entry-level data scientists. In essence, it is training students to become end-user programmers who write code as a means to an end to clean data and produce analysis outputs. For this case study we interviewed everyone involved in creating CBDS: eight research scientists at a large U.S. university and the project’s program administrator, who was a research administrator in their lab. None of the nine team members’ full-time jobs were to create educational programs or to write software; CBDS was a voluntary effort. The first author conducted all interviews (each lasting 45 to 60 minutes) using video conferencing software. The interviews were semi-structured with questions focusing on the motivations each team member had for working on this project, their use and development of software tools during the project, and how these tools affected their interactions with other team members. Interview questions included:

- How did you first get involved in CBDS?
- What was your role in developing CBDS?
- What existing tools have you used for educational content development?
- What was your level of expertise with these particular tools before CBDS? (if they mentioned specific tools)
- Did you have to build any of your own software tools to develop CBDS? If so, which ones?
- How were development tasks distributed throughout the team?
- How did you coordinate work between team members?

The first author took notes and recorded verbatim quotations during every interview. After all interviews were completed, the research team (two members) iteratively categorized them into major themes using an inductive analysis approach [45].

A. Study Design Limitations

This project was a case study of a specific team of academics at a U.S. university who attempted to develop a nontraditional educational program. Thus, we do not have large-scale replicable data and cannot claim that the CBDS team’s experiences generalize to other related efforts. Also, we are relying solely on interviews and did not perform an ethnography to observe the team when CBDS was being developed. Note that CBDS is still under active development, so many of the details are fresh on participants’ minds. Since CBDS is still in its early stages, having enrolled only around a dozen students so far, it is too early to tell the long-term outcomes of this program in terms of sustainability and impacts on its alumni. We also did not have direct access to the students and thus cannot report on their experiences. This study focuses solely on the CBDS development team.

IV. CBDS Goals: Diversify End-User Programming

We report our case study’s findings by first detailing the goals of CBDS and the motivations of its volunteer development team. Then we describe their end-user programming activities throughout project development.
<table> <thead> <tr> <th>Module</th> <th>Subject</th> </tr> </thead> <tbody> <tr> <td>1</td> <td>Introduction to the CBDS Program</td> </tr> <tr> <td>2</td> <td>How to use Your Chromebook Laptop</td> </tr> <tr> <td>3</td> <td>How to use Web Applications</td> </tr> <tr> <td>4</td> <td>Organizing a Data Science Project</td> </tr> <tr> <td>5</td> <td>The Command-Line and Version Control</td> </tr> <tr> <td>6</td> <td>R Programming</td> </tr> <tr> <td>7</td> <td>Data Wrangling</td> </tr> <tr> <td>8</td> <td>Data Visualization</td> </tr> <tr> <td>9</td> <td>Connecting to Data Sources</td> </tr> <tr> <td>10</td> <td>Data Analysis</td> </tr> <tr> <td>11</td> <td>Communicating Analysis Results</td> </tr> <tr> <td>12</td> <td>Getting a Data Science Job</td> </tr> </tbody> </table> Table I shows the curriculum of CBDS, a self-paced online course that anyone can take for free. However, being free and online is nowhere near sufficient for ensuring that it is accessible to many members of underrepresented groups. Over the past decade of research into MOOCs (Massive Open Online Courses), a widely-acknowledged finding is the notable lack of diversity in who takes and benefits from them: MOOC students are mostly white or Asian males with at least college- or graduate-level degrees [46]–[48]. Interview participants P1 and P3 saw this lack of diversity firsthand since they had prior experience creating data science MOOCs. P1 explained his motivation for starting CBDS: “Why aren’t certain groups of people using our existing MOOCs? Maybe they didn’t have access to hardware, they lacked prerequisite knowledge, or they were just unaware that data science was a thing.” P4 mentioned that existing courses assume prior educational experiences that exclude people without access to such opportunities: “The problem with data science programs is that the material is pretty advanced. 
They’re geared towards master’s students.” The CBDS team believed that with a more accessible curriculum and personalized teaching approach, they could bring data science to a group that has not traditionally been reached by MOOCs. Specifically, P1’s goal was to target students with a 10th-grade level of math literacy. The team also augmented CBDS with in-person support to help members of underrepresented groups enroll, remain in, and successfully complete the course. First they worked with a local community organization to recruit potential students. To reach its target audience, the CBDS team partnered with the Historic East Baltimore Community Action Coalition (HEBCAC) [49], a nonprofit that specifically serves the historically disenfranchised low-income neighborhoods surrounding the university where the team works. HEBCAC serves a community where many residents did not complete high school, grew up in foster care, or experienced extended periods of joblessness or homelessness. HEBCAC steps in to help them complete their GED diploma (the equivalent of a U.S. high school degree), place them in jobs, or help them arrange further educational opportunities such as community college. The majority of people served by HEBCAC are African American, Hispanic, and Latinx adults. HEBCAC was a critical bridge between potential students and the CBDS team. Otherwise these students would not likely know about the existence of data science as a career path that could be within their reach. Once students enroll, they are given a free Chromebook laptop (detailed in Section VI-A) and the opportunity to meet in-person with volunteer tutors twice per week during 90-minute office hours; P2, P7, and P8 served as tutors. Students can also ask online questions to course staff in a private Slack chat channel. 
Finally, to encourage retention in the program, students are paid a modest stipend for successfully completing each module in Table I; this stipend is designed to be comparable to the wage they would earn from working in the kinds of jobs that HEBCAC normally helps them obtain. Once students finish the course, the CBDS team and HEBCAC work with them on resume and interview preparation and refer them to entry-level data science jobs in the area. V. Motivations of CBDS Development Team Not only was CBDS's goal to train new end-user programmers (i.e., data scientists); its course development team also consisted of end-user programmers. Table II shows team members' backgrounds. Everyone on the team works in the same life science research group at a large U.S. research university. P1–P6 all had several years of end-user programming experience, in the form of using bioinformatics pipelines and programming as part of statistical data analysis for their research. P8 and P9 had limited programming experience before working on CBDS, while P7 had none before joining. None of the team members has a degree in computer science or experience doing professional software development. All are cisgender (5 female, 4 male). Why was this team motivated to create CBDS when their primary jobs were as research scientists? Their workplace is located in the same neighborhood served by HEBCAC, an area that has been historically disenfranchised. Decades of societal inequity have led to increased rates of poverty, which everyone on the team sees around them. Thus, all team members reported their primary motivation as wanting to create opportunities for adults in the surrounding neighborhood who could not normally afford the tuition for a traditional education like that offered at their university. P7 was closest to the target student community. She does most of the administrative work for CBDS and serves as a volunteer tutor for it.
She grew up near the area served by HEBCAC, so she was very motivated to see people from her community succeed in this program: “My personal excitement about joining in the first place was to help my people.” Besides growing up in the area, P7 felt that she was also able to relate to the students because she had only recently started learning how to code: “I really appreciate my position in the program because I believe I am the least experienced staff member in terms of programming. So I experience the same frustrations and joys when a program crashes or when my graph turns out how I thought it would.” Also, as the only member of the team who was not on a Ph.D.-oriented research career path, she felt that students could be more honest and open with her: “I definitely think it was a good idea to have somebody on the team who they weren’t intellectually intimidated by.” VI. End-User Repurposing of Existing Tools Because CBDS was developed by a team of volunteer non-specialists, they needed to engage in a variety of end-user programming activities to make this program work with relatively little time and money. The first set of activities we describe here, while not "programming" per se, invoked the spirit of end-user programming by repurposing existing hardware and software to develop a data science curriculum. A. Repurposing Low-Cost Chromebook Hardware Keeping costs as low as possible was a major concern for the team. P1 described how prohibitively expensive it would be to build CBDS as an official university course or MOOC: “In a traditional college setting if you had assembled several faculty to build this program it would have cost millions of dollars. We did not have that!” Cost minimization was not just a concern in terms of development; it was critical to the team’s mission to make data science education available to underserved members of their local community.
One initial obstacle they faced was simply making the technology required for doing data analytic work available to students. The students that they wanted to reach typically did not even own personal computers, or their computers were too old to install modern data science tools on: “We wanted to reduce the cost of the hardware you need to get started. For low income folks these small costs are insurmountable.” (P3) The team’s solution was to provide a Chromebook laptop for free to every student in the program. Many Chromebooks <table> <thead> <tr> <th>ID</th> <th>Gender</th> <th>Field</th> <th>Job Title</th> <th>End-User Programming Experience</th> <th>Created Course Content?</th> <th>In-Person Tutor?</th> </tr> </thead> <tbody> <tr> <td>P1</td> <td>M</td> <td>Biostatistics</td> <td>Research Lab P.I.</td> <td>&gt; 5 years</td> <td>Yes</td> <td>No</td> </tr> <tr> <td>P2</td> <td>F</td> <td>Genetics</td> <td>Postdoc</td> <td>1 – 5 years</td> <td>Yes</td> <td>Yes</td> </tr> <tr> <td>P3</td> <td>M</td> <td>Biostatistics</td> <td>Research Scientist</td> <td>&gt; 5 years</td> <td>No</td> <td>No</td> </tr> <tr> <td>P4</td> <td>M</td> <td>Biostatistics</td> <td>Research Scientist</td> <td>&gt; 5 years</td> <td>Yes</td> <td>No</td> </tr> <tr> <td>P5</td> <td>F</td> <td>Biostatistics</td> <td>Research Scientist</td> <td>&gt; 5 years</td> <td>Yes</td> <td>No</td> </tr> <tr> <td>P6</td> <td>F</td> <td>Biostatistics</td> <td>Ph.D. Student</td> <td>1 – 5 years</td> <td>Yes</td> <td>No</td> </tr> <tr> <td>P7</td> <td>F</td> <td>Liberal Arts</td> <td>Administrative Staff</td> <td>none</td> <td>No</td> <td>Yes</td> </tr> <tr> <td>P8</td> <td>M</td> <td>Economics</td> <td>Postdoc</td> <td>&lt; 1 year</td> <td>Yes</td> <td>Yes</td> </tr> <tr> <td>P9</td> <td>F</td> <td>Genetics</td> <td>Ph.D. 
Student</td> <td>&lt; 1 year</td> <td>Yes</td> <td>No</td> </tr> </tbody> </table> TABLE II Backgrounds of the Nine Members of the CBDS Team that We Interviewed, Along with Their Prior End-User Programming Experience and Whether They Created Course Content or Served as In-Person Tutors. now cost less than $300, which is affordable compared to the typical hardware that data scientists use and well within the constraints of the seed funding provided to launch the program with the first dozen students. In addition, Chromebooks are sometimes available to check out for free from public libraries. Each Chromebook runs Chrome OS, an operating system geared for web applications. Instead of relying on a computer with powerful hardware, students used RStudio Cloud [50], a free data science run-time environment for the R language that is hosted on a web server and accessed via a browser-based IDE. This web-centric setup also helped students start coding in their browser without the frustrations of software installation, which prior work found to be a major barrier to getting started [29]: “The goal was to minimize tool setup for students. Everything had to be done in the browser.” (P2) In short, the CBDS team repurposed Chromebooks, which have traditionally been used for casual web browsing, as end-user programming tools for aspiring data scientists. B.
Repurposing Tools for Open and Reproducible Science As academic researchers who practice open and reproducible science [51], [52], the CBDS team were adept users of computational notebooks – especially R Markdown [17] – to do end-user programming for their research. R Markdown allows users to write prose in the lightweight Markdown format, while interleaving graphs, diagrams, and runnable code in several programming languages such as R and Python. Users can write R Markdown in a text editor or in RStudio Cloud, which renders it as a notebook-like interface similar to Jupyter [53]. Since these are text documents, changes can easily be tracked in version control systems like Git. To create lessons for CBDS, the team repurposed this computational notebook – one of the central tools in their daily scientific workflow – to become the substrate for the educational content they built. A related example of repurposing was the fact that years of the team’s data analysis code and documentation were already in R Markdown, so they could be curated, simplified, and re-used for teaching. For instance, P9 took software documentation she had already written for internal lab use and adapted it into course content: “The fact that all of the content is plain text makes those changes super easy.” Another example of repurposing was led by P1 and P3, who had both developed MOOCs before. MOOC providers like Coursera offer an in-browser rich-text editor where instructors are supposed to write lessons and assignments for their course. Compared to the team’s usual open science workflows, which take advantage of the command line, Git, and other programmatic tools, the team felt hindered by the compulsory use of manually-driven web-based GUIs. 
P1 explained: “The way the Coursera platform is set up, which isn’t as simple as ‘push to GitHub,’ it makes [content updates] difficult.” Thus, to better integrate the end-user programming workflows they already used for scientific research into their desired teaching workflow, the team partnered with online publisher Leanpub [54] to develop a new course platform. Leanpub is currently a platform for taking Markdown-formatted documents and compiling them into eBooks. P1 and P3 already had experience publishing eBooks there, and they shared Leanpub’s ethos of working with Markdown files. The team also valued Leanpub’s pricing philosophy and applied it to CBDS: content on Leanpub follows a pay-what-you-want pricing model, which enables people to get it for free if desired. The team worked directly with Leanpub, which built them a custom web application where they can upload R Markdown files and have them render as course webpages and assessments. This web app recognized custom Markdown syntax for elements such as multiple-choice questions and programming assignments with test cases. P3 appreciated how it was compatible with their existing text-based workflow: “Leanpub catered very much to the idea of ‘text to course.’ Assessments as plain text was a very important feature.” Using this custom platform, seven of nine team members (P1, P2, P4, P5, P6, P8, P9) developed technical course content solely in R Markdown files, tracked changes using Git, and collaborated on developing course modules (Table I) using GitHub across 25 different repositories. Course material development began in February 2018, and the first in-person cohort for CBDS started at the end of May 2018. The team credited the ability to repurpose their research workflows as critical for launching this initial version in just three months. Significant updating of materials continued as the first cohort made their way through the program, as changes were made based on student feedback. 
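To make the “text to course” idea concrete, a single lesson source in this workflow might look like the sketch below: prose and a runnable code chunk in R Markdown, followed by an assessment in a custom quiz notation. Everything here is invented for illustration; in particular, the quiz delimiters and the capital-letter convention for marking the correct answer are assumptions about the style of Leanpub's Markdown extensions, not the team's documented syntax.

````markdown
---
title: "Lesson 7: Summarizing Data"
output: html_document
---

## Averaging a variable

The `mean()` function averages a numeric vector:

```{r}
heights <- c(150, 160, 170)
mean(heights)
```

{quiz, id: lesson-7-quiz}
? Which R function computes the average of a numeric vector?

a) sum()
B) mean()
c) length()

{/quiz}
````

Because a file like this is plain text, it diffs cleanly in Git, which is what made collaborative development across 25 GitHub repositories practical.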
Lastly, the CBDS team also had to grapple with teaching modern data science software libraries that were continuously updating and changing their APIs. When they previously used MOOC platforms like Coursera, whenever a library or API changed, they would need to spend hours navigating menus and GUIs to modify relevant course content that mentioned that library. This manual workflow was antithetical to the team’s practice of reproducible research [52], [55], where all of the figures, tables, and reported statistics in a data analysis can be automatically re-compiled with one command whenever the underlying dataset is updated. Working with Markdown course materials allowed the team to use command-line tools to find and appropriately replace outdated content, similar to how they would update an outdated statistic or graph in light of updated input data while they were doing research. Plain-text data formats, coupled with command-line tools and Leanpub’s platform, enabled the CBDS team to take an end-user programming approach to course development instead of relying on manually-driven GUI-based content management portals typically used for online courses. VII. END-USER PROGRAMMING TO BUILD NEW TOOLS Seven team members (P1–P6, P8) had previously developed bespoke R-language packages for performing domain-specific analysis tasks or for sharing algorithms and data from their published research studies. Besides repurposing the tools of computational science to programmatically generate online course materials, these team members also engaged in end-user programming to build custom tools for themselves. Here we detail two such tools: Didactr for validating course content and Ari for expediting video production. A. Didactr: Custom Software to Validate Course Content The team developed various R packages to help them create, check, and deploy the data science lessons that went into CBDS. 
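The command-line find-and-replace workflow just described can be sketched with ordinary Unix tools. The lesson directory, file contents, and the `spread()` to `pivot_wider()` rename below are all hypothetical stand-ins for the kind of API change the team had to absorb.

```shell
# Set up a toy lesson tree standing in for the team's real course repos.
mkdir -p lessons
printf 'wide <- spread(long_df)\n' > lessons/reshape.Rmd

# Rewrite every call site of the outdated API in one pass, keeping a backup.
# (-i.bak works with both GNU and BSD sed.)
find lessons -name '*.Rmd' -exec sed -i.bak 's/spread(/pivot_wider(/g' {} +

grep -n 'pivot_wider(' lessons/reshape.Rmd   # confirm the update took effect
```

In the team's actual workflow this role was played by their R tooling; the shell version above only shows the mechanics of updating many plain-text sources with one mechanical change.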
One of those packages, called Didactr, allowed team members to automatically validate lessons to make sure their Markdown was structured correctly before being uploaded to Leanpub. Lessons comprised two types of files: lecture videos that explained course concepts (see next section on Ari), and R Markdown files containing lesson readings, example code, and assessments to practice after each lesson. Didactr parses these files using heuristics to make sure they are formatted to display properly on Leanpub’s online course platform. Compiling the lessons and checking for errors locally with Didactr was faster and provided more useful error messages compared to uploading an error-laden lesson to Leanpub and manually checking on the web: “Compiling courses on Leanpub takes time. Didactr allowed me to preempt errors that I would get on Leanpub so I could fix them locally and shorten the correctness-evaluation loop.” (P2) Ultimately Didactr served as a “command center” package which allowed the CBDS team to test and track the dozens of rapidly-evolving content files that constituted the course. In the spirit of end-user programming for one’s own needs, Didactr’s features were built piecemeal in response to recurring bottlenecks the team faced while making course content. For instance, P3 worked closely with several other team members to better understand tasks they were initially doing manually: “I asked the content creators, ‘Show me what you do’ and tried to then find APIs that would allow us to automate as much as possible.” And P2 recalled how closely she worked with teammates to extend Didactr on-the-fly: “When I would tell [P3] there was a feature I wanted, he would sit with me and build it right in front of me.” B. Ari: Custom Software for Text-Based Video Production Other than writing lessons and assignments, the most time-consuming part for the CBDS team was recording and editing lecture videos. 
Videos in CBDS often feature an instructor giving a real-time demo of writing and running code or showing how to think through a data analysis task. If the API for a function in the featured analysis changes, then significant portions of the video must be re-recorded and re-edited. This problem is particularly pronounced in fields like data science, where industry-standard libraries are rapidly evolving. P1 and P3 remembered how costly it was to re-record videos for their past MOOCs whenever the libraries they were teaching had their APIs updated: “With content that changes so often it’s not feasible to re-shoot videos, re-edit, et cetera, every time an API changes” (P3). To make it easier to create and update these code- and slide-based lecture videos, the CBDS team developed a custom R package called Ari that allowed them to automatically generate narrated lecture videos from the R Markdown documents they were already writing as part of their course materials. To use Ari, a creator first passes in a set of lecture slides and an accompanying text narration script. Ari uses Amazon’s text-to-speech web API [56] to synthesize a machine voice to speak out the script and FFmpeg [57] to stitch together the lecture slides and spoken audio into a final compiled video file with the proper timings. Lecture slides can be generated from R Markdown files, but team members often opted to use more traditional GUI-based presentation tools like Google Slides. Ari helped the CBDS team take an end-user programming approach to video production, turning it into a process of editing text and compiling it into videos with R scripts instead of manually recording and editing using heavyweight video production GUI software. To make bug fixes or updates to their videos, they can simply edit text files and recompile. This format also allows them to easily track video edits in Git version control.
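The slide-plus-audio stitching that Ari delegates to FFmpeg can be pictured per segment as below. This sketch only assembles and prints the command rather than running it, and the file names and single-segment framing are assumptions; the real package also computes timings across all segments.

```shell
# Build the FFmpeg invocation for one (slide image, narration audio) pair:
# -loop 1 holds the still slide on screen, -shortest stops the video when
# the synthesized narration ends, and yuv420p keeps the output widely playable.
segment_cmd() {
  local slide="$1" narration="$2" out="$3"
  echo "ffmpeg -loop 1 -i $slide -i $narration -c:v libx264 -pix_fmt yuv420p -shortest $out"
}

segment_cmd slide01.png narration01.mp3 segment01.mp4
```

Because the inputs are plain files named in scripts like this, re-generating a single outdated segment is a recompile rather than a re-shoot.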
P3 discussed how Ari’s workflow aligned with the team’s research philosophy: “The videos are fully reproducible [from text-based sources], just like our scientific work.” P3 elaborated that having this more modular format meant they could iterate more quickly: “This allows us to only modify content without changing presentation, delivery, et cetera. So we can take a much more experimental approach to making lecture videos.” VIII. TOWARD END-USER SOFTWARE ENGINEERING Figure 1 shows the team’s current course production pipeline where textual R Markdown files (*.Rmd extension) get compiled into videos (*.mp4 extension), lesson webpages, and assessments, validated with Didactr, and then released online to Leanpub’s web platform. The CBDS team’s goal was to create a free data science course for members of their local community, not to create a production-scale course development platform. However, to create such a course as a volunteer effort on a short time frame (3 months from inception to launch), they had to repurpose end-user programming skills from their research careers to create tools to make themselves more productive. But now that the course is in progress, the team found themselves transitioning from end-user programming into end-user software engineering [16], [18] with issues of tool maintenance, robustness, and updates from new contributors on their minds, especially as some of the original team members depart. For instance, P1 was concerned about the ease with which future CBDS team members could interact with both the tools and course content: “I know the original builders of the program will graduate soon, so lots of knowledge about maintaining the program will leave. 
This fact informed all of the technology decisions.” As our study was being conducted, P2 finished her postdoc and moved to a new institution, and P1 explained the extent to which her departure was already testing the robustness of their tools: “We have already done lots of maintenance and restructuring and our system is working. Team members who have then left are still regularly fixing bugs, which shows how easy the material is to maintain.” A related concern was the extent to which tools could enable future maintenance and expansion of course content. P1 said the motivation behind building tools in the first place was the question: “How can we make (course) maintenance costs as asymptotically close to zero as possible?” But now the tools themselves need to be maintained and updated as well. IX. Discussion We conclude by reflecting on our findings in light of implications for future end-user programming and computing diversity research, end-user programming for education and social good, and the paradox of scale and access to education. A. Implications for Future Research This case study presents only a single snapshot, but we believe it can open the doors to future research on the interplay between end-user programming and diversity in computing. For instance, there are at least an order of magnitude more end-user programmers than professional software developers [58], [59], and they likely come from more diverse demographics than those who specialized in computing fields. Thus, one of the most practical and scalable ways to further broaden diversity in computing is to channel the energy of end-user programmers. How can institutions that employ such programmers foster these kinds of initiatives without making them too bureaucratic and thus undermining their bottom-up grassroots spirit? Can lessons from these volunteer-run efforts inspire new practices for designing collaboratively-constructed educational experiences?
Switching gears, how can systems researchers develop tools to better support the extensibility of end-user programming environments to stretch far beyond their original intended uses? In our case study, the CBDS team repurposed the R language ecosystem, which was originally designed for statistical research, to build an online course development platform. While experts in educational technology could probably come up with a “better” toolchain, the fact is that this is the toolchain these research scientists already know well, so tools should meet them where they are. But must every ecosystem reinvent the same wheels in an ad-hoc non-reusable manner? Or are there more general principles for constructing modern software platforms that we can abstract out into language-agnostic tools that developers can plug into whether they are working in R or Python or JavaScript or even spreadsheet environments? B. DevOps Patterns in End-User Programming for Education Reflecting on our nine interviews in this case study, one recurring theme was how much technical infrastructure was involved behind the scenes to keep CBDS running. It reminded us of how the past decade saw the emergence of DevOps [60]–[62], a practice that combines software development with the operations required to deliver and maintain that software. In industry, DevOps engineers write custom code to monitor the lifecycle of software products (especially web applications) throughout development, deployment, testing, and release. In a similar vein, the CBDS team are not only producing educational content like faculty normally do, but they are also writing custom software to manage the lifecycle of that content. In essence, they are mirroring the patterns in DevOps while performing end-user programming for education. 
Like DevOps engineers, the CBDS team has significant influence on the design of their program (developing content), the programming tasks involved in delivering their “product” (building software to support that content), and monitoring its status (interacting with students to see where they get confused). Every CBDS team member can observe which parts of their system (Figure 1) need to be improved and is empowered to make those improvements. Also like DevOps engineers, the CBDS team repurposed or built custom software tools for each stage of the course lifecycle. They used the same tools that they would normally use to do their research to create course modules, deployed those modules on GitHub so other team members could collaboratively build upon them, developed their own monitoring software to test whether modules were formatted correctly, and released online and iterated based on student feedback. If not for their knowledge of appropriate tools to cobble together, they would likely not have been able to deliver CBDS on top of their normal duties as researchers. That said, some team members like P3 had concerns with the technical challenges of continued maintenance and scaling, given their multifaceted job roles: “How can we be expected to be scientists, security experts, system administrators, and good instructors?” More broadly, we believe that treating educational artifacts like software artifacts by borrowing patterns from fields like DevOps could become a promising strategy as the demand for computing education grows in the coming years. C. End-User Programming for Social Good Another unique aspect of the CBDS project was how end-user programming was applied for broader social good. This project originated from the team’s desire to create data science education opportunities for an underserved adult population that would not otherwise encounter an on-ramp into computing careers.
Although the CBDS team was composed mostly of academic data scientists, it was not their data- or research-related skills that allowed them to build CBDS; rather it was their ability to design a code-based workflow that enabled rapid collaborative iteration on their course materials via end-user programming, software engineering, and DevOps. The speed and relatively low cost with which CBDS was launched opens up the question: Who is in the best position to create such opportunities for underrepresented minorities to enter computing fields? Many existing diversity efforts have their origins at the top levels of organizations, whether it is CEOs diversifying hiring practices or nonprofits offering scholarships. This top-down approach, though impactful, is often not closely connected to the communities which these opportunities are designed for. Conversely, there are thousands of bottom-up local organizations working to help historically disenfranchised communities on the ground, but they are often not aware of paths into viable computing careers, especially in newer professions like data science. CBDS took more of a bottom-up approach: The team was not highly-positioned within the organizational structure of their university, and they relied on a partnership with the HEBCAC community organization to recruit interested students from the local area. This case study points to the compatibility between a grassroots vision for positive social change and the spirit of end-user programming. CBDS was developed such that all team members could contribute to not just course materials but also to the custom software tools they developed for their own workflow. The flat structure and transparency within the team meant that they could address their students’ concerns more quickly. Although the success of the program has yet to be determined, it may be good for top-down decision makers to rethink who should be empowered to help foster greater diversity in computing fields. 
Simply using free software or putting free course materials online is not enough to create lasting change in terms of who has access to computing education. It appears there was no single tool or innovation by the CBDS team that allowed this program to come to fruition. However, this program could not have been built without end-user programming, which equipped each team member with the level of technical agility required to respond to the needs of their student population. We believe that the CBDS team’s ability to write bespoke software to manage their course production pipeline, combined with their proximity to the target student population in their neighborhood, made them well-positioned to deliver such an educational program. In contrast, a traditional university instructor would likely not be able to produce and support a complex technical course outside their normal teaching duties and would also not have funding to hire professional engineering staff to help them. On the other end, a MOOC provider such as Coursera or Udacity would likely not create such a “small” program due to lack of perceived market size and revenue potential; to our knowledge, no major MOOC provider has yet partnered with local community organizations to produce courses for underserved adult populations. D. Rethinking Scale and Access to Educational Opportunities One critique of CBDS might be, “How will this ever scale?” At present, it does not scale, since the CBDS team must staff the online Slack chat channels and in-person office hours themselves on top of their day jobs as researchers. The team has ideas for how to gradually scale, such as using alumni as volunteer tutors for subsequent cohorts and fundraising to buy more Chromebooks. However, we believe the fact that the team did not initially think about scale was what led to this program being developed in the first place. 
If they had thought about scale from the beginning, they would have likely created an ordinary MOOC like what P1 and P3 have done before. People who take MOOCs tend to already have higher incomes and higher levels of prior education; this is especially the case for courses that teach technical subjects such as computer science or data science [46]–[48]. It appears that if a course is designed upfront for reaching the largest possible audience in terms of enrollment, then it does not usually reach populations that have been historically excluded from educational opportunities. Thus, paradoxically, designing courses for scale might mean less access for those who are unlikely to find those resources on their own. In contrast, CBDS presents an alternative to online courses or university outreach programs. It takes an approach where free course materials are designed to scale online but are also formatted in a way so that course creators can iterate on them quickly. But it was the team’s willingness to do what does not scale – adapting to their students by partnering with the HEBCAC community organization – that enabled them to tailor the program continuously as new needs arose throughout deployment. Working with HEBCAC and the students face-to-face does not scale, but we believe this approach provides a path for greater access to computing opportunities. Lastly, CBDS points the way toward future hybrids of online and in-person education. One idea here for scaling is that paid versions of CBDS could partially fund in-person versions that target historically underrepresented groups. With this financial model, those who have had more access to educational opportunities can pay to enroll and thus indirectly fund those who have not had access to the same opportunities.
Beyond financial sustainability, online courses can take advantage of their ability to scale by transforming their online communities into in-person communities of support: Online alumni could be recruited to help with in-person tutoring in underserved communities and also provide guidance and networking opportunities related to computing careers. X. Conclusion We presented a case study of how a team of academic scientists repurposed end-user programming skills and tools from their research to create an adult education program to cultivate diversity in computing. The team provided an easily-accessible learning environment with free Chromebook laptops, a web-based coding platform, and weekly in-person office hours and online help. They customized R Markdown computational notebooks to develop and publish course content. And they built custom tools such as those to validate lessons and to compile textual scripts into lecture videos. This study shows how the bottom-up grassroots spirit of end-user programming can advance social good. Hopefully both small teams and large organizations can repurpose these lessons for positive ends.
Lightweight Interactive Proving inside an Automatic Program Verifier

Sylvain Dailler (Inria, Université Paris-Saclay, F-91120 Palaiseau), Claude Marché (LRI, CNRS & Univ. Paris-Sud, F-91405 Orsay), Yannick Moy (AdaCore, F-75009 Paris)

4th Workshop on Formal Integrated Development Environment, 2018, Oxford, United Kingdom. HAL Id: hal-01936302, https://inria.hal.science/hal-01936302, submitted on 27 Nov 2018.

Among formal methods, the deductive verification approach allows establishing the strongest possible formal guarantees on critical software. The downside is the cost in terms of human effort required to design adequate formal specifications and to successfully discharge the required proof obligations. To popularize deductive verification in an industrial software development environment, it is essential to provide means to progressively transition from simple and automated approaches to deductive verification. The SPARK environment, for development of critical software written in Ada, goes towards this goal by providing automated tools for formally proving that some code fulfills the requirements expressed in Ada contracts.
In a program verifier that makes use of automatic provers to discharge the proof obligations, a need for additional user interaction with proof tasks shows up: either to help analyze the reason for a proof failure or, ultimately, to discharge the verification conditions that are out of reach of state-of-the-art automatic provers. Adding interactive proof features to SPARK is complicated by the fact that the proof toolchain makes use of the independent, intermediate verification tool Why3, which is generic enough to accept multiple front-ends for different input languages. This paper reports on our approach to extend Why3 with interactive proof features and also with a generic client-server infrastructure allowing integration of proof interaction into an external, front-end graphical user interface such as the one of SPARK.

1 Introduction

For the development of software with high safety and security requirements, deductive program verification is an approach that provides access to the highest levels of guarantees. The functional requirements are expressed using formal specification languages. The conformance of the code with such specifications can be established in a fairly automated setting using the automatic program verifiers available nowadays (such as Dafny, F*, KeY, KIV, OpenJML, Verifast, Viper, Why3, etc.). Such verification tools typically proceed by generating verification conditions (VCs for short): mathematical formulas that need to be proven valid. Such VCs are typically discharged using automated solvers, such as SMT (Satisfiability Modulo Theories) solvers. A major issue preventing the diffusion of deductive verification in industrial applications is the cost in terms of human effort required to design adequate formal specifications and to successfully discharge the VCs.
To alleviate this issue, it is important, when no SMT solver is able to solve a given VC, to provide the user with means to investigate the proof failure: is it because the code needs to be fixed, because the program is not sufficiently annotated (e.g. a missing loop invariant), or because the VC is too complex to be discharged by automatic provers (e.g. an induction is needed)? Among the possible means is the generation of counterexamples [9]. Such a feature appears to be useful in particular to fix trivial mistakes, but for complex cases its limitations quickly show up: because of the intrinsic incompleteness of back-end solvers, or more pragmatically because the solvers' proof search is typically interrupted after a given time limit is reached, the counterexamples may be spurious or absent [9]. Moreover, there is no easy means to distinguish a true bug from insufficiently detailed specifications (although there is on-going research in that direction [15]).

This paper presents an approach that we designed in the context of the SPARK verifier for industrial development of safety-critical Ada code [7, 12]. The goal is to provide simplified interactions between the user and the failing VC, so as to investigate a proof task without the need to rely on an external interactive prover. A specificity of SPARK is that the underlying toolchain from the given input Ada program to the VCs makes use of the external intermediate language Why3 [6], which itself provides access to many different automated provers (mainly Alt-Ergo, CVC4 and Z3) but also general-purpose interactive theorem provers (Coq, Isabelle/HOL, PVS).

∗ Work partly supported by the Joint Laboratory ProofInUse (ANR-13-LAB3-0007, https://www.adacore.com/proofinuse) of the French national research organization. © S. Dailler, C. Marché, Y. Moy. This work is licensed under the Creative Commons Attribution-Noncommercial-Share Alike License.
Indeed, an extreme means to investigate a proof failure is to launch an interactive theorem prover on the failing VC and to start writing a manual proof. Such a process proves useful, mainly because writing the detailed steps in which the user believes the proof should work typically helps to discover missing elements in the specifications; in such cases, fixing the annotations may finally allow the SMT solvers to automatically discharge the VC. Also, an ultimate option is to finish the proof completely using the underlying interactive prover [3]. The main drawback in this process is that users must be able to use the general-purpose back-end interactive prover, forcing them to learn a completely different environment, with its own syntax for formulas and its specific proof tactics to discharge proof tasks. Another drawback is that once the user has switched to an external proof assistant to proceed with a proof, there is no easy means to get back to the common environment offered by Why3 to call other automatic provers on the sub-goals generated by interactive proof tactics.

1.1 Related Work

The need for user interaction in the context of industrial use of deductive verification is not new; this issue was identified and taken into account early in the design of industrial tools. The KIV environment, used in large realistic case studies in academia and industry for more than 20 years, considered very early the importance of combining automated and interactive proving [2]. Other early tools that provide some form of interactive proving are the industrial tools Atelier B [1] and Rodin [13], largely used in the railway industry. The KeY environment, in some ways a successor of KIV, is designed to build proofs interactively, with the possibility to call efficient automated provers for solving some leaves of the proof [10, 11].
In the context of general-purpose proof assistants, the need for adding automation to their interactive theorem proving process is evident, for example in the environments ACL2, HOL Light, and more recently in Isabelle/HOL, where the so-called Sledgehammer subtool is able to finish proofs using external SMT solvers [4].

1.2 Common Issues in Automated Program Verification

Interestingly, our analysis of the common situations where fully automatic provers fail, and switching to proof interaction is needed, is very similar to the analysis made in the previous work mentioned above. Here are the main identified cases:

- quantifier instantiation: a proof should be done by an adequate instantiation of a universally quantified hypothesis, which the automatic provers cannot discover. Providing the instantiation by hand helps. Similarly, for proving an existential goal, automated provers typically cannot guess the witness. This witness should be given by hand.
- reasoning by cases: an explicit case reasoning can help the automatic provers.
- controlling the context and the strategy of proof search: a prover would sometimes use most of its available time trying to solve the problem in one direction of thinking while a simple solution exists. Reducing the context manually can help.
- inductive reasoning: some properties require reasoning by induction over an integer, an algebraic datatype, or an inductively defined predicate; such reasoning steps are out of reach of common automated solvers, whereas applying an induction rule by hand usually results in sub-goals that can be automatically discharged.
- non-linear integer arithmetic: it is typically hard for automatic provers, but a few manual proof steps can make such a proof feasible. Similarly, floating-point arithmetic is also very hard for automatic provers.

### 1.3 Contributions and Overview of the Paper

Our goals are shared with the above-mentioned related work.
However, in the context of SPARK, there is an additional issue that does not show up in previous work. Indeed, in the previous work mentioned above, the language of formulas and proof tasks (e.g. proof sequents) is directly the language in which the user writes her input problem: for example, in the context of the B method, the logic language is B set theory, in which the code of the B machines is written; in KeY, the underlying Dynamic Logic incorporates pieces of the Java code to verify. In the context of SPARK, we have the additional issue that a proof task generated by the Why3 VC generator is written in a language that is very different from the Ada input language.

Our approach proceeds in the following steps. In Section 2, we present what we added to the Why3 intermediate tool to provide interactive proving features. Section 3 presents the use of interactive proof from SPARK, inside GNAT Programming Studio, the Ada integrated development environment. Section 4 concludes and discusses remaining future work.

2 Adding Interactive Proving in Why3

Figure 1 presents a general overview of Why3's core architecture. The input files contain code with formal specifications, written in the dedicated language WhyML [8], which essentially consists of a set of functions or procedures annotated with contracts (pre- and post-conditions, loop invariants, etc.). The VC generator produces, from such a file, a set of proof tasks. A proof task, which we can denote as $\Gamma \vdash G$, consists of a set $\Gamma$ of logical declarations of types, function symbols, predicates, and hypotheses, and finally the logical formula $G$ representing the goal to prove. The soundness property of the VC generator expresses that if all generated proof tasks are valid logical statements, then the input program is safe: no runtime error can arise and formal contracts are satisfied. Consider the following toy example of a function written in WhyML.
```whyml
let f (a:array int) (x:int) : int
  requires { a.length >= 1000 }
  requires { 0 <= x <= 10 }
  requires { forall i. 0 <= 4*i+1 < a.length -> a[4*i+1] >= 0 }
  ensures  { result >= 0 }
= let y = 2*x+1 in a[y*y]
```

The function `f` takes as parameters an array `a` and an integer `x`. The first two pre-conditions express two simple requirements on the size of array `a` and an interval of possible values for `x`. The third pre-condition is a bit more complex: it expresses that at indexes that are a multiple of 4 plus 1, the values stored in `a` are non-negative. The function `f` returns an integer, denoted by the keyword `result` in the post-condition, with a post-condition expressing that the value at the returned index is also non-negative. The code simply returns the value stored at the index that is the square of $2x + 1$. For such a code, Why3 generates as the VC the formula

```whyml
forall a:array int, x:int.
  length a >= 1000 /\ (0 <= x /\ x <= 10) /\
  (forall i:int. 0 <= ((4 * i) + 1) /\ ((4 * i) + 1) < length a ->
     a[(4 * i) + 1] >= 0) ->
  (let y = (2 * x) + 1 in
   let o = y * y in
   (0 <= o /\ o < length a) /\ a[o] >= 0)
```

As one may guess, such a formula can quickly become unreadable as code size grows, and is hardly suitable for human inspection.

2.1 Proof Tasks and Transformations

The core of Why3 comes as a software library, written in the OCaml language and with a documented API, that provides in particular data-types for terms, formulas and proof tasks, and a large collection of functions to operate on them. A central notion is that of a transformation: an OCaml function that takes a proof task as input and returns a set of proof tasks. All implemented transformations are expected to be sound, in the sense that if all the resulting proof tasks are valid, then the original task is valid too.
A simple example of such a transformation is splitting, which basically transforms a task of the form $\Gamma \vdash G_1 \land G_2 \land \cdots \land G_k$ into the set of tasks $\Gamma \vdash G_i$ for $1 \leq i \leq k$. The VC generator is designed so as to produce a single proof task for each procedure or function of the input code, as in the example above. To ease the visibility and understanding of the resulting formula, a generalized splitting transformation is typically applied, so as to decompose such a VC into a set of simpler VCs for specific properties to check, e.g. checking that an array index is in bounds, checking the preservation of a loop invariant, checking the pre-condition of a called sub-procedure, etc.

Figure 2: Failed proof attempts shown in Why3IDE

Figure 2 presents a screenshot of Why3's graphical interface (Why3IDE) when given our toy program as input. The left part of that window is the proof task tree. The currently selected line is for the proof task after applying the transformation named split_vc, which implements the generalized splitting transformation mentioned above. Indeed, the role of transformations is two-fold. The first role is to simplify the given task (such as the splitting above). In such a case, a transformation is applied on the user's request inside the graphical interface. The second role is to preprocess a task before sending it to an external prover: typically, an external prover does not support all features of Why3's logic, such as polymorphic types, algebraic data types and the corresponding pattern-matching, recursive or inductive definitions of predicates, etc. Hence, when the user wants to invoke an external prover on a given task, Why3 transparently applies some transformations to make the proof task fit into the logic of the target prover, before using an appropriate printer for the back-end prover's input syntax (e.g. SMT-LIB).
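The notion of a transformation returning a list of tasks can be illustrated by a toy OCaml sketch (simplified types invented for this sketch, not the actual Why3 API):

```ocaml
(* Toy sketch of proof tasks and two sound transformations
   (hypothetical simplified types, not the real Why3 API). *)
type formula =
  | Atom of string
  | Not of formula
  | And of formula * formula

type task = { hyps : formula list; goal : formula }

(* Splitting: turn [hyps |- G1 /\ ... /\ Gk] into the tasks [hyps |- Gi].
   Sound: if every resulting task is valid, so is the original one. *)
let rec split_goal task =
  match task.goal with
  | And (g1, g2) ->
      split_goal { task with goal = g1 } @ split_goal { task with goal = g2 }
  | g -> [ { task with goal = g } ]

(* Case analysis: turn [hyps |- G] into [hyps, P |- G] and [hyps, not P |- G]. *)
let case p task =
  [ { task with hyps = p :: task.hyps };
    { task with hyps = Not p :: task.hyps } ]
```

For instance, `split_goal { hyps = []; goal = And (Atom "A", And (Atom "B", Atom "C")) }` yields three tasks, one per conjunct.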
In Figure 2 it can be seen that the provers Alt-Ergo, CVC4 and Z3 were invoked, but none of them was able to discharge the expected post-condition. It is likely that the combination of non-linear arithmetic and the need for finding an appropriate instantiation of the hypothesis $H$ is the reason why they fail. Mixing quantifiers and arithmetic makes goals very difficult to prove for all categories of automatic provers: SMT solvers' quantifier handling is based on triggers, which do not interact well with arithmetic, while TPTP provers have a more powerful handling of quantifiers but do not support arithmetic. This corresponds to the limitation of automatic provers that we called quantifier instantiation in Section 1.2.

On top of the core architecture of Figure 1, Why3 features two major components: the proof session manager and the graphical interface. Adding support for interactive proving in this global architecture requires extensions that we detail in the subsections below: extensions of the GUI, extensions of the transformation setting of the core architecture, and extensions in the proof session manager. A feature that has important consequences on our approach, presented below, for adding interactive proving in Why3 is the generic handling of transformations. The Why3 library is designed so that an additional user-written transformation can be dynamically loaded at run-time, using a mechanism of registration under a name.

2.2 Extending User Interface

As shown in Figure 2, the graphical interface is naturally where proof tasks are displayed and where the user can decide which transformations to apply and which prover to call. Below the top-right part of the window, where the current proof task is displayed, and above the bottom-right part, where different kinds of messages are displayed, we added a command-line input field where the user can type arbitrary text to form a command.
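For the toy example, the key fact that the automatic provers miss is the identity $(2x+1)^2 = 4(x^2+x)+1$, which exhibits the accessed index `a[y*y]` as an index of the form $4i+1$ covered by hypothesis $H$. A standalone sanity check of this identity and of the array bounds, in plain OCaml (not Why3 input):

```ocaml
(* Plain OCaml sanity check of the arithmetic behind the toy example:
   for 0 <= x <= 10 and y = 2x+1, the accessed index y*y equals 4i+1
   with i = x*x + x, and stays below the minimal array length 1000. *)
let () =
  for x = 0 to 10 do
    let y = 2 * x + 1 in
    let i = x * x + x in
    assert (y * y = 4 * i + 1);  (* the instantiation witness for H *)
    assert (y * y < 1000)        (* index within the array bounds   *)
  done
```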
If we consider our toy example, one possibility to progress towards proving the goal is to replace `y*y` by the term `4*(x*x+x)+1`. To achieve that, the user can directly input the text

`replace y*y 4*(x*x+x)+1`

in the input field and hit the return key. An alternative possible transformation is to instantiate the hypothesis `H` with `x*x+x`, which can be done with the input

`instantiate H x*x+x`

Figure 3 displays the GUI after trying both transformations. As seen on the left, the transformation `replace y*y 4*(x*x+x)+1` generated two subgoals: first, proving the formula after the replacement; second, proving that `y*y = 4*(x*x+x)+1`. Both subgoals are proved by Alt-Ergo. Similarly, the transformation `instantiate H x*x+x` generates one subgoal where an additional hypothesis is present (the instance of `H` for the given particular value of `i`) and is also proved by Alt-Ergo. As can be seen, applying either of these two transformations is enough to finish the proof automatically. Even if this mechanism based on a textual interface may seem old-fashioned, it permits a lot of genericity. We will see below how it simplifies the communication with a front-end such as SPARK. It also permits extra features that prove important in practice, such as searching in the proof context.

### 2.3 Adding Parameters to Proof Transformations

A central design choice towards the introduction of interactive proofs is to reuse the existing infrastructure of transformations. Basically, since that infrastructure already allows the user to select among a given set of transformations to apply on a given proof task, we just have to extend this set, after extending the concept of transformations so that they can take parameters, like the two transformations used above: `replace` is a transformation that takes two terms as input.
`instantiate` takes a hypothesis name and a term. We faced two main issues for this extension. First, transformations can take various kinds of objects as parameters: a term, a formula, a name, a string, etc. This means that, at the level of the API, we need a typing mechanism to declare which types of objects are to be passed as parameters. Second, at the level of the interface, the data submitted by the user is just textual, so we need a generic infrastructure to parse it and turn it into objects of the right kind. At the level of the API, in order to handle the large variability of the kinds of transformation parameters, we were able to use the advanced concept of GADTs (Generalized Algebraic Data Types). An excerpt of the new declaration of the transformation type in the API is as follows (the real one has 20 constructors):

```ocaml
type _ trans_typ =
  | Ttrans_l : (task -> task list) trans_typ
      (** transformation with no argument, and many resulting tasks *)
  | Tstring : 'a trans_typ -> (string -> 'a) trans_typ
      (** transformation with a string as argument *)
  | Tprsymbol : 'a trans_typ -> (Decl.prsymbol -> 'a) trans_typ
      (** transformation with a Why3 proposition symbol as argument *)
  | Tterm : 'a trans_typ -> (Term.term -> 'a) trans_typ
      (** transformation with a Why3 term as argument *)
  | Topt : string * ('a -> 'c) trans_typ -> ('a option -> 'c) trans_typ
      (** transformation with an optional argument; the first string is
          the keyword introducing that optional argument *)
```

To implement a transformation like `instantiate` above, we first have to program it as an OCaml function, say `inst`, of type `prsymbol -> term -> task list`, and then register it under the proper name as follows:

```ocaml
wrap_and_register "instantiate" (Tprsymbol (Tterm Ttrans_l)) inst
```

Not only does this make the transformation available for use in the interface, it also automatically proceeds with the parsing, name resolution and typing of the textual arguments, as prescribed by the type `(Tprsymbol (Tterm Ttrans_l))`. This mechanism based on GADTs is powerful enough to handle optional parameters. For example, the `replace` transformation is declared with type

```ocaml
(Tterm (Tterm (Topt ("in", Tprsymbol Ttrans_l))))
```

which means that a third, optional argument is allowed, of type `prsymbol` and introduced by the keyword `in`, to say that the replacement should be done in the hypothesis of the given name instead of the goal, e.g. “`replace (length a) 1000 in H`”. Notice the large genericity of this mechanism, in particular the keyword used for introducing the optional argument. The genericity also comes from the `wrap_and_register` function, which is defined once and for all and does all the hard work of parsing and typing arguments. In particular, the resolution of variable names given as arguments had to be carefully made consistent with the printing of the task, which can rename variables.

Here is a quick summary of the major transformations with parameters that we added in Why3. They are meant to cover the major needs for interaction listed in Section 1.2.

- case analysis on a formula (`case P`), on algebraic data (`destruct_alg t`), and decomposition of the propositional structure of a hypothesis (`destruct H`).
For example, the transformation `case P`, where `P` is an arbitrary formula, transforms a task $\Gamma \vdash G$ into the two tasks $\Gamma, P \vdash G$ and $\Gamma, \neg P \vdash G$.
- introduction of an auxiliary hypothesis (`cut P`, `assert P`) or term (`pose x t`)
- induction on integers and on inductive predicates
- instantiation as seen above (`instantiate H t_1,...,t_k`), including the existential case (`exists t`), or direct application of a hypothesis to a goal (`apply H`)
- various rewriting and computation transformations: `rewrite H (in H')`, `replace t_1 t_2 (in H)`, `subst x`, `subst_all`, etc.
- context handling: `remove H_1,...,H_k`, `clear_but ...`
- unfolding a definition: `unfold f`
- importing an extra theory: `use_th T`

2.4 Extending The Proof Session Mechanism

A proof session is essentially a record of all the proof tasks generated from a given input file, together with a record of all transformations applied to these tasks. It is indeed an internal representation of the proof task tree displayed in the left part of Figure 2. Such a session can be stored on disk and reloaded to a former state by the user. A crucial feature of the session manager is to manage changes when the input file is modified (e.g. more annotations are added): the manager implements a clever and sound merging operation to discover which parts of the proof session can be reused, which tasks are modified, and which external proofs should be replayed [5]. The Why3 session files do not store any internal representation, to avoid any problem when the Why3 tools themselves evolve. Accordingly, we decided that the arguments of transformations should be stored in their textual form too. This definitely avoids potential problems with changes in internal representations, but some problems with renaming can still occur. For example, an automatically introduced name for a hypothesis, say `H1`, may well be renamed into `H2`, e.g. if an extra annotation is added in the source code.
It is thus perfectly normal that, from time to time, while reloading a proof session, a transformation with arguments no longer applies. In order to avoid losing any sub-proof tree, we implemented the new notion of detached nodes in the proof task tree. These nodes are a record of the state of the previous session, but without any corresponding proof task. We then implemented a mechanism to copy and paste fragments of proof trees from one node to another. This copy-paste mechanism proved very useful in practice for maintaining interactive proofs.

2.5 Examples

We evaluated the new interactive proof features of Why3 on prior examples where some VCs could not be discharged except by using the Coq proof assistant. The paper by Bobot et al. [6] illustrated the use of Why3 on the three challenges of the VerifyThis competition in 2012. On each of these case studies, at least one Coq proof was required. We have reconsidered the first challenge (Longest Repeated Substring) and were able to replace all three Coq proofs with interactive proofs. Interestingly, only very few transformations were needed, because we quickly arrived at simpler subgoals that are discharged by automatic solvers. Another illustrative example is the proof of Dijkstra's shortest path algorithm on graphs: again, we were able to replace Coq proofs with interactive transformations and automatic solvers. We noticed that not only does this simplify the proofs, it also makes them easier to maintain in case of a change in the Why3 implementation or standard library. We still have to evaluate to what extent the interaction using transformations with arguments is easy to use for regular users. Moreover, we need more practice in order to see whether this mechanism is of effective help for debugging proofs, as explained in the introduction.
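Maintaining such proofs across changes relies on the session reload of Section 2.4; its detached-nodes behaviour can be sketched as follows (toy, hypothetical types, not the actual session data structures):

```ocaml
(* Sketch of detached nodes at session reload (toy types, not the
   actual Why3 session data structures). *)
type proof_node = {
  transf : string;        (* recorded transformation, textual form *)
  task : string option;   (* None = detached: no live proof task   *)
  children : proof_node list;
}

(* Re-apply each recorded transformation via [apply]; when it no longer
   applies, keep the recorded subtree but mark it detached instead of
   dropping it. *)
let rec reload apply node =
  match apply node.transf with
  | Some t ->
      { node with task = Some t;
                  children = List.map (reload apply) node.children }
  | None ->
      let rec detach n =
        { n with task = None; children = List.map detach n.children }
      in
      detach node
```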
3 Adding Interactive Proving in SPARK

Although the SPARK verifier, called GNATprove, is based on Why3 for generating VCs and proving them, there is a large gap between the SPARK and WhyML programming languages. Therefore, the interactive proof features of Why3 cannot be used directly by SPARK users. The same issue arose in the past with the counterexample generation features of Why3, which required translation back to SPARK for use inside GNATprove [9]. That issue also involved interactions with users inside different IDEs, Why3IDE for Why3 users and GNAT Programming Studio for SPARK users, but that interaction was one-way only: the counterexamples output by provers were translated back to either Why3 or SPARK syntax and displayed in their respective IDE. Here, we need a two-way interaction where users can input commands (possibly with elements of the code as parameters) and Why3 returns a modified set of proof tasks.

We start by presenting a simple SPARK program that cannot be proved with automatic provers in Section 3.1. Then we describe in Section 3.2 the client-server architecture that we have adopted for two-way communication between the IDE for SPARK programs and Why3. We detail in Section 3.3 how we translate back and forth between user-level names in SPARK and internal names in Why3. Finally, we explain in Section 3.4 how to complete the proof of our example.

3.1 Unprovable Code Example

The code in Figure 4 is a simplified version of an excerpt from a bounded string library. This code contains a post-condition that cannot be proved today by any prover available with SPARK (Alt-Ergo, CVC4, Z3). The reason for this unproved property is characteristic of the kind of problems faced by users of a technology like SPARK. It is in the class of problems called quantifier instantiation, already presented in Section 1.2. The code in Figure 4 computes the location of a sub-list of integers within a list of integers, when the sub-list is contained in the list.
Lists are implemented here as SPARK arrays starting at index 1 and ranging over positive indexes. The post-condition of function Location, introduced by Post, states the following properties: the result of the function ranges from 0 to the length of the list; a positive result is used when the sub-list is contained in the list, and the value 0 is used as result otherwise.

```ada
type List is array (Positive range <>) of Integer
  with Predicate => List'First = 1;

subtype Natural_Index is Integer range 0 .. Positive'Last;

function Contains (Within : List; Fragment : List) return Boolean is
  (Fragment'Length in 1 .. Within'Length
    and then (for some K in 1 .. (Within'Length - Fragment'Length + 1) =>
                Within (K .. (K - 1 + Fragment'Length)) = Fragment))
with Ghost;

function Location (Fragment : List; Within : List) return Natural_Index
  with Post =>
    Location'Result in 0 .. Within'Length
    and then (if Contains (Within, Fragment)
              then Location'Result > 0
              else Location'Result = 0)
is
begin
  if Fragment'Length in 1 .. Within'Length then
    for K in 1 .. (Within'Length - Fragment'Length + 1) loop
      if Within (K .. (K - 1 + Fragment'Length)) = Fragment then
        return K;
      end if;
      pragma Loop_Invariant
        (for all J in 1 .. K =>
           Within (J .. (J - 1 + Fragment'Length)) /= Fragment);
    end loop;
  end if;
  return 0;
end Location;
```

Figure 4: Example of SPARK code with an unprovable post-condition

For the sake of simplicity, we do not state the complete post-condition of Location (that would make precise that the resulting index is the location of the match), but this could be done easily. This post-condition relies on the definition in function Contains of what it means for a sub-list Fragment to be contained in a list Within. This function is only used in specifications, which is enforced by marking it as Ghost.
It is defined as an expression function, quantifying with for some (Ada existential quantification) over a range of scalar values the property that the sub-list Fragment is equal to a moving slice of the list: Within (K .. (K - 1 + Fragment'Length)). This last expression is the slice of array Within from index K to index (K - 1 + Fragment'Length). For the sake of simplicity, function Location naively iterates over each possible location for a match and tests for equality of the corresponding slice with the argument sub-list. The loop invariant repeats the post-condition and specializes it for the \( K^{th} \) iteration of the loop, so that the loop invariant is itself provable and can be used to prove the post-condition. When running GNATprove on this code, it proves automatically the absence of runtime errors (no integer overflows, no array accesses out of bounds, no other runtime check failures), as well as the loop invariant in function Location, but it does not prove the post-condition of that function.

The user interaction is simple here. The user requests that this program be verified by GNATprove inside GPS (GNAT Programming Studio), and a few seconds later receives the output of the tool as messages attached to program lines. Between these two instants, GPS called GNATprove; GNATprove translated the SPARK program into a WhyML program with an equivalent axiomatic semantics for the generation of VCs; an internal program using the Why3 API successively generated VCs by calling the Why3 VC generator and dispatched them to provers; this internal program collected the output of the provers and returned the overall results to GNATprove; and GNATprove generated and adapted results for GPS, which displayed them to the user. All this work occurred transparently for the user, who never had to see the generated WhyML code in Why3IDE or to launch Why3 commands in a terminal.
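To make the specification concrete, here is a plain OCaml port of Contains and Location (for illustration only; SPARK's 1-indexed lists are emulated with 0-indexed OCaml arrays), together with a check of the post-condition on sample inputs:

```ocaml
(* OCaml port, for illustration only, of Contains and Location from
   Figure 4. [slice_eq within k fragment] plays the role of the Ada
   slice equality Within (K .. K - 1 + Fragment'Length) = Fragment. *)
let slice_eq within k fragment =
  let n = Array.length fragment in
  let ok = ref (k - 1 + n <= Array.length within) in
  for j = 0 to n - 1 do
    if !ok && within.(k - 1 + j) <> fragment.(j) then ok := false
  done;
  !ok

let contains within fragment =
  let lw = Array.length within and lf = Array.length fragment in
  lf >= 1 && lf <= lw
  && (let found = ref false in
      for k = 1 to lw - lf + 1 do
        if slice_eq within k fragment then found := true
      done;
      !found)

let location fragment within =
  let lw = Array.length within and lf = Array.length fragment in
  let result = ref 0 in
  if lf >= 1 && lf <= lw then begin
    try
      for k = 1 to lw - lf + 1 do
        if slice_eq within k fragment then begin result := k; raise Exit end
      done
    with Exit -> ()
  end;
  !result

(* Check the post-condition of Location on sample inputs. *)
let () =
  let w = [| 1; 2; 3; 4 |] in
  assert (location [| 2; 3 |] w = 2);   (* match found at index 2 *)
  assert (location [| 5 |] w = 0);      (* no match: result is 0  *)
  assert (contains w [| 2; 3 |] && not (contains w [| 5 |]))
```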
A less-than-ideal process for completing the proof of the post-condition would consist in asking the user to open the session file generated by Why3 in Why3IDE to complete the proof using the interactive proof feature described in Section 2. While this would work, it is not a suitable solution in an industrial context. Indeed, asking users to interact directly with a generated artifact in a different language (the Why3 file) is akin to asking them to debug their programs at the assembly level. While this is possible and sometimes useful, it is best left to the rare occasions when it is really needed; instead, interaction should be done as much as possible at the source code level.

### 3.2 Client-Server Architecture

Most modern IDEs like GPS provide client-server interfaces with various tools such as debuggers or, in the context of formal verification, with proof assistants, such as Emacs Proof General support for Coq, Isabelle, Lego, HOL, etc. We have adopted a client-server architecture to allow two-way communication between the interactive proof module (acting as a server) and the client IDE, which can be Why3IDE or GPS here. The server handles requests from the user (through the IDE), such as proof transformations (see Section 2.3) and direct calls to provers. After the requested action terminates, the server informs the IDE of changes to the proof task tree. For the integration in GPS, we developed both a wrapper in OCaml for the underlying service to act as the server, and a plugin in Python to communicate with the server from GPS. The server takes the session file as its initial argument, gets its input in JSON format on standard input, calls Why3 core services, updates the session file accordingly, and returns its output in JSON format on standard output. The plugin modifies the GPS interface to add a console window for command-line interaction, a window to display VCs, and a window to display the proof task tree.
The plugin translates requests made by the user on the command-line interface into JSON requests that are sent to the server, and translates back the server notifications into updates of the graphical user interface (adding nodes in the proof task tree, changing the VC, etc.). For example, as seen in Figure 5, GPS first starts the server, then the server returns the initial proof task tree, which is printed by GPS. When the user clicks a node, GPS asks for the corresponding proof task, which the server returns and GPS then prints for the user. This goes on until the user exits manual proof, which GPS interprets by sending the exit request: the Why3 interactive server then ends its execution. The death of the process is detected by GPS, which goes back to its normal interface. A schematic of the interactions between the IDE and the server is depicted in Figure 5.

**Figure 5:** Schematic of the interactions between IDE and server (the GPS IDE launches the server, asks for proof task *i*, and finally sends the exit request; the Why3 interactive server notifies the starting proof tree, sends the requested tasks, and saves the session before stopping)

With very little effort, we transformed a generic IDE such as GPS into an elementary proof assistant. As seen in Figure 6, the user interface in GPS is similar to the one in Why3IDE presented in Figure 2. The same windows are present, but they are not located at the same place. From left to right, we can see the SPARK code window, the proof task window, and the proof task tree window. The command-line console is displayed as a bottom panel sharing its window with other panels for Messages (tool output) and Locations (tool messages). The user can type commands for applying transformations and calling provers inside the command-line console, similar to what we saw for Why3IDE.

Figure 6: Example of interactive proof in GPS
One can observe that the VC from Figure 6 is much more complex than the simple VC we generated with Why3 in Figure 2. This is mainly a consequence of the complexity of the WhyML code generated from SPARK. Simple data structures and control flow in SPARK are modeled with much more complex data structures and control flow in WhyML to encode the semantic features of SPARK. This complexity gets exposed in the VC generated by the Why3 VC generator, as it combines the complexity of data structures and control flow. This inherent complexity cannot be eliminated and must be dealt with to present SPARK users with understandable VCs.

### 3.3 Printing of Proof Tasks and Parsing of Transformation Arguments

In order for users to be able to relate the proof task to their code, it is necessary to use the names of source SPARK entities in the proof task instead of the generated Why3 names. This is achieved by creating a mapping from Why3 names to their source SPARK names during the generation of Why3 code from SPARK. This mapping is embedded in the WhyML code using labels, a generic mechanism in Why3 to attach strings to terms. For example, in the following code snippet the label "name:Not_y" is attached to the identifier y:

```plaintext
let f (a:array int) (x:int) : int
  ensures {result = 20} =
  let y "name:Not_y" = 2*x+1 in
  y*y
```

Labels on terms are preserved by VC generation and transformations. For example, the label "name:Not_y" remains attached to the constants derived from y in the final VC obtained after VC generation and transformations. Thus, when printing a proof task, the name of an identifier can be replaced by the name found in the attached label when there is one.
Thus we get the following proof task, where the source SPARK name Not_y is used instead of the generated Why3 name y:

```plaintext
constant x : int

constant Not_y : int = (2 * x) + 1

goal VC f : (Not_y * Not_y) = 20
```

Different Why3 names with the same name label are currently distinguished by appending a unique number to the source SPARK name. The converse problem, interpreting the names of SPARK entities as Why3 names, occurs when the user types a command with arguments referring to the names of SPARK entities. Here, Why3 uses the inverse map, from distinguished SPARK names to the Why3 names associated with a given proof task, to automatically translate the arguments of transformations.

### 3.4 Application to the Unprovable Code Example

With the interactive proof interface in GPS, we complete the proof of the post-condition of Location from Section 3.1. For the sake of simplicity, we only describe the proof of the then branch in this subsection, shown in the screenshot in Figure 6. The root of the proof task tree is green, showing that the initial VC was fully proved. The interactive proof proceeds by deriving a contradiction from the loop invariant property in the last iteration of the loop and the condition of the then branch Contains (Within, Fragment). This requires unfolding the definition of Contains and finding suitable instances of quantified properties. In this case, the proof task tree is linear and a complete proof script is the following:

```plaintext
instantiate contains__def_axiom Within,Fragment
rewrite Hinst in H16
destruct H16
destruct H
destruct H
instantiate H18 k
```

The intuition of the proof is that we need to explain to the tool how to combine the hypotheses coming from the loop invariant and the definition of Contains. The first two transformations (instantiate, rewrite) are used to make the axiomatic definition of Contains, applied to the right arguments, appear as a hypothesis in the context.
The three calls to the destruct transformation are used to destruct the head connective of the hypothesis that appeared. They transform $\Gamma, H : (A \land \exists k. P(k)) \vdash G$ into $\Gamma, H : A, k : \text{int}, H_1 : P(k) \vdash G$. The objective is to use $k$ as an instance for the loop invariant property (to make the contradiction appear). This is exactly what the transformation "instantiate H18 k" does. The quantification disappears and SMT solvers can now solve the goal. This completes the proof of the then branch of the post-condition of Location. The else branch is handled similarly, which completes the proof of the program.

4 Conclusions and Future Work

We brought interactive proving features to the SPARK verification environment for Ada. This was done by conservatively extending the intermediate Why3 tool to allow passing arguments to proof task transformations. The design is generic enough to allow the simple addition of new transformations via Why3's API. The proof session mechanism and the graphical interface have been extended to allow simple user interaction and facilities for proof maintenance. A few program examples were re-proved, showing that former external interactive proofs using Coq can be substituted with light interactive proofs in our new setting. The user interface of Why3 was redesigned in the form of a generic client-server architecture, allowing interactive proving features to be brought to the SPARK front-end inside the regular GPS graphical interface.

Future work will go in several directions. First, we certainly want to enlarge the set of transformations with arguments, to cover more needs for interactive proofs. The needed transformations will be identified from practice. Notice that others are already reusing our API to implement new transformations, for example for doing proofs by reflection for complex non-linear goals [14].
Second, we plan to reuse the client-server architecture to provide an alternative interface for Why3, this time within a web browser. We also plan to bring the interactive proof feature to the Frama-C front-end for C code, by augmenting the existing Jessie plug-in of Frama-C that uses Why3 internally. A third, longer-term work is to allow more customization of the printing of tasks and the parsing of transformation arguments: from SPARK, we would like the terms in proof tasks to be expressed in a more Ada-like syntax, in particular for arrays. For this, we will need to design, in Why3's generic API, a possibility to register printers and parsers. A fourth, even longer-term issue is the question of trust in the implemented transformations. Since we implement more and more complex OCaml code for that purpose, a general need for verifying the soundness of the transformations shows up. It is likely that we will need to design a language of proof terms or proof certificates to achieve this long-term goal.

References
G Julia

Julia is a scientific programming language that is free and open source.¹ It is a relatively new language that borrows inspiration from languages like Python, MATLAB, and R. It was selected for use in this book because it is sufficiently high level² so that the algorithms can be compactly expressed and readable while also being fast. This book is compatible with Julia version 1.7. This appendix introduces the concepts necessary for understanding the included code, omitting many of the advanced features of the language.

G.1 Types

Julia has a variety of basic types that can represent data given as truth values, numbers, strings, arrays, tuples, and dictionaries. Users can also define their own types. This section explains how to use some of the basic types and how to define new types.

G.1.1 Booleans

The Boolean type in Julia, written as `Bool`, includes the values `true` and `false`. We can assign these values to variables. Variable names can be any string of characters, including Unicode, with a few restrictions.

```julia
α = true
done = false
```

The variable name appears on the left side of the equal sign; the value that variable is to be assigned is on the right side.

¹ Julia may be obtained from http://julialang.org.

² In contrast with languages like C++, Julia does not require programmers to worry about memory management and other lower-level details, yet it allows low-level control when needed.

We can make assignments in the Julia console. The console, or REPL (for read, eval, print, loop), will return a response to the expression being evaluated. The `#` symbol indicates that the rest of the line is a comment.
```julia
julia> x = true
true
julia> y = false; # semicolon suppresses the console output
julia> typeof(x)
Bool
julia> x == y # test for equality
false
```

The standard Boolean operators are supported:

```julia
julia> !x # not
false
julia> x && y # and
false
julia> x || y # or
true
```

### G.1.2 Numbers

Julia supports integer and floating-point numbers, as shown here:

```julia
julia> typeof(42)
Int64
julia> typeof(42.0)
Float64
```

Here, `Int64` denotes a 64-bit integer, and `Float64` denotes a 64-bit floating-point value.³ We can perform the standard mathematical operations:

```julia
julia> x = 4
4
julia> y = 2
2
julia> x + y
6
julia> x - y
2
julia> x * y
8
julia> x / y
2.0
```

³ On 32-bit machines, an integer literal like `42` is interpreted as an `Int32`.

Note that the result of `x / y` is a `Float64`, even when `x` and `y` are integers. We can also perform these operations at the same time as an assignment. For example, `x += 1` is shorthand for `x = x + 1`. We can also make comparisons:

```julia
julia> 3 > 4
false
julia> 3 >= 4
false
julia> 3 ≥ 4 # unicode also works, use \ge[tab] in console
false
julia> 3 < 4
true
julia> 3 <= 4
true
julia> 3 ≤ 4 # unicode also works, use \le[tab] in console
true
julia> 3 == 4
false
julia> 3 < 4 < 5
true
```

### G.1.3 Strings

A string is an array of characters. Strings are not used very much in this textbook except to report certain errors. An object of type `String` can be constructed using `"` characters. For example:

```julia
julia> x = "optimal"
"optimal"
julia> typeof(x)
String
```

G.1.4 Symbols

A symbol represents an identifier. It can be written using the `:` operator or constructed from strings:

```julia
julia> :A
:A
julia> :Battery
:Battery
julia> Symbol("Failure")
:Failure
```

G.1.5 Vectors

A vector is a one-dimensional array that stores a sequence of values.
We can construct a vector using square brackets, separating elements by commas:

```julia
julia> x = []; # empty vector
julia> x = trues(3); # Boolean vector containing three trues
julia> x = ones(3); # vector of three ones
julia> x = zeros(3); # vector of three zeros
julia> x = rand(3); # vector of three random numbers between 0 and 1
julia> x = [3, 1, 4]; # vector of integers
julia> x = [3.1415, 1.618, 2.7182]; # vector of floats
```

An array comprehension can be used to create vectors:

```julia
julia> [sin(x) for x in 1:5]
5-element Vector{Float64}:
  0.8414709848078965
  0.9092974268256817
  0.1411200080598672
 -0.7568024953079282
 -0.9589242746631385
```

We can inspect the type of a vector:

```julia
julia> typeof([3, 1, 4]) # 1-dimensional array of Int64s
Vector{Int64} (alias for Array{Int64, 1})
julia> typeof([3.1415, 1.618, 2.7182]) # 1-dimensional array of Float64s
Vector{Float64} (alias for Array{Float64, 1})
```

We index into vectors using square brackets, and we can pull out a range of elements from an array. Ranges are specified using a colon notation:

```julia
julia> x = [1, 2, 5, 3, 1]
5-element Vector{Int64}:
 1
 2
 5
 3
 1
julia> x[1:3] # pull out the first three elements
3-element Vector{Int64}:
 1
 2
 5
julia> x[1:2:end] # pull out every other element
3-element Vector{Int64}:
 1
 5
 1
julia> x[end:-1:1] # pull out all the elements in reverse order
5-element Vector{Int64}:
 1
 3
 5
 2
 1
```

We can perform a variety of operations on arrays. The exclamation mark at the end of function names is used to indicate that the function *mutates* (i.e., changes) the input:

```julia
julia> length(x)
5
julia> [x, x] # a vector containing two copies of x
2-element Vector{Vector{Int64}}:
```

© 2022 Massachusetts Institute of Technology, shared under a Creative Commons CC-BY-NC-ND license. 2024-02-06 20:54:49-08:00, comments to bugs@algorithmsbook.com
```julia
 [1, 2, 5, 3, 1]
 [1, 2, 5, 3, 1]
julia> push!(x, -1) # add an element to the end
6-element Vector{Int64}:
  1
  2
  5
  3
  1
 -1
julia> pop!(x) # remove an element from the end
-1
julia> append!(x, [2, 3]) # append [2, 3] to the end of x
7-element Vector{Int64}:
 1
 2
 5
 3
 1
 2
 3
julia> sort!(x) # sort the elements, altering the same vector
7-element Vector{Int64}:
 1
 1
 2
 2
 3
 3
 5
julia> sort(x); # sort the elements as a new vector
julia> x[1] = 2; print(x) # change the first element to 2
[2, 1, 2, 2, 3, 3, 5]
julia> x = [1, 2];
julia> y = [3, 4];
julia> x + y # add vectors
2-element Vector{Int64}:
 4
 6
julia> 3x - [1, 2] # multiply by a scalar and subtract
2-element Vector{Int64}:
 2
 4
julia> using LinearAlgebra
```

It is often useful to apply various functions elementwise to vectors. This is a form of broadcasting. With infix operators (e.g., +, *, and ^), a dot is prefixed to indicate elementwise broadcasting. With functions like `sqrt` and `sin`, the dot is postfixed:

```julia
julia> x .* y # elementwise multiplication
2-element Vector{Int64}:
 3
 8
julia> x .^ 2 # elementwise squaring
2-element Vector{Int64}:
 1
 4
julia> sin.(x) # elementwise application of sin
2-element Vector{Float64}:
 0.8414709848078965
 0.9092974268256817
julia> sqrt.(x) # elementwise application of sqrt
2-element Vector{Float64}:
 1.0
 1.4142135623730951
```

### G.1.6 Matrices

A *matrix* is a two-dimensional array. Like a vector, it is constructed using square brackets. We use spaces to delimit elements in the same row and semicolons to delimit rows.
We can also index into the matrix and output submatrices using ranges:

```julia
julia> X = [1 2 3; 4 5 6; 7 8 9; 10 11 12];
julia> typeof(X) # a 2-dimensional array of Int64s
Matrix{Int64} (alias for Array{Int64, 2})
julia> X[2] # second element using column-major ordering
4
julia> X[3,2] # element in third row and second column
8
julia> X[1,:] # extract the first row
3-element Vector{Int64}:
 1
 2
 3
julia> X[:,2] # extract the second column
4-element Vector{Int64}:
  2
  5
  8
 11
julia> X[:,1:2] # extract the first two columns
4×2 Matrix{Int64}:
  1   2
  4   5
  7   8
 10  11
julia> X[1:2,1:2] # extract a 2×2 submatrix from the top left of X
2×2 Matrix{Int64}:
 1  2
 4  5
julia> Matrix{Float64} # alias for a 2-dimensional array
Matrix{Float64} (alias for Array{Float64, 2})
```

We can also construct a variety of special matrices and use array comprehensions:

```julia
julia> Matrix(1.0I, 3, 3) # 3×3 identity matrix
3×3 Matrix{Float64}:
 1.0  0.0  0.0
 0.0  1.0  0.0
 0.0  0.0  1.0
julia> Matrix(Diagonal([3, 2, 1])) # 3×3 diagonal matrix with 3, 2, 1 on diagonal
3×3 Matrix{Int64}:
 3  0  0
 0  2  0
 0  0  1
julia> zeros(3,2) # 3×2 matrix of zeros
3×2 Matrix{Float64}:
 0.0  0.0
 0.0  0.0
 0.0  0.0
julia> rand(3,2) # 3×2 random matrix
3×2 Matrix{Float64}:
 0.637127  0.839181
```

Matrix operations include the following:

```julia
julia> X' # complex conjugate transpose
3×4 adjoint(::Matrix{Int64}) with eltype Int64:
 1  4  7  10
 2  5  8  11
 3  6  9  12
julia> 3X .+ 2 # multiplying by scalar and adding scalar
4×3 Matrix{Int64}:
  5   8  11
 14  17  20
 23  26  29
 32  35  38
julia> X = [1 3; 3 1]; # create an invertible matrix
julia> inv(X) # inversion
2×2 Matrix{Float64}:
 -0.125   0.375
  0.375  -0.125
julia> pinv(X) # pseudoinverse (requires LinearAlgebra)
2×2 Matrix{Float64}:
 -0.125   0.375
  0.375  -0.125
julia> det(X) # determinant (requires LinearAlgebra)
-8.0
julia> [X X] # horizontal concatenation, same as hcat(X, X)
2×4 Matrix{Int64}:
 1  3  1  3
 3  1  3  1
julia> [X; X] # vertical concatenation, same as vcat(X, X)
4×2 Matrix{Int64}:
 1  3
 3  1
 1  3
 3  1
julia> sin.(X) # elementwise application of sin
2×2 Matrix{Float64}:
 0.841471  0.14112
 0.14112   0.841471
julia> map(sin, X) # elementwise application of sin
2×2 Matrix{Float64}:
 0.841471  0.14112
 0.14112   0.841471
julia> vec(X) # reshape an array as a vector
4-element Vector{Int64}:
 1
 3
 3
 1
```

### G.1.7 Tuples

A *tuple* is an ordered list of values, potentially of different types. They are constructed with parentheses. They are similar to vectors, but they cannot be mutated:

```julia
julia> x = () # the empty tuple
()
julia> isempty(x)
true
julia> x = (1,) # tuples of one element need the trailing comma
(1,)
julia> typeof(x)
Tuple{Int64}
julia> x = (1, 0, [1, 2], 2.5029, 4.6692) # third element is a vector
(1, 0, [1, 2], 2.5029, 4.6692)
julia> typeof(x)
Tuple{Int64, Int64, Vector{Int64}, Float64, Float64}
julia> x[2]
0
julia> x[end]
4.6692
julia> x[4:end]
(2.5029, 4.6692)
julia> length(x)
5
julia> x = (1, 2)
(1, 2)
julia> a, b = x;
julia> a
1
julia> b
2
```

### G.1.8 Named Tuples

A named tuple is like a tuple, but each entry has its own name:

```julia
julia> x = (a=1, b=-Inf)
(a = 1, b = -Inf)
julia> x isa NamedTuple
true
julia> x.a
1
julia> a, b = x;
julia> a
1
julia> (; :a=>10)
(a = 10,)
julia> (; :a=>10, :b=>11)
(a = 10, b = 11)
julia> merge(x, (d=3, e=10)) # merge two named tuples
(a = 1, b = -Inf, d = 3, e = 10)
```

### G.1.9 Dictionaries

A dictionary is a collection of key-value pairs. Key-value pairs are indicated with a double arrow operator =>.
We can index into a dictionary using square brackets, just as with arrays and tuples:

```julia
julia> x = Dict(); # empty dictionary
julia> x[3] = 4 # associate key 3 with value 4
4
julia> x = Dict(3=>4, 5=>1) # create a dictionary with two key-value pairs
Dict{Int64, Int64} with 2 entries:
  5 => 1
  3 => 4
julia> x[5] # return the value associated with key 5
1
julia> haskey(x, 3) # check whether dictionary has key 3
true
julia> haskey(x, 4) # check whether dictionary has key 4
false
```

### G.1.10 Composite Types

A composite type is a collection of named fields. By default, an instance of a composite type is immutable (i.e., it cannot change). We use the `struct` keyword and then give the new type a name and list the names of the fields:

```julia
struct A
    a
    b
end
```

Adding the keyword `mutable` makes it so that an instance can change:

```julia
mutable struct B
    a
    b
end
```

Composite types are constructed using parentheses, between which we pass in values for each field:

```julia
x = A(1.414, 1.732)
```

The double-colon operator can be used to specify the type for any field:

```julia
struct A
    a::Int64
    b::Float64
end
```

These type annotations require that we pass in an `Int64` for the first field and a `Float64` for the second field. For compactness, this book does not use type annotations, but it is at the expense of performance. Type annotations allow Julia to improve runtime performance because the compiler can optimize the underlying code for specific types.

### G.1.11 Abstract Types

So far we have discussed concrete types, which are types that we can construct. However, concrete types are only part of the type hierarchy. There are also abstract types, which are supertypes of concrete types and other abstract types. We can explore the type hierarchy of the `Float64` type shown in figure G.1 using the `supertype` and `subtypes` functions:

Figure G.1. The type hierarchy for the Float64 type.
```julia
julia> supertype(Float64)
AbstractFloat
julia> supertype(AbstractFloat)
Real
julia> supertype(Real)
Number
julia> supertype(Number)
Any
julia> supertype(Any) # Any is at the top of the hierarchy
Any
julia> using InteractiveUtils # required for using subtypes in scripts
julia> subtypes(AbstractFloat) # different types of AbstractFloats
4-element Vector{Any}:
 BigFloat
 Float16
 Float32
 Float64
julia> subtypes(Float64) # Float64 does not have any subtypes
Type[]
```

We can define our own abstract types:

```julia
abstract type C end
abstract type D <: C end # D is an abstract subtype of C
struct E <: D # E is a composite type that is a subtype of D
    a
end
```

### G.1.12 Parametric Types

Julia supports *parametric types*, which are types that take parameters. The parameters to a parametric type are given within braces and delimited by commas. We have already seen a parametric type with our dictionary example:

```julia
julia> x = Dict(3 => 1.4, 1 => 5.9)
Dict{Int64, Float64} with 2 entries:
  3 => 1.4
  1 => 5.9
```

For dictionaries, the first parameter specifies the key type, and the second parameter specifies the value type. The example has `Int64` keys and `Float64` values, making the dictionary of type `Dict{Int64, Float64}`. Julia was able to infer these types based on the input, but we could have specified them explicitly. While it is possible to define our own parametric types, we do not need to do so in this text.

### G.2 Functions

A function maps its arguments, given as a tuple, to a result that is returned.

#### G.2.1 Named Functions

One way to define a named function is to use the `function` keyword, followed by the name of the function and a tuple of names of arguments:

```julia
function f(x, y)
    return x + y
end
```

We can also define functions compactly using assignment form:

```julia
julia> f(x, y) = x + y;
julia> f(3, 0.1415)
3.1415
```

#### G.2.2 Anonymous Functions

An anonymous function is not given a name, though it can be assigned to a named variable.
One way to define an anonymous function is to use the arrow operator:

```julia
julia> h = x -> x^2 + 1 # assign anonymous function with input x to a variable h
#1 (generic function with 1 method)
julia> h(3)
10
julia> g(f, a, b) = [f(a), f(b)]; # applies function f to a and b and returns array
julia> g(h, 5, 10)
2-element Vector{Int64}:
  26
 101
julia> g(x->sin(x)+1, 10, 20)
2-element Vector{Float64}:
 0.4559788891106302
 1.9129452507276277
```

G.2.3 Callable Objects

We can define a type and associate functions with it, allowing objects of that type to be callable:

```julia
julia> (x::A)() = x.a + x.b # adding a zero-argument function to the type A defined earlier
julia> (x::A)(y) = y*x.a + x.b # adding a single-argument function
julia> x = A(22, 8);
julia> x()
30
julia> x(2)
52
```

G.2.4 Optional Arguments

We can assign a default value to an argument, making the specification of that argument optional:

```julia
julia> f(x=10) = x^2;
julia> f()
100
julia> f(3)
9
julia> f(x, y, z=1) = x*y + z;
julia> f(1, 2, 3)
5
julia> f(1, 2)
3
```

G.2.5 Keyword Arguments

Functions may use keyword arguments, which are arguments that are named when the function is called. Keyword arguments are given after all the positional arguments. A semicolon is placed before any keywords, separating them from the other arguments:

```julia
julia> f(; x = 0) = x + 1;
julia> f()
1
julia> f(x = 10)
11
julia> f(x, y = 10; z = 2) = (x + y)*z;
julia> f(1)
22
```

G.2.6 Dispatch

The types of the arguments passed to a function can be specified using the double colon operator. If multiple methods of the same function are provided, Julia will execute the appropriate method.
The mechanism for choosing which method to execute is called dispatch:

```julia
julia> f(x::Int64) = x + 10;
julia> f(x::Float64) = x + 3.1415;
julia> f(1)
11
julia> f(1.0)
4.141500000000001
julia> f(1.3)
4.4415000000000004
```

The method with a type signature that best matches the types of the arguments given will be used:

```julia
julia> f(x) = 5;
julia> f(x::Float64) = 3.1415;
julia> f([3, 2, 1])
5
julia> f(0.00787499699)
3.1415
```

G.2.7 Splatting

It is often useful to *splat* the elements of a vector or a tuple into the arguments to a function using the `...` operator. For example, if `x = [1, 2, 3]`, then `f(x...)` is equivalent to `f(1, 2, 3)`.

G.3 Control Flow

We can control the flow of our programs using conditional evaluation and loops. This section provides some of the syntax used in the book.

G.3.1 Conditional Evaluation

Conditional evaluation will check the value of a Boolean expression and then evaluate the appropriate block of code. One of the most common ways to do this is with an `if` statement:

```julia
if x < y
    # run this if x < y
elseif x > y
    # run this if x > y
else
    # run this if x == y
end
```

We can also use the ternary operator with its question mark and colon syntax. It checks the Boolean expression before the question mark. If the expression evaluates to true, then it returns what comes before the colon; otherwise, it returns what comes after the colon:

```julia
f(x) = x > 0 ? x : 0;
f(-10) # 0
f(10)  # 10
```

G.3.2 Loops

A loop allows for repeated evaluation of expressions. One type of loop is the while loop, which repeatedly evaluates a block of expressions until the specified condition after the `while` keyword is met. The following example sums the values in the array `X`:

```julia
X = [1, 2, 3, 4, 6, 8, 11, 13, 16, 18]
s = 0
while !isempty(X)
    s += pop!(X)
end
```

Another type of loop is the for loop, which uses the `for` keyword.
The following example will also sum over the values in the array `X` but will not modify `X`:
```julia
X = [1, 2, 3, 4, 6, 8, 11, 13, 16, 18]
s = 0
for y in X
    s += y
end
```
The `in` keyword can be replaced by `=` or `∈`. The following code block is equivalent:
```julia
X = [1, 2, 3, 4, 6, 8, 11, 13, 16, 18]
s = 0
for i = 1:length(X)
    s += X[i]
end
```

G.3.3 Iterators

We can iterate over collections in contexts such as for loops and array comprehensions. To demonstrate various iterators, we will use the `collect` function, which returns an array of all items generated by an iterator:
```julia
julia> X = ["feed", "sing", "ignore"];
julia> collect(enumerate(X)) # return the count and the element
3-element Vector{Tuple{Int64, String}}:
 (1, "feed")
 (2, "sing")
 (3, "ignore")
julia> collect(eachindex(X)) # iterate over indices into a collection; equivalent to 1:length(X)
3-element Vector{Int64}:
 1
 2
 3
julia> Y = [-5, -0.5, 0];
julia> collect(zip(X, Y)) # iterate over multiple iterators simultaneously
3-element Vector{Tuple{String, Float64}}:
 ("feed", -5.0)
 ("sing", -0.5)
 ("ignore", 0.0)
julia> import IterTools: subsets
julia> collect(subsets(X)) # iterate over all subsets
8-element Vector{Vector{String}}:
 []
 ["feed"]
 ["sing"]
 ["feed", "sing"]
 ["ignore"]
 ["feed", "ignore"]
 ["sing", "ignore"]
 ["feed", "sing", "ignore"]
julia> Z = [1 2; 3 4; 5 6];
julia> import Base.Iterators: product
julia> collect(product(X, Y)) # iterate over the Cartesian product of multiple iterators
3×3 Matrix{Tuple{String, Float64}}:
 ("feed", -5.0)    ("feed", -0.5)    ("feed", 0.0)
 ("sing", -5.0)    ("sing", -0.5)    ("sing", 0.0)
 ("ignore", -5.0)  ("ignore", -0.5)  ("ignore", 0.0)
```

### G.4 Packages

A package is a collection of Julia code and possibly other external
libraries that can be imported to provide additional functionality. This section briefly reviews a few of the key packages that we build upon in this book. To add a registered package like Distributions.jl, we can run
```julia
using Pkg
Pkg.add("Distributions")
```
To update packages, we use
```julia
Pkg.update()
```
To use a package, we use the keyword `using` as follows:
```julia
using Distributions
```

### G.4.1 Graphs.jl

We use the Graphs.jl package (version 1.4) to represent graphs and perform operations on them:
```julia
using Graphs
G = SimpleDiGraph(3); # create a directed graph with three nodes
add_edge!(G, 1, 3);   # add edge from node 1 to 3
add_edge!(G, 1, 2);   # add edge from node 1 to 2
rem_edge!(G, 1, 3);   # remove edge from node 1 to 3
add_edge!(G, 2, 3);   # add edge from node 2 to 3
typeof(G)             # graph of type SimpleDiGraph
```
```julia
nv(G) # number of nodes (also called vertices)
3
outneighbors(G, 1) # list of outgoing neighbors for node 1
1-element Vector{Int64}:
 2
inneighbors(G, 1) # list of incoming neighbors for node 1
Int64[]
```

### G.4.2 Distributions.jl

We use the Distributions.jl package (version 0.24) to represent, fit, and sample from probability distributions:
```julia
using Distributions
dist = Categorical([0.3, 0.5, 0.2]) # create a categorical distribution
data = rand(dist) # generate a sample
1
data = rand(dist, 2) # generate two samples
2-element Vector{Int64}:
 1
 3
μ, σ = 5.0, 2.5; # define parameters of a normal distribution
dist = Normal(μ, σ) # create a normal distribution
```
### G.4.3 JuMP.jl

We use the JuMP.jl package (version 0.21) to specify optimization problems that we can then solve using a variety of solvers, such as those included in GLPK.jl and Ipopt.jl:
```julia
julia> using JuMP
julia> using GLPK
julia> model = Model(GLPK.Optimizer) # create model and use GLPK as solver
A JuMP Model
Feasibility problem with:
Variables: 0
Model mode: AUTOMATIC
CachingOptimizer state: EMPTY_OPTIMIZER
Solver name: GLPK
julia> @variable(model, x[1:3]) # define variables x[1], x[2], and x[3]
3-element Vector{JuMP.VariableRef}:
 x[1]
 x[2]
 x[3]
julia> @objective(model, Max, sum(x) - x[2]) # define maximization objective
julia> @constraint(model, x[1] + x[2] ≤ 3) # add constraint
julia> @constraint(model, x[2] + x[3] ≤ 2) # add another constraint
julia> @constraint(model, x[2] ≥ 0) # add another constraint
x[2] >= 0
julia> optimize!(model) # solve
julia> value.(x) # extract optimal values for elements in x
3-element Vector{Float64}:
 3.0
 0.0
 2.0
```

### G.5 Convenience Functions

There are a few functions that allow us to specify the algorithms in this book more compactly.
The following functions are useful when working with dictionaries and named tuples:
```julia
Base.Dict{Symbol,V}(a::NamedTuple) where V =
    Dict{Symbol,V}(n=>v for (n,v) in zip(keys(a), values(a)))
Base.convert(::Type{Dict{Symbol,V}}, a::NamedTuple) where V = Dict{Symbol,V}(a)
Base.isequal(a::Dict{Symbol,<:Any}, nt::NamedTuple) =
    length(a) == length(nt) &&
    all(a[n] == v for (n,v) in zip(keys(nt), values(nt)))
```
```julia
julia> a = Dict{Symbol,Integer}((a=1, b=2, c=3))
Dict{Symbol, Integer} with 3 entries:
  :a => 1
  :b => 2
  :c => 3
julia> isequal(a, (a=1, b=2, c=3))
true
julia> isequal(a, (a=1, c=3, b=2))
true
```
We define `SetCategorical` to represent distributions over discrete sets:
```julia
struct SetCategorical{S}
    elements::Vector{S} # Set elements (could be repeated)
    distr::Categorical  # Categorical distribution over set elements

    function SetCategorical(elements::AbstractVector{S}) where S
        weights = ones(length(elements))
        return new{S}(elements, Categorical(normalize(weights, 1)))
    end

    function SetCategorical(
        elements::AbstractVector{S},
        weights::AbstractVector{Float64}
        ) where S
        ℓ₁ = norm(weights, 1)
        if ℓ₁ < 1e-6 || isnan(ℓ₁)
            return SetCategorical(elements)
        end
        distr = Categorical(normalize(weights, 1))
        return new{S}(elements, distr)
    end
end

Distributions.rand(D::SetCategorical) = D.elements[rand(D.distr)]
Distributions.rand(D::SetCategorical, n::Int) = D.elements[rand(D.distr, n)]
function Distributions.pdf(D::SetCategorical, x)
    sum(e == x ? w : 0.0 for (e,w) in zip(D.elements, D.distr.p))
end
```
```julia
D = SetCategorical(["up", "down", "left", "right"], [0.4, 0.2, 0.3, 0.1]);
rand(D)
rand(D, 5)
pdf(D, "up")
```
Sketched Answer Set Programming

Sergey Paramonov (KU Leuven, Leuven, Belgium, sergey.paramonov@kuleuven.be), Christian Bessiere (LIRMM, CNRS, Montpellier, France, bessiere@lirmm.fr), Anton Dries (KU Leuven, Leuven, Belgium, anton.dries@kuleuven.be), Luc De Raedt (KU Leuven, Leuven, Belgium, luc.deraedt@kuleuven.be)

HAL Id: lirmm-02310677, https://hal-lirmm.ccsd.cnrs.fr/lirmm-02310677, submitted on 10 Oct 2019.

Abstract—Answer Set Programming (ASP) is a powerful modeling formalism for combinatorial problems. However, writing ASP models can be hard. We propose a novel method, called Sketched Answer Set Programming (SkASP), aimed at facilitating this. In SkASP, the user writes partial ASP programs, in which uncertain parts are left open and marked with question marks. In addition, the user provides a number of positive and negative examples of the desired program behaviour. SkASP then synthesises a complete ASP program. This is realized by rewriting the SkASP program into another ASP program, which can then be solved by traditional ASP solvers. We evaluate our approach on 21 well known puzzles and combinatorial problems inspired by Karp's 21 NP-complete problems and on publicly available ASP encodings.

Index Terms—inductive logic programming, constraint learning, answer set programming, sketching, constraint programming, relational learning

I.
INTRODUCTION

Many AI problems can be formulated as constraint satisfaction problems that can be solved by state-of-the-art constraint programming (CP) [34] or answer set programming (ASP) techniques [27]. Although these frameworks provide declarative representations that are in principle easy to understand, writing models in such languages is not always easy. On the other hand, for traditional programming languages, there has been significant attention to techniques that are able to complete [23] or learn a program from examples [17]. The idea of program sketching is to start from a sketched program and some examples, and to complete the program. A sketched program is essentially a program where some of the tests and constructs are left open because the programmer might not know what exact instruction to use. For instance, when comparing two variables X and Y, the programmer might not know whether to use X < Y, X ≤ Y or X > Y, and write X ?= Y instead (while also specifying the domain of ?=, that is, which concrete operators are allowed). By providing a few examples of desired program behaviour and a sketch, the target program can then be automatically found. Sketching is thus a form of "lazy" programming, as one does not have to fill out all details in the programs; it can also be considered as program synthesis, although there are strong syntactic restrictions on the programs that can be derived; and it can be useful for repairing programs once a bug has been detected. Sketching has been used successfully in a number of applications [24, 35, 19] to synthesise imperative programs. It is these capabilities that this paper brings to the field of ASP. As a motivating example, assume one needs to solve the Peacefully Coexisting Armies of Queens, a version of the n-queens problem with black and white queens, where queens of the same color do not attack each other.
One might come up with the following sketched program (where R_w (R_b) and C_w (C_b) stand for the variables representing the row and column of a white (black) queen):

Sketch 1: Peacefully Coexisting Armies of Queens

This program might have been inspired by a solution written in the constraint programming language Essence available from the CSP library [32]. Intuitively, the sketched ASP specifies constraints on the relationship between two queens on the rows (first rule), columns (second rule) and diagonals (third rule), but it also expresses uncertainty about the particular operators that should be used between the variables through the built-in alternatives for ?= (which can be instantiated to one of =, ≠, <, >, ≤, ≥) and for ?+ (for arithmetic operations). When provided with an adequate set of examples, the SkASP solver will then produce the correct program. The key contributions of this paper are the following: 1) we adapt the notion of sketching for use with Answer Set Programming; 2) we develop an approach (using ASP itself) for computing solutions to a sketched Answer Set Program; 3) we contribute some simple complexity results on sketched ASP; and 4) we investigate the effectiveness and limitations of sketched ASP on a dataset of 21 typical ASP programs.

II. ASP AND SKETCHING

Answer Set Programming (ASP) is a form of declarative programming based on the stable model semantics [15] of logic programming [27]. We follow the standard syntax and semantics of ASP as described in the Potassco project [15]. A program is a set of rules of the form a ← a_1, …, a_k, not a_{k+1}, …, not a_n. A positive or negative atom is called a literal; a is a positive propositional literal, called the head; for i between 1 and k, a_i is a positive propositional atom; and for i between k + 1 and n, not a_i is a negative propositional literal. The body is the conjunction of the literals. A rule of the form a ← is called a fact and abbreviated as a.
A rule without a head is called an integrity constraint (a is ⊥ in this case). Conditional literals, written as a : l_1, …, l_n, and cardinality constraints, written as c_min {l_1, …, l_n} c_max, are also used (l_1, …, l_n are literals here, and c_min, c_max are non-negative integers). A conditional atom holds if its condition is satisfied, and a cardinality constraint is satisfied if between c_min and c_max of its literals hold. Furthermore, as ASP is based on logic programming and also allows for variables, denoted in upper-case, the semantics of a rule or expression with a variable is the same as that of its set of ground instances. We restrict the ASP language to the NP-complete subset specified here. For more details on ASP, see [13], [10]. We extend the syntax of ASP with sketched language constructions. Instead of allowing only atoms of the form p(t_1, …, t_n), where p/n is a predicate and the t_i are terms (variables or constants), we now also allow sketched atoms of the form ?q(t_1, …, t_n), where ?q is a sketched predicate variable with an associated domain d_q containing actual predicates of arity n. The meaning of the sketched atom ?q(t_1, …, t_n) is that it can be replaced by any real atom p(t_1, …, t_n) provided that p/n ∈ d_q. It reflects the fact that the programmer does not know which p/n from d_q should be used. Sketched atoms can be used in the same places as any other atom.
We also provide some syntactic sugar for some special cases and variants; in particular, we use a sketched inequality X ?≤ Y, a sketched arithmetic operator X ?+ Y (strictly speaking, these are not sketched predicates but operators, but we only make this distinction where needed), and sketched negation ?not p(X) (which is, in fact, a sketched operator choosing between the original atom p and a new atom that represents its negation). The domain of X ?≤ Y is the set {⊤, =, ≠, <, >, ≤, ≥}, where ⊤ is the comparison that is always satisfied by its arguments; the domain of X ?+ Y consists of arithmetic operators such as + and −; and the domain of ?not p contains p and its negation. An example of a sketched inequality can be seen in Line 1 of Figure 1c; an example of sketched predicates and negation in Line 2 of Figure 1a; and an example of sketched arithmetic in Line 3 of Sketch 1. A sketched variable is a sketched predicate, a sketched negation, a sketched inequality or a sketched arithmetic operator. The set of all sketched variables is referred to as S. Predicate p directly positively (negatively) depends on q iff q occurs positively (negatively) in the body of a rule with p in the head, or p is a sketched predicate and q is in its domain; p depends (negatively) on q iff (p, q) is in the transitive closure of the direct dependency relation. A sketch is stratified iff there is no negative cyclic dependency. We restrict programs to the stratified case. An example is a set of ground atoms. A preference is a function from Θ (possible substitutions) to ℤ.
A substitution Θ is preferred over Θ' given preferences f iff for all s_i → d_i ∈ Θ and s_i → d_i' ∈ Θ' it holds that f(s_i → d_i) ≥ f(s_i → d_i') and at least one inequality is strict. When f is constant, all substitutions are equally preferred and there are effectively no preferences. Because specifying preferences might impose an extra burden on the user, we also provide default preferences for the built-in sketched variables (such as the inequality); cf. the experimental section. The language of Sketched Answer Set Programming (SkASP) supports some of the language features of ASP. It has the following characteristics:
- it allows for a set of rules of the form a ← b_1, …, b_n, not c_1, …, not c_m;
- predicates (such as a predicate p/n or a comparison ≤) and operators (such as arithmetic +, −, ×) in these rules can be sketched;
- aggregates can be used in the body of the rules (stratified; see the extension in Section IV);
- the SkASP program has to be stratified;
- choice rules are not allowed.
The key idea behind our method is that the SkASP program is rewritten into a normal ASP program (with choice rules, etc.) in order to obtain a solution through the use of an ASP solver. As we will see in Theorem 2, the language of SkASP stays within the complexity bounds of normal ASP, which makes the rewriting possible (SkASP ⇒ ASP). Let us now formally introduce the problem of SkASP.

**Definition 1** (The Problem of Sketched Answer Set Programming).
Given a sketched answer set program P with sketched variables S of domain D and preferences f, and positive and negative sets of examples E+ and E−, the Sketched Answer Set Problem is to find all substitutions θ : S ↦ D preferred by f such that Pθ ∪ {e} has an answer set for all e ∈ E+ and for no e ∈ E−. The decision version of SkASP asks whether there exists such a substitution θ.

### III. REWRITING SCHEMA

One might consider a baseline approach that would enumerate all instances of the ASP sketch, and in this way produce one ASP program for each assignment that could then be tested on the examples. This naive grounding and testing approach is, however, infeasible: the number of possible combinations grows exponentially with the number of sketched variables. E.g., for the sketch of the Radio Frequency Problem [7] there are around 10^5 possible assignments to the sketched variables. Multiplied by the number of examples, around a million ASP programs would have to be generated and tested. This is infeasible in practice. The key idea behind our approach is to rewrite a SkASP problem (P, S, D, f, E+, E−) into an ASP program such that the original sketching problem has a solution iff the ASP program has an answer set. This is achieved by 1) inserting decision variables into the sketched predicates, and 2) introducing example identifiers in the predicates. The original SkASP problem is then turned into an ASP problem on these decision variables, and solutions to the ASP problem allow us to reconstruct the SkASP substitution. The rewriting procedure has four major steps: example expansion, substitution generation, predicate reification and constraint splitting.
(Here we follow the notation on meta-ASP already used in the literature [21], [11].)

**Example Identifiers.** To allow the use of multiple examples in the program, every relevant predicate is extended with an extra argument that represents the example identifier. The following steps are used to accommodate this in the program, denoted as metaE(P, S, E+, E−).
1) Let SP be the set of all predicates that depend on a predicate occurring in one of the examples.
2) Replace each literal p(t_1, …, t_n) for a predicate p ∈ SP in the program P by the literal p(E, t_1, …, t_n), where E is a variable not occurring in the program.
3) Add the guard examples(E) (the index of all pos./neg. examples) to the body of each rule in P.
4) For each atom p(t_1, …, t_n) in the i-th example, add the fact p(i, t_1, …, t_n) to P.
5) For each positive example i, add the fact positive(i) to P, and for each negative one, the fact negative(i).
E.g., the rule in Line 2 of Figure 1a becomes Line 9 of Figure 1b, and the example in Line 14 is rewritten as in Line 2.

**Substitution Generation.** We now introduce the decision variables, referred to as metaD(S, D):
1) For each sketched variable s_i with domain D_i, add the choice rule
1 { decision_{s_i}(X) : d_i(X) } 1
2) For each value v in D_i, add the fact d_i(v).
This constraint ensures that each answer set has exactly one value from the domain assigned to each sketched variable. This results in a one-to-one mapping between decision atoms and the sketching substitution θ. An example can be seen in Lines 4 and 5 of Figure 1b.

**Predicate Reification.** We now introduce the reified predicates, referred to as metaR(S, D):
1) Replace each occurrence of a sketched atom s(t_1, …, t_n) in a rule of P with the atom reified_s(D, t_1, …, t_n), and add decision_s(D) to the body of the rule.
2) For each sketched variable s and value d_i in its domain, add the following rule to P:
reified_s(d_i, X_1, …, X_n) ← d_i(X_1, …, X_n).
where the first argument is the decision value for s. Thus, semantically, reified_s(d_i, X_1, …, X_n) is equivalent to d_i(X_1, …, X_n), and decision_s(d_i) indicates that the predicate d_i has been selected for the sketched variable s. Notice that we focused here on the general case of a sketched predicate ?p(…); it is straightforward to adapt this for the sketched inequality, negation and arithmetic. Examples of reification can be seen in Line 7 of Figure 1b for the sketch ?q of the symmetrical predicate X_1 = X_2 and in Lines 11, 12 for reified negation.

**Integrity Constraint Splitting** (referred to as metaC(P)).
1) Replace each integrity constraint ← body by the two rules
← body, positive(E)
negsat(E) ← body, negative(E)
2) And add the following rule to the program:
← negative(E), not negsat(E).
This will ensure that all positive examples and none of the negative examples have a solution. For example, the constraint in Line 4 of Figure 1b is rewritten into a positive constraint in Lines 14, 15 and a negative one in Lines 16, 17.

Theorem 1 (Sound and Complete Sketched Rewriting). A sketched ASP program (P, S, D, f, E+, E−) has a satisfying substitution θ iff the meta program T = metaE(P, S, E+, E−) ∪ metaD(S, D) ∪ metaR(S, D) ∪ metaC(P) has an answer set.

Interestingly, the sketched ASP problem is in the same complexity class as the original ASP program.

Theorem 2 (Complexity of Sketched Answer Set Problem). The decision version of propositional SkASP is in NP.

Proof. Follows from the encoding of SkASP into a fragment of ASP which is in NP.

Another important result is that the preferences do not affect the decision complexity. Proofs can be found in the supplementary materials.
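The substitution-generation step can be illustrated mechanically. The following Python sketch (the function name `meta_d` and the domain values are illustrative, not the paper's implementation) prints the choice rule and domain facts that metaD would emit for a single sketched inequality variable:

```python
def meta_d(sketched_var, domain):
    """Emit the ASP choice rule and domain facts for one sketched variable.

    The choice rule forces exactly one domain value to be chosen per
    sketched variable, giving the one-to-one mapping to substitutions.
    """
    rules = [f"1 {{ decision_{sketched_var}(X) : d_{sketched_var}(X) }} 1."]
    rules += [f"d_{sketched_var}({v})." for v in domain]
    return rules

# Hypothetical sketched inequality ?<= with its six-operator domain.
for line in meta_d("leq1", ["eq", "neq", "lt", "gt", "leq", "geq"]):
    print(line)
```

An ASP solver exploring answer sets of the rewritten program then effectively searches over these decision atoms instead of enumerating one ground program per assignment.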
**Dealing with preferences.** Preferences are, as we shall show in our experiments, useful to restrict the number of solutions. We have implemented preferences using a post-processing approach (which also allows applying the schema to other formalisms such as CP or IDP [8]). We first generate the set of all solutions O (without taking the preferences into account), and then post-process O. Basically, we filter out from O any solution that is not preferred (using tests on pairs (o, o') from O × O). The preferences introduce a partial order on the solutions. For example, assume ?p (?q) can take the value p_1 (q_1) with preference 1 and p_2 (q_2) with preference 2. If (p_1, q_2) and (p_2, q_1) are the only solutions, both are kept because they are incomparable: (1, 2) is not dominated by (2, 1) (and vice versa). If (p_2, q_2) is also a solution, (p_1, q_2) and (p_2, q_1) are removed because they are dominated by (p_2, q_2). While the number of potential answer sets is in general exponential for a sketched ASP, the number of programs actually satisfying the examples is typically rather small (in our experiments, below 10,000 to 20,000). If that is not the case, the problem is under-constrained and needs more examples; no user would be able to go over a million proposed programs.

IV. SYSTEM EXTENSION: AGGREGATES AND USE-CASE

An aggregate #agg is a function from a set of tuples to an integer. For example, #count{Column, Row : queen(Column, Row)} counts the number of instances of queen(Column, Row) at the tuple level. Aggregates are often useful for modeling. However, adding aggregates to non-disjunctive ASP raises the complexity of the answer set existence check, unless aggregate dependencies are stratified [12].
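The pairwise dominance test used in this post-processing step can be sketched as a standard Pareto filter. In the Python sketch below, a solution is represented as a dict mapping each sketched variable to its chosen domain value, and `f` is a preference table; both representations are assumptions for illustration, not the paper's implementation:

```python
def dominates(a, b, f):
    """a dominates b iff a is at least as preferred on every sketched
    variable and strictly more preferred on at least one."""
    at_least = all(f[s][a[s]] >= f[s][b[s]] for s in a)
    strictly = any(f[s][a[s]] > f[s][b[s]] for s in a)
    return at_least and strictly

def filter_preferred(solutions, f):
    """Keep only solutions not dominated by any other solution."""
    return [o for o in solutions
            if not any(dominates(o2, o, f) for o2 in solutions)]

# Preference table for two hypothetical sketched variables ?p and ?q.
f = {"?p": {"p1": 1, "p2": 2}, "?q": {"q1": 1, "q2": 2}}

incomparable = [{"?p": "p1", "?q": "q2"}, {"?p": "p2", "?q": "q1"}]
print(filter_preferred(incomparable, f))  # both kept: (1,2) vs (2,1)

with_best = incomparable + [{"?p": "p2", "?q": "q2"}]
print(filter_preferred(with_best, f))  # only the dominating solution remains
```

This quadratic pairwise filter is adequate precisely because, as noted above, the number of solutions surviving the examples is typically small.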
It is possible to add aggregates to our system under the following restrictions: we stay in the stratified case; aggregates occur in the body in the form N = #agg{…}; and a sketched aggregate, written with the keyword ?#, can be instantiated to max, min, count or sum. This immediately allows us to model problems such as Equal Subset Sum (for details, see the repository), where one needs to split a list of values, specified as a binary predicate val(ID, Value), into two subsets such that the sums of both subsets are equal. Essentially, we sketch a constraint of the form :- S1 ≠ S2, S1 = ?#{V, X : val(X, V), subset1(X)}, … Formally, each aggregate can be seen as an expression of the form S = #agg{Z_1, …, Z_n : cond(X_1, …, X_k, Y_1, …, Y_h, Z_1, …, Z_n)}, where S is an integer output, and Y_1, …, Y_h, shortened as Y (X and Z are the same kind of shorthand), are bound to other atoms in the rule, to which we refer as external(Y) ("external" with respect to the condition in the aggregate; it is simply a shorthand for a conjunction of atoms which share variables with the condition in the aggregate). To give an example of X, Y, Z in a simple context: if we were to compute an average salary per department in a company, we might write a rule of the form avg_sal(A, D) :- A = #avg{S, N : salaries(N, S, D)}, department(D). Then Z consists of the variable S; D is the external variable (with respect to the condition in the aggregate), i.e., Y; and X is composed of the variable N, since it is neither used in the aggregation nor in the other atoms outside of the aggregate.
A sketched aggregate ?#agg can be reified similarly to the regular sketched atoms, i.e.:
reified(S, sum, Y) ← S = #sum{Z : cond(X, Y, Z)}, external(Y)
and similarly for the other aggregate functions; the same rules, e.g., the example extension, apply to aggregate reification. With aggregates we can sketch a significantly larger class of problems. Consider the problem from the Functional Pearls collection: the "Finding celebrities" problem [51]. Problem statement: "Given a list of people at a party and for each person the list of people they know at the party, we want to find the celebrities at the party. A celebrity is a person that everybody at the party knows but that only knows other celebrities. At least one celebrity is present at the party." The sketch core looks as follows (names are shortened):

n(N) :- N = ?#{P : p(P)}
:- c(C), p(C), n(N), S = ?#{P : k(P, C), p(P)}, S < N-1
:- c(C), p(C), not c(P), k(C, P)

The last rule is an integrity constraint verifying that no celebrity, c, knows a person who is not a celebrity. The first line sketches a rule that should find what aggregation metric on the people (unary predicate person, p) should be used in the problem. The sketched rule in the second line makes use of this metric, denoted as n, and says that an aggregation should be performed on the binary "knows" predicate, k (indicating that two persons know each other); so the outcome of the sketched aggregation on the connections between people should be compared to an overall metric on all people individually. (The full ASP code is available at hakank.org/answer_set_programming/finding_celebrities.lp.)

V. EXPERIMENTAL EVALUATION

For the experimental evaluation we have created a dataset consisting of 21 classical combinatorial problems, among which most are NP-complete.
For the problem list and precise sketch specifications used in the experiments, we refer to Table I. All problems, their code, and implementation details can be found in the accompanying Github repository: https://github.com/SergeyParamonov/sketching

**Dataset of Sketches.** The key challenge in evaluating program synthesis techniques such as SkASP is the absence of benchmark datasets (as available in more typical machine learning tasks). At the same time, although many example ASP programs are available in blogs and books or come with software, these typically employ advanced features (such as incremental grounding, optimization, or external sources) that are not yet supported by SkASP. Therefore we had to design our own dataset in a systematic way (and put it in the public domain). The dataset is based on a systematic concept (the 21 problems by Karp). When we could find encodings for these problems (such as Sudoku in Figure 1c from [18] and Hamiltonian Cycle in Figure 1a from [13]), we took these encodings; in all other cases we developed a solution according to the standard generate-and-test development methodology of ASP. Specifically (see Q5), we looked for different encodings in the public domain of ASP's favorite, the N-queens problem (these encodings can tackle even its NP-complete version [16]). After creating all the ASP programs, we turned them into sketches by looking for meaningful opportunities to use sketched variables. We introduced sketched variables to replace operators (equalities and inequalities), to replace arithmetic (such as plus and minus), to decide whether to use negated literals or not, and to make abstraction of predicates for which another predicate existed with the same signature. Finally, we had to select the examples in a meaningful way, that is, we selected examples that would be informative (as a user of SkASP would also do).
Positive examples were selected more or less at random; negative examples are meant to violate one or more of the properties of the problem. Furthermore, we also tried to select examples that carry different information (again, as a user of SkASP would do). We selected between 4 and 7 examples for each model. Where relevant in the experiments, we sampled the sketched variables (e.g., Q3) or the examples (e.g., Q1).

**Experimental questions** are designed to evaluate how usable SkASP is in practice. Users generally want to provide only a few examples (Q1-Q3), to obtain a small number of solutions, ideally only one (Q1-Q3), to use small examples (Q4), and to obtain correct solutions (all); they want to know whether and when to use preferences (Q5), and how robust the technique is to changes in the encoding (Q5), as it is well known in ASP that small changes in the encoding can have large effects. Finally, they are typically interested in how the learned programs change as the sketches become more complex (Q5). With this in mind, we have designed and investigated the following experimental questions:

- **Q1:** What is the relationship between the number of examples and the number of solutions? How many examples does it take to converge?
- **Q2:** Are preferences useful?
- **Q3:** What is the effect of the number of sketched variables on the convergence and correctness of the learned programs?
- **Q4:** Do models learned on examples with small parameter values generalize to models with larger parameter values?
- **Q5:** What is the effect of encoding variations on the number of solutions and their correctness?

**Implementation details and limitations.** The SkASP engine is written in Python 3.4 and requires pyasp. All examples have been run on 64-bit Ubuntu 14.04 and tested with Clingo 5.2.0. The current implementation does not support certain language constructs such as choice rules or optimization.
We use the default preferences in the experiments for the built-in inequality sketch $X ?= Y$: namely, $=$ and $\neq$ have equal maximal preference. A user can redefine the preferences. Our experiments indicate that for the other sketched types (e.g., arithmetic) no default preferences are needed.

We investigate Q1 by measuring the impact of the number of examples on the number of solutions of the 21 SkASP problems. An interesting observation is that even if the user wants to solve, say, the $20 \times 20$ Latin Square, she does not need to provide examples of size $20 \cdot 20 = 400$. She can simply provide $3 \times 3$ examples and SkASP will learn the generic Latin Square program (see Figure 13). Figure 2a shows how the number of solutions for some of our 21 SkASP problems depends on the number of examples. In some cases, 7 examples are sufficient to converge to a single solution, e.g., FAP and B&W Queens. On some other problems, however, after 7 examples there still remain many solutions (on average 18 for problems that do not converge).
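The reason small examples suffice is that the generic Latin Square program imposes only size-independent row and column distinctness, so a program learned from $3 \times 3$ examples applies unchanged to $20 \times 20$ instances. A hypothetical Python checker for those two constraints makes this size-genericity visible:

```python
def is_latin_square(grid):
    """True iff grid is square and no value repeats within any row
    or any column. The check is size-generic: the same code validates
    3x3 and 20x20 squares, mirroring how the learned ASP program
    generalizes across instance sizes."""
    n = len(grid)
    rows_ok = all(len(row) == n and len(set(row)) == n for row in grid)
    cols_ok = all(len({grid[r][c] for r in range(n)}) == n
                  for c in range(n))
    return rows_ok and cols_ok
```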
Figure 2b reports the same information as Figure 2a for all 21 problems: the average number of solutions; the average on the 9 problems that converge within 7 examples, referred to as the easy problems; and the average on the 12 that still have several solutions after 7 examples, referred to as the hard problems.

| Problem | # Sketched | # ?= | # ?+ | # ?not | # ?p | # Rules |
|---|---|---|---|---|---|---|
| Graph Clique | 3 | 1 | 0 | 0 | 2 | 4 |
| 3D Matching | 3 | 3 | 0 | 0 | 0 | 1 |
| Graph Coloring | 8 | 4 | 0 | 0 | 3 | 2 |
| Domination Set | 3 | 0 | 0 | 1 | 2 | 5 |
| Exact Cover | 7 | 2 | 0 | 1 | 4 | 3 |
| Sudoku | 5 | 4 | 0 | 1 | 0 | 4 |
| B&W Queens | 5 | 3 | 2 | 0 | 0 | 3 |
| Hit Set | 3 | 0 | 0 | 1 | 2 | 2 |
| FAP | 3 | 0 | 0 | 1 | 2 | 2 |
| Feedback Arc Set | 4 | 0 | 0 | 2 | 2 | 3 |
| Latin Square | 4 | 4 | 0 | 0 | 0 | 2 |
| Edge Domination | 3 | 0 | 0 | 1 | 2 | 5 |
| FAP | 5 | 3 | 2 | 0 | 0 | 3 |
| Set Packing | 4 | 2 | 0 | 0 | 2 | 1 |
| Clique Cover | 4 | 3 | 0 | 1 | 0 | 3 |
| Feedback Set | 5 | 0 | 0 | 5 | 0 | 3 |
| Edge Coloring | 3 | 3 | 0 | 0 | 0 | 3 |
| Set Splitting | 5 | 2 | 0 | 1 | 2 | 3 |
| N Queens | 6 | 4 | 2 | 0 | 0 | 3 |
| Vertex Cover | 3 | 0 | 0 | 1 | 2 | 4 |
| Subg. Isomorph. | 5 | 2 | 0 | 1 | 2 | 4 |

Table I: Dataset summary: the number of sketched variables, of rules, and of particular types of sketched variables; e.g., "# ?not" indicates how many atoms with sketched negation are in the program.

When SkASP does not converge to a unique solution, this leaves the user with choices, often amongst equivalent ASP programs, which is undesirable. For problems that do not converge after a few examples, we propose to use preferences, as provided by our SkASP framework. We use the default preference described earlier. We investigate Q2 by measuring again the impact of the number of examples on the number of solutions. In Figure 2c, we observe that all problems converge within 7 examples (under default preferences). The impact of preferences on the speed of convergence is even more visible on the whole set of problems, as reported in Figure 2b. The number of solutions with preferences is smaller, and often much smaller, than without preferences, whatever the number of examples provided. With preferences, all our 21 problems are learned with 7 examples.

To analyze the number of solutions in Q3, we look into the convergence of FAP by varying the number of sketched variables. The original sketched program of FAP contains 5 sketched variables. We vary it from 2 to 5 by turning 3, 2, 1, or 0 sketched variables into their actual value (chosen randomly and repeated over multiple runs).
As expected, we observe in Figure 2d that the more sketched variables there are in the sketch, the more slowly the number of solutions decreases. Furthermore, the number of sketched variables has a greater impact on the convergence without preferences, as we see in Figure 2e. After 3-5 examples under preferences we have fewer than 10 solutions, while without preferences there are still dozens or hundreds of solutions.

To analyze correctness in Q3, we first need to define it. Informally, we mean that the program classifies arbitrary examples correctly, i.e., positive as positive, etc. A typical metric to measure this is accuracy. However, there are no well-defined arbitrary positive and negative examples for most problems: what is an arbitrary random example for Feedback Arc Set? Problems like Sudoku and N-queens do have standard examples because they are parameterized with a single parameter, which has a default value. Furthermore, for the standard 8-queens problem we know all solutions analytically, i.e., 92 solutions. Another issue is that the negative and positive classes are unbalanced. The usual way to address this issue is to use precision, i.e., \(\frac{\text{True Positive}}{\text{True Positive} + \text{False Positive}}\). (Recall is typically one because the incorrect programs produce way too many solutions, which include the correct ones.) In Figure 2a we see that in all cases we were able to reach the correct solution (here the locations of sketched variables were fixed as specified in the dataset), while increasing the number of sketched variables generally decreases the precision.

To investigate Q4, we have used the Latin Square from Listing 14. We have used examples for the $3 \times 3$ Latin Square, and verified correctness on the $4 \times 4$ Latin Square (which can be checked analytically because all solutions are known). We have discovered that there is an inverse dependency between the number of solutions and accuracy, see Figure 3a.
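The precision metric used above can be computed with a small hypothetical harness, where a candidate program is represented by its acceptance function over labeled examples (this is an illustration of the metric, not SkASP's evaluation code):

```python
def precision(accepts, positives, negatives):
    """TP / (TP + FP) for a candidate program; 'accepts' plays the
    role of running the learned ASP program on an example."""
    tp = sum(1 for e in positives if accepts(e))
    fp = sum(1 for e in negatives if accepts(e))
    return tp / (tp + fp) if tp + fp else 0.0
```

An over-general candidate accepts too many examples, so its false positives rise and its precision drops, which matches the observation that recall stays near one while precision separates correct from incorrect candidates.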
This happens because there are typically very few useful or "intended" programs while there are a lot of incorrect ones.

To investigate Q5, we have focused on the N-queens problem and collected several encodings from multiple sources: Potassco, Hakank.org, an ASP course by Tran Cao Son, and our own encoding. While all the encodings model the same problem, they show significant variations in expressing constraints. To reduce the bias in how the sketched variables are introduced and to measure the parameters systematically, we pick sketched variables randomly (inequalities and arithmetic) and use the same examples from our dataset (randomly picking the correct amount) for all models. In Figure 5a, while there is a certain variation in the number of solutions, the encodings demonstrate similar behavior. For each encoding we have introduced 5 sketched variables and measured the number of solutions and precision. In Figure 5b we see that there is indeed a slight variation in precision, with 3 out of 4 encodings clearly reaching above 90% precision (one of them reaching 100%) and one reaching 82%. Thus, despite variations in encoding, they generally behave similarly on the key metrics. The results have been averaged over 100 runs.

Overall, we observe that only a few examples are needed to converge to a unique solution or a small group of equivalent solutions. An example where such equivalent solutions are found is the edge coloring problem, where two equivalent (for undirected graphs) constraints are found:

\[ :- \text{color}(X,Y_1,C), \text{color}(X,Y_2,C), Y_1 \neq Y_2. \]
\[ :- \text{color}(X_1,Y,C), \text{color}(X_2,Y,C), X_1 \neq X_2. \]

For this problem these two constraints are equivalent and cannot be differentiated by any valid example. An interesting observation we made in these experiments is that the hardness (e.g., in terms of runtime) of searching for a solution of a problem is not directly connected to the hardness of learning the constraints of this problem.
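The equivalence of the two edge-coloring constraints can be checked directly: on a symmetric color relation (each undirected colored edge stored in both directions), both constraints reject exactly the same relations. A hypothetical Python check of this claim:

```python
def violates_first(facts):
    # ':- color(X,Y1,C), color(X,Y2,C), Y1 != Y2.'
    return any(x1 == x2 and c1 == c2 and y1 != y2
               for (x1, y1, c1) in facts for (x2, y2, c2) in facts)

def violates_second(facts):
    # ':- color(X1,Y,C), color(X2,Y,C), X1 != X2.'
    return any(y1 == y2 and c1 == c2 and x1 != x2
               for (x1, y1, c1) in facts for (x2, y2, c2) in facts)

def symmetric(edges):
    # store each undirected colored edge in both directions
    return {(x, y, c) for (a, b, c) in edges
            for (x, y) in ((a, b), (b, a))}
```

Because symmetry swaps the roles of the two endpoint arguments, any violation of one constraint induces a violation of the other, which is why no valid (undirected) example can separate them.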
This can be explained by the incomparability of the search spaces: SkASP searches through the space of sketched variables, which is usually much smaller than the space of the decision variables of the problem to learn.

VI. RELATED WORK

The problem of sketched ASP is related to a number of topics. First, the idea of sketching originates from the area of programming languages, where it relates to so-called self-completing programs [25], typically in C [24] and in Java [19], where an imperative program has a question mark instead of a constant and a programmer provides a number of examples to find the right substitution for it. While sketching has been used in imperative programming languages, it has, to the best of the authors' knowledge, never been applied to ASP and constraint programming. What is also new is that the sketched ASP is solved using a standard ASP solver, i.e., ASP itself.

Second, there is a connection to the field of inductive (logic) programming (ILP) [9], [28], [17]. An example is meta-interpretive learning [29], [30], where a Prolog program is learned based on a set of higher-order rules, which act as a kind of template that can be used to complete the program. However, meta-interpretive learning differs from SkASP in that it induces full programs and, like other ILP methods, pursues a search- and trace-based approach guided by generality, whereas SkASP uses a constraint solver (i.e., ASP itself) directly. Furthermore, the target is different in that ASP programs are learned, which include constraints. SkASP relates to meta-interpretation in ASP [11] in rule and decision materialization. The purpose is, however, different: they aim at synthesizing a program of higher complexity (\(\Sigma_1\)) given programs of lower complexity (\(NP\) and Co-NP). There are also interesting works in the intersection of ILP, program synthesis, and ASP [21], [23], [33].
The ILASP system [22] learns an ASP program from examples and a set of modes, while minimizing a metric, typically the number of atoms. This program, learned completely from scratch, is not necessarily the best program from the user's point of view, and learning from scratch may limit the possibility to localize the uncertainty based on the user's knowledge of the problem. Indeed, if all sketched predicates are added to the modes with corresponding background knowledge, then the set of hypotheses of sketched ASP is a subset of that of ILASP. However, if we specify a sketched constraint $:- p(X), q(Y), X \neq Y$ with the negative example $\{p(1), q(2)\}$ as modes for ILASP [22], it would learn a program like $:- p(X)$ (a minimal program), but that is clearly not the program intended by the sketch. Furthermore, we compute all preferred programs instead of a single solution.

Third, there is also work on constraint learning, where systems such as CONACQ [4], [2] and QUACQ [3] learn a set of propositional constraints, and ModelSeeker [1] learns global constraints governing a particular set of examples. The subject has also been investigated in the ILP setting [20]. However, the idea in all these approaches is to learn the complete specification of a CSP from scratch. Our setting is probably more realistic from a user perspective, as it allows the user to exploit the knowledge that she no doubt possesses about the underlying problem, and it also requires far fewer examples. On the other hand, SkASP also makes, as other sketching approaches do, the strong assumption that the intended target program is an instance of the sketched one. This may not always be true, for instance, when rules are missing in the program. This is an interesting issue for further research.

Fourth, our approach is related to debugging of ASP [14]. Unlike SkASP, such debuggers can be used to locate bugs, but they typically do not provide help in fixing them.
On the other hand, once a bug is identified, SkASP could be used to repair it by introducing a sketch and a number of examples. The approach of [20] is based on the classical ILP techniques of generalization and specialization and does not provide the freedom to indicate uncertain parts of the program.

VII. DISCUSSION AND CONCLUSIONS

Our contribution is four-fold: we have introduced the problem of sketched ASP; we have provided a rewriting schema for SkASP; we have created a dataset of sketches; and we have evaluated our approach empirically, demonstrating its efficiency and effectiveness. User interaction is an interesting future direction, namely suggesting constraints and examples. For the former, if we are not able to reject a negative example, we can construct a constraint that would reject the negative examples and none of the positive examples. As for the examples, if we have two solutions to a problem, we can generate an example discriminating between them and ask the user to clarify it; this might not always be possible, since symmetric assignments might lead to semantically identical programs. In practice, however, this might be an important addition to simplify sketching for end users. Another direction is to incorporate non-constant preference handling into the model using extensions of ASP for preference handling, such as asprin [6].

Footnote: During the experiments, we stumbled upon a peculiar bug. One ASP encoding that we discovered in a public repository worked mostly by pure luck. The constraint

:- queen(X1,Y1), queen(X2,Y2), X1 < X2, abs(Y1-X1) == abs(Y2-X2).

works because abs is not actually an absolute value but an interpreted function; essentially the constraint checks X == Y, and that is indeed the found solution. (This kind of bug would be extremely hard to find using traditional debuggers, since technically the encoding produced correct solutions.)
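The intended meaning of that diagonal constraint can be contrasted with what the encoding effectively computed. A hypothetical Python illustration (assuming, per the footnote, that the buggy encoding effectively compared Y - X offsets rather than taking absolute values):

```python
def attacks_diagonally(q1, q2):
    """Intended N-queens test: queens at (x1, y1) and (x2, y2) share
    a diagonal iff the absolute row and column distances are equal."""
    (x1, y1), (x2, y2) = q1, q2
    return abs(x1 - x2) == abs(y1 - y2)

def buggy_check(q1, q2):
    """What the repository constraint effectively computed when 'abs'
    acted as an interpreted function (assumption based on the
    footnote): equality of y - x offsets, i.e. one diagonal
    direction only."""
    (x1, y1), (x2, y2) = q1, q2
    return y1 - x1 == y2 - x2
```

For example, queens at (1, 3) and (3, 1) lie on an anti-diagonal: the intended test flags the attack while the buggy one does not, yet the encoding still happened to produce correct solutions.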
Also, while working on the aggregate extension use-case, we discovered a subtle bug: the case of a single celebrity was not handled correctly. In both cases, the author has been contacted and the models have been updated.

REFERENCES

[23] Law, M., Russo, A., Broda, K.: Iterative learning of answer set programs from context dependent examples. TPLP 16(5-6), 834-848 (2016), https://doi.org/10.1017/S1471068416000351
EmStar: A Software Environment for Developing and Deploying Wireless Sensor Networks

Lewis Girod, Jeremy Elson, Alberto Cerpa, Thanos Stathopoulos, Nithya Ramanathan, Deborah Estrin
Center for Embedded Networked Sensing, University of California, Los Angeles, Los Angeles, CA 90095 USA
{girod,jelson,cerpa,thanos,nithya,destrin}@cs.ucla.edu

Published 2004-06-27; peer reviewed. Permalink: https://escholarship.org/uc/item/9tv972pr

Abstract

Many Wireless Sensor Network (WSN) applications are composed of a mixture of deployed devices with varying capabilities, from extremely constrained 8-bit "Motes" to less resource-constrained 32-bit "Microservers". EmStar is a software environment for developing and deploying complex WSN applications on networks of 32-bit embedded Microserver platforms, and for integrating with networks of Motes. EmStar consists of libraries that implement message-passing IPC primitives; tools that support simulation, emulation, and visualization of live systems, both real and simulated; and services that support networking, sensing, and time synchronization. While EmStar's design has favored ease of use and modularity over efficiency, the resulting increase in overhead has not been an impediment to any of our current projects.

1 Introduction

The field of wireless sensor networks (WSNs) is growing in importance [1], with new applications appearing in the commercial, scientific, and military spheres, and an evolving family of platforms and hardware. One of the most promising signs in the field is a growing involvement by researchers outside the networking systems field who are bringing new application needs to the table. A recent NSF Workshop report [4] details a number of these needs, building on early experience with deployments (e.g., GDI [7], CENS [23], James Reserve [26]).
Many of these applications lead to “tiered architecture” designs, in which the system is composed of a mixture of platforms with different costs, capabilities and energy budgets [5] [21]. Low capability nodes, often Crossbow Mica Motes [24] running TinyOS [17], can perform simple tasks and provide long life at low cost. The high capability nodes, or Microservers, generally consume more energy, but in turn can run more complex software and support more sophisticated sensors. EmStar is a software environment targeted at Microserver platforms. Microservers, typically iPAQ or Crossbow Stargate platforms, are central to several new applications at CENS. The Extensible Sensing System (ESS) employs Microservers as data sinks to collect and report microclimate data at the James Reserve. A proposed 50-node seismic network will use Stargates to measure and report seismic activity using a high-precision multichannel Analog to Digital Converter (ADC). Ongoing research in acoustic sensing uses iPAQ hardware to do beamforming and animal call detection. Although EmStar systems do not target Motes as a platform, EmStar systems can easily interoperate with Motes and Mote networks. In this paper, we intend to show how EmStar addresses the needs of WSN applications. To motivate this discussion, Figure 1 details a hypothetical application for which EmStar is well-suited. In this example, several nodes collaborate to acoustically localize an animal based on its call—an improved version of our system described in [8]. The large dashed box shows how the system might be implemented by combining existing EmStar components (gray boxes) with hypothetical application-specific components (light gray dashed boxes). Because EmStar systems are composed from small reusable components, it is easy to plug new application-specific components into many different layers of the system. 
Although most of the implemented components in the diagram are described in more detail later in the paper, we will briefly introduce them here. The emrun module serves as a management and watchdog process, starting up, monitoring, and shutting down the system. The emproxy module is a gateway to a debugging and visualization system. The udp, neighbors, and MicroDiffusion modules implement a network stack designed to work in the context of wireless links characterized by highly variable link quality and network topology. The timehist, syncd, and audiod modules together implement an audio sampling service that supports accurate correlation of time series across a set of nodes. The hypothetical modules include FFT, which computes a streaming Fourier transform of the acoustic input, detect, which is designed to detect a particular acoustic signature, and collabdetect, which orchestrates collaborative detection across several nodes. This application demonstrates several of the attributes that are special to WSNs. First, the nodes in the system have a higher probability of failure or disconnection than many Internet-based systems. Wireless connectivity and network topology can vary greatly, and systems deployed “in the wild” are also subject to hardware failures with higher probability. While Internet distributed systems often have low standards of client reliability, they typically assume a “core” of high reliability components that is not always present in a WSN. Second, the digital signal processing (DSP) algorithms running on each node are complex and must work for a broad set of inputs that is difficult to characterize. In practice, this means that certain unexpected conditions may cause unforeseen error conditions. Fault tolerance and layers of filtering are needed to absorb these transients. Third, energy considerations, along with aforementioned properties of wireless, influence the design of networking primitives. 
These issues favor soft state and hop-by-hop protocols over end-to-end abstractions. Energy considerations may also necessitate system-wide coordination to duty cycle the node. While many of these issues are similar to those addressed by TinyOS [17], EmStar is better suited to applications built on higher performance platforms.

2 Tools and Services

EmStar incorporates many tools and services germane to the creation of WSN applications. In this section, we briefly describe these tools and services, without much implementation detail. In Section 3, we detail key building blocks used to implement these tools. Then, in Section 4 we show how the implementation makes use of the building blocks.

2.1 EmStar Tools

EmStar tools include support for deployment, simulation, emulation, and visualization of live systems, both real and simulated.

**EmSim/EmCee** Transparent simulation at varying levels of accuracy is crucial for building and deploying large systems [9] [11]. Together, EmSim and EmCee comprise several accuracy regimes. EmSim runs many virtual nodes in parallel, in a pure simulation environment that models radio and sensor channels. EmCee runs the EmSim core, but provides an interface to real low-power radios instead of a modeled channel. The array of radio transceivers used by EmCee is shown in Figure 2(b). These simulation regimes speed development and debugging; pure simulation helps to get the code logically correct, while emulation in the field helps to understand environmental dynamics before a real deployment. Simulation and emulation do not eliminate the need to debug a deployed system, but they do tend to reduce it. In all of these regimes, the EmStar source code and configuration files are identical to those in a deployed system, making it painless to transition among them during development and debugging. This also eliminates accidental code differences that can arise when running in simulation requires modifications.
Other "real-code" simulation environments include TOSSim [11] and SimOS [20].

**EmView/EmProxy** EmView is a graphical visualizer for EmStar systems. Figure 2(a) shows a screenshot of EmView displaying the real-time state of a running emulation. Through an extensible design, developers can easily add "plugins" for new applications and services. EmView uses a UDP protocol to request status updates from real or simulated nodes. Although the protocol is only best-effort, the responses are delivered with low latency, such that EmView captures real-time system dynamics. EmProxy is a server that runs on a node or as part of a simulation, and handles requests from EmView. Based on the request, EmProxy will monitor node status and report back changes in real time.

**EmRun** EmRun starts, stops, and manages running services in EmStar. It processes a config file that specifies how the EmStar services are "wired" together, and starts the system up in dependency order, maximizing parallelism. EmRun also maintains a control channel to each child process that enables it to monitor process health (rescue dead or stuck processes), initiate graceful shutdown, and receive notification when starting up that initialization is complete. Log messages emitted by EmStar services are processed centrally by EmRun and exposed to interactive clients as in-memory log rings with runtime-configurable log levels.

2.2 EmStar Services

EmStar services include support for networking, sensing, and time synchronization.

**Link and Neighborhood Estimation** Wireless channels have a significant "gray zone" where connectivity is unreliable and highly time-varying [6]. Node failures are also common. Therefore, applications are brittle when they assume the topology is pre-configured. Dynamic neighbor discovery is a basic service needed by all collaborative applications if they are to be robust. Potential collaborators must be discovered at run-time.
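EmRun's dependency-ordered, parallelism-maximizing startup can be pictured with a small topological-sort sketch (hypothetical Python, not emrun's implementation): services whose dependencies are all satisfied are started together in "waves".

```python
def startup_order(deps):
    """deps maps each service to the list of services it depends on;
    returns waves of services that can be started in parallel, in
    dependency order. Illustrative sketch of the idea only."""
    indeg = {s: len(d) for s, d in deps.items()}
    dependents = {s: [] for s in deps}
    for s, d in deps.items():
        for parent in d:
            dependents[parent].append(s)
    waves, ready = [], sorted(s for s, k in indeg.items() if k == 0)
    while ready:
        waves.append(ready)
        nxt = []
        for s in ready:
            for t in dependents[s]:
                indeg[t] -= 1
                if indeg[t] == 0:
                    nxt.append(t)
        ready = sorted(nxt)
    return waves
```

For instance, with audiod depending on syncd and neighbors depending on udp, the two independent chains start concurrently, one wave at a time.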
EmStar’s Neighbors service monitors links and provides applications with a list of active, reliable nodes. Applications are notified when the list changes so that they can take action in response to environmental changes. The LinkStats service goes one step further: in exchange for slightly more packet overhead, it provides much finer-grained reliability statistics. This can be useful, for example, to a routing algorithm that weights its path choices by link reliability.

**Time Synchronization** The ability to relate the times of events on different nodes is critical to most distributed sensing applications, especially those interested in the correlation of high-frequency phenomena. The TimeSync service provides a mechanism for converting among CPU clocks (i.e., `gettimeofday()`) on neighboring nodes. Rather than attempting to synchronize the clocks to a specific “master”, TimeSync estimates conversion parameters that enable a timestamp from one node to be interpreted on another node. TimeSync can also compute relations between the local CPU clock and other clocks in the system, such as sample indices from an ADC or the clocks of other processor modules [3].

**Routing** EmStar supports several types of routing: Flooding, Geographical, Quad-Tree, and Diffusion. One of the founding principles of EmStar is that innovation in routing and hybrid transport/routing protocols is a key research area in the development of wireless sensor network systems. EmStar “supports” several routing protocols, but it also makes it easy to invent your own. For example, the authors of Directed Diffusion [16] [18] have ported diffusion to run on top of EmStar.

### 2.3 EmStar Device Support

EmStar includes native support for a number of devices, including sensors and radio hardware.

**HostMote and MoteNIC** EmStar systems often need to act as a gateway to a network of low-energy platforms such as Mica Motes running TinyOS.
The HostMote service implements a serial line protocol between a Mote and an EmStar node. HostMote provides an interface to configure the attached Mote and an interface that demultiplexes Mote traffic to multiple clients. MoteNIC is a packet relay service built over HostMote. MoteNIC provides a standard EmStar data link interface, and pipes the traffic to software on the attached Mote that relays those packets onto the air.

**Audio Server** The Audio service provides buffered and continuous streaming interfaces to audio data sampled by sound hardware. Applications can use the Audio service to acquire historical data from specific times, or to receive a stream of data as it arrives. Through integration with the TimeSync service, an application can relate a specific series of samples on one node to a series taken at the same time on another node. The ability to acquire historical data is crucial to implementing triggering and collaboration algorithms where there may be a significant nondeterministic delay in communication due to channel contention, multihop communication, duty cycling, and other sources of delay.

## 3 Building Blocks

In this section, we describe in more detail the building blocks that enabled us to construct the EmStar suite of tools and services. EmStar systems encapsulate logically separable modules within individual processes, and enable communication among these modules through message passing via device files. This structure provides fault isolation and independence of implementation among services and applications. In principle, EmStar does not specify anything about the implementation of its modules, apart from the POSIX system call interface required to access device files. For example, most EmStar device interfaces can be used interactively from the shell, and EmStar servers could be implemented in any language that supports the system call interface.
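As a sketch of how thin that contract is, a complete client of a status-style device needs nothing beyond POSIX `open()`/`read()`/`close()`, with no EmStar-specific library. The device path would be supplied by the service; for this illustration any readable file behaves the same way:

```c
#include <assert.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

/* Read an entire report from a device (or any file) into buf using
 * nothing but POSIX calls. Returns bytes read, or -1 on error. */
static ssize_t read_report(const char *path, char *buf, size_t cap) {
    int fd = open(path, O_RDONLY);
    if (fd < 0) return -1;
    size_t used = 0;
    for (;;) {
        ssize_t n = read(fd, buf + used, cap - used);
        if (n < 0) { close(fd); return -1; }
        if (n == 0) break;          /* zero-length read marks end of report */
        used += (size_t)n;
        if (used == cap) break;     /* caller's buffer is full */
    }
    close(fd);
    return (ssize_t)used;
}
```

The shell equivalent is simply running `cat` on the device file, which is why EmStar interfaces remain browseable interactively.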
In practice, there is much to be gained from using and creating standard libraries. In the case of EmStar we have implemented these libraries in C, and we have adopted the GLib event framework to manage select() and to support timers. Using the event framework, we encapsulate complex protocol mechanisms in libraries and integrate them without explicit coordination. The decision to use C, GLib, and the POSIX interface was intended to minimize the effort required to integrate EmStar with arbitrary languages, implementation styles, and legacy codebases. We will now describe some key building blocks in more detail: the EmStar IPC mechanisms and associated libraries. We will explain them in terms of what they do, how they work, and how they are used.

### 3.1 FUSD

FUSD, the Framework for User-Space Devices, is essentially a microkernel extension to Linux. FUSD allows device-file callbacks to be proxied into user-space and implemented by user-space programs instead of kernel code. Though implemented in user-space, FUSD drivers can create device files that are semantically indistinguishable from kernel-implemented /dev files, from the point of view of the processes that use them. FUSD follows in the tradition of microkernel operating systems that implement POSIX interfaces, such as QNX [29] and GNU HURD [25]. As we will describe in later sections, this capability is used by EmStar modules both for communication with other modules and for communication with users. Of course, many other IPC methods exist in Linux, including sockets, message queues, and named pipes. We have found a number of compelling advantages in using user-space device drivers for IPC among EmStar processes. For example, system call return values come from the EmStar processes themselves, not the kernel; a successful write() guarantees that the data has reached the application.
Traditional IPC has much weaker semantics, where a successful write() means only that the data has been accepted into a kernel buffer, not that it has been read or acknowledged by an application. FUSD-based IPC obviates the need for explicit application-level acknowledgment schemes built on top of sockets or named pipes. FUSD-driven devices are a convenient way for applications to transport data, expose state, or be configured in a convenient, browseable, named hierarchy—just as the kernel itself uses the /proc filesystem. These devices can respond to system calls using custom semantics. For example, a read from a packet-interface device (Section 3.2.2) will always begin at a packet boundary. The customization of system call semantics is a particularly powerful feature, allowing surprisingly expressive APIs to be constructed. We will explore this feature further in Section 3.2.

#### 3.1.1 FUSD Implementation

The proxying of kernel system calls is implemented using a combination of a kernel module and a cooperating user-space library. The kernel module implements a device, /dev/fusd, which serves as a control channel between the two. When a user-space driver calls fusd_register(), it uses this channel to tell the FUSD kernel module the name of the device being registered. The FUSD kernel module, in turn, registers that device with the kernel proper using devfs, the Linux device filesystem. Devfs and the kernel do not know anything unusual is happening; from their point of view, the registered devices are simply being implemented by the FUSD module. FUSD drivers are conceptually similar to kernel drivers: a set of callback functions called in response to system calls made on file descriptors by user programs. In addition to the device name, fusd_register() accepts a structure full of pointers to callback functions, used in response to client system calls—for example, when another process tries to open, close, read from, or write to the driver’s device.
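To make the registration structure concrete, here is a minimal stand-in, not FUSD's real API (whose names and signatures differ): a table of callback pointers registered under a device name, with a dispatch function standing in for the kernel module's proxying of a client read():

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>
#include <sys/types.h>

/* Illustrative callback table for a driver; field names are assumptions. */
struct dev_ops {
    int     (*open)(void);
    ssize_t (*read)(char *buf, size_t len);
};

/* A /dev/zero-style driver: open always succeeds, read returns zeroes. */
static int zero_open(void) { return 0; }
static ssize_t zero_read(char *buf, size_t len) {
    memset(buf, 0, len);
    return (ssize_t)len;
}

/* Stand-in for fusd_register(): real FUSD sends the name over /dev/fusd;
 * here we just record the callbacks. */
static struct { const char *name; struct dev_ops ops; } g_dev;
static void register_device(const char *name, struct dev_ops ops) {
    g_dev.name = name;
    g_dev.ops  = ops;
}

/* Stand-in for the kernel-side dispatch of a client's read(). */
static ssize_t dispatch_read(char *buf, size_t len) {
    return g_dev.ops.read(buf, len);
}
```

In real FUSD the dispatch crosses the user/kernel boundary twice, which is exactly the overhead measured in Section 3.1.2.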
The callback functions are generally written to conform to the standard definitions of POSIX system call behavior. In many ways, the user-space FUSD callback functions are identical to their kernel counterparts. When a client executes a system call on a FUSD-managed device (e.g., open() or read()), the kernel activates a callback in the FUSD kernel module. The module blocks the calling process, marshals the arguments of the system call, and sends a message to the user-space driver managing the target device. In user-space, the library half of FUSD unmarshals the message and calls the user-space callback that the FUSD driver passed to fusd_register(). When that user-space callback returns a value, the process happens in reverse: the return value and its side-effects are marshaled by the library and sent to the kernel. The FUSD kernel module unmarshals the message, matches it with the corresponding outstanding request, and completes the system call. The calling process is completely unaware of this trickery; it simply enters the kernel once, blocks, unblocks, and returns from the system call—just as it would for a system call to a kernel-managed device.

One of the primary design goals of FUSD is stability. A FUSD driver cannot corrupt or crash any other part of the system, either due to error or malice. Of course, a buggy driver may corrupt itself (e.g., due to a buffer overrun). However, strict error checking is implemented at the user/kernel boundary, which prevents drivers from corrupting the kernel or any other user-space process—including other FUSD drivers, and even the processes using the devices provided by the errant driver.

Figure 3: Throughput comparison of FUSD and in-kernel implementations of /dev/zero. The test timed a read of 1 GB of data from each test device on a 2.8 GHz Xeon, for both 2.4 and 2.6 kernels. We tested read() sizes ranging from 64 bytes to 64 Kbytes.
Larger read sizes achieve higher throughput because the cost of a system call is amortized over more data.

#### 3.1.2 FUSD Performance

While FUSD has many advantages, the performance of drivers written using FUSD suffers relative to an in-kernel implementation. To quantify the costs of FUSD, we compared the performance of FUSD and in-kernel implementations of the /dev/zero device in Linux. To implement /dev/zero using FUSD, we implemented a server with a read() handler that returned a zeroed buffer of the requested length. The in-kernel implementation implemented the same read() handler directly in the kernel. Figure 3 shows the results of our experiment, running on a 2.8 GHz Xeon. The figure shows that for small reads, FUSD is about 17x slower than an in-kernel implementation, while for long reads, FUSD is only about 3x slower. This reduction in performance is a combination of two independent sources of overhead.

The first source of overhead is the additional system call overhead and scheduling latency incurred when FUSD proxies the client’s system call out to the user-space server. For each read() call by a client process, the user-space server first is scheduled, and then must itself call read() once to retrieve the marshalled system call, and must call writev() once to return the response with the filled data buffer. This additional per-call latency dominates for small data transfers. The second source of overhead is an additional data copy. Where the native implementation only copies the response data back to the client, FUSD copies the response data twice: once to copy it from the user-space server, and again to copy it back to the client. This cost dominates for large data transfers. In our experiments, we tested both the 2.6 and 2.4 kernels, and found that 2.6 kernels yielded an improvement for smaller transfer sizes, where the 2.6 kernel’s lower per-call overhead matters most.

### 3.2 Device Patterns

Using FUSD, it is possible to implement character devices with almost arbitrary semantics.
FUSD itself does not enforce any restrictions on the semantics of system calls, other than those needed to maintain fault isolation between the client, server, and kernel. While this absence of restriction makes FUSD a very powerful tool, we have found that in practice the interface needs of most applications fall into well-defined classes, which we term Device Patterns. Device Patterns factor out the device semantics common to a class of interfaces, while leaving the rest to be customized in the implementation of the service. The EmStar device patterns are implemented by libraries that hook into the GLib event framework. The libraries encapsulate the detailed interface to FUSD, leaving the service to provide the configuration parameters and callback functions that tailor the semantics of the device to fit the application. For example, while the Status Device library defines the mechanism of handling each read(), it calls back to the application to represent its current “status” as data.

Relative to other approaches such as log files and status files, a key property of EmStar device patterns is their active nature. For example, the Logring Device pattern creates a device that appears to be a regular log file, but always contains only the most recent log messages, followed by a stream of new messages as they arrive. The Status Device pattern appears to be a file that always contains the most recent state of the service providing it. However, most status devices also support poll()-based notification of changes to the state. The following sections describe the Device Patterns defined within EmStar. Most of these patterns were discovered during the development of services that needed them, and later factored out into libraries. In some cases, several similar instances were discovered, and the various features were amalgamated into a single pattern.

#### 3.2.1 Status Device

The Status Device pattern provides a device that reports the current state of a module.
The exact semantics of “state” and its representation in both human-readable and binary forms are determined by the service. Status Devices are used for many purposes, from the output of a neighbor discovery service to the current configuration and packet transfer statistics for a radio link. Because they are so easy to add, Status Devices are often the most convenient way to instrument a program for debugging purposes, such as the output of the Neighbors service and the packet reception statistics for links.

Status Devices support both human-readable and binary representations, through two independent callbacks implemented by the service. Since the devices default to ASCII mode on open(), programs such as cat will read a human-readable representation. Alternatively, a client can put the device into binary mode using a special ioctl() call, after which the device will produce output formatted in service-specific structs. For programmatic use, binary mode is preferable for both convenience and compactness.

Status Devices support traditional read-until-EOF semantics. That is, a status report can be any size, and its end is indicated by a zero-length read. But, in a slight break from traditional POSIX semantics, a client can keep a Status Device open after EOF and use poll() to receive notification when the status changes. When the service triggers notification, each client will see its device become readable and may then read a new status report. This process highlights a key property of the status device: while every new report is guaranteed to be the current state, a client is not guaranteed to see every intermediate state transition. The corollary is that if no clients care about the state, no work is done to compute it. Applications that desire queue semantics should use the Packet Device pattern (described in Section 3.2.2). Like many EmStar device patterns, the Status Device supports multiple concurrent clients.
Intended to support one-to-many status reporting, this feature has the interesting side effect of increasing system transparency. A new client that opens the device for debugging or monitoring purposes will observe the same sequence of state changes as any other client, effectively snooping on the “traffic” from that service to its clients. The ability to do this interactively is a powerful development and troubleshooting tool. A Status Device can implement an optional write() handler, which can be used to configure client-specific state such as options or filters. For example, a routing protocol that maintained multiple routing trees might expose its routing tables as a status device that was client-configurable to select only one of the trees.

#### 3.2.2 Packet Device

The Packet Device pattern provides a read/write device with a queued, multi-client packet interface. This pattern is generally intended for packet data, such as the interface to a radio, a fragmentation service, or a routing service, but it is also convenient for many other interfaces where queue semantics are desired. Reads and writes to a Packet Device must transfer a complete packet in each system call. If read() is not supplied with a large enough buffer to contain the packet, the packet will be truncated. A Packet Device may be used in either a blocking or poll()-driven mode. In poll(), readable means there is at least one packet in its input queue, and writable means that a previously filled queue has dropped below half full.

Packet Device supports per-client input and output queues with client-configurable lengths. When at least one client’s output queue contains data, the Packet Device processes the client queues serially in round-robin order, and presents the server with one packet at a time. This supports the common case of servers that are controlling access to a rate-limited serial channel. To deliver a packet to clients, the server must call into the Packet Device library.
Packets can be delivered to individual clients, but the common case is to deliver the packet to all clients, subject to a client-specified filter. This method enhances the transparency of the system by enabling a “promiscuous” client to see all traffic passing through the device.

#### 3.2.3 Command Device

The Command Device pattern provides an interface similar to the writable entries in the Linux /proc filesystem, which enable user processes to modify configurations and trigger actions. In response to a write(), the provider of the device processes and executes the command, and indicates any problem with the command by returning an error code. Command Device does not support any form of delayed or asynchronous return to the client. While Command Devices can accept arbitrary binary data, they typically parse a simple ASCII command format. Using ASCII enables interactivity from the shell and often makes client code more readable. Using a binary structure might be slightly more efficient, but performance is not a concern for low-rate configuration changes.

The Command Device pattern also includes a read() handler, which is typically used to report “usage” information. Thus, an interactive user can get a command summary using cat and then issue the command using echo. Alternatively, the Command Device may report state information in response to a read. This behavior would be more in keeping with the style used in the /proc filesystem, and is explicitly implemented in a specialization of Command Device called the Options Device pattern.

#### 3.2.4 Query Device

The Device Patterns we have covered up to now provide useful semantics, but none of them really provides the semantics of RPC. To address this, the Query Device pattern implements transactional, request/response semantics.
To execute a transaction, a client first opens the device and writes the request data. Then, the client uses poll() to wait for the file to become readable, and reads back the response in the same way as reading a Status Device. For those services that provide human-readable interfaces, we use a universal client called echocat that performs these steps and reports the output.

Figure 4: Block diagram of the (a) Status and (b) Packet Device patterns. In the Packet Device diagram, the “F” boxes are client-configurable filters, and the curved arrows from Client1 represent ioctl()-based configuration of queue lengths and message filtering. Trapezoid boxes represent multiplexing of clients.

It is interesting to note that the Query Device was not one of the first device types implemented; rather, most configuration interfaces in EmStar have been implemented as separate Status and Command devices. In practice, any given configurable service will have many clients that need to be apprised of its current configuration, independent of whether they need to change the configuration.
This is exacerbated by the high level of dynamics in sensor network applications. Furthermore, to build more robust systems we often use soft state to store configurations: the current configuration is periodically read and then modified if necessary. The asynchronous Command/Status approach achieves these objectives while addressing a wide range of potential faults.

To the service implementing a Query Device, this pattern offers a simple, transaction-oriented interface. The service defines a callback to handle new transactions. Queries from the client are queued and passed serially to the transaction-processing callback, similar to the way the output queues are handled in a Packet Device. If the transaction is not complete when the callback returns, it can be completed asynchronously. At the time of completion, a response is reported to the device library, which then makes it available to the client. The service may also optionally provide a callback to supply usage information, in the event that the client reads the device before any query has been submitted.

Clients of a Query Device are normally serviced in round-robin order. However, some applications need to allow a client to “lock” the device and perform several back-to-back transactions. The service may choose to give a current client the “lock”, with an optional timeout. The lock will be broken if the timeout expires, or if the client with the lock closes its file descriptor.

### 3.3 Domain-Specific Interfaces

In Section 3.2 we described several device patterns: generally useful primitives that can be applied to a wide variety of purposes. In this section, we describe a few examples of more domain-specific interfaces that are composed from device patterns, but are designed to support the implementation of specific types of services.

#### 3.3.1 Data Link Interface

The Data Link interface is a specification of a standard interface for network stack modules.
The Data Link interface is composed of three device files: data, command, and status. These three interfaces appear together in a directory named for the specific stack module. The data device is a Packet Device interface that is used to exchange packets with the network. All packets transmitted on this interface begin with a standard link header that specifies common fields. This link header masks certain cosmetic differences in the actual over-the-air headers used by different MAC layers, such as the Berkeley MAC [17] and SMAC [22] layers supported on Mica Motes.

The command and status devices provide asynchronous access to the configuration of a stack module. The status device reports the current configuration of the module (such as its channel, sleep state, link address, etc.) as well as the latest packet transfer and error statistics. The command device is used to issue configuration commands, for example to set the channel or sleep state. The set of valid commands and the set of values reported in status vary with the underlying capabilities of the hardware. However, the binary format of the status output is standard across all modules (currently, the union of all features).

Several “link drivers” have been implemented in EmStar to provide interfaces to radio link hardware, including 802.11 and several flavors of the Mica Mote. The 802.11 driver overlays the socket interface, sending and receiving packets through the Linux network stack. Two versions of the Mote driver exist: one that supports the standard Berkeley MAC and one that supports SMAC. Because all of these drivers conform to the link interface spec, some applications can work more or less transparently over different physical radio hardware. In the event that an application needs information about the radio layer (e.g., the nominal link capacity), that information is available from the status device.
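The paper does not give the layout of the standard link header, but the idea, one common header prepended to every packet regardless of the underlying MAC, can be sketched as follows. All field names and widths here are assumptions chosen only for illustration:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical standard link header; the real EmStar field set differs. */
struct link_hdr {
    uint16_t src;    /* link-layer source address      */
    uint16_t dst;    /* link-layer destination address */
    uint8_t  type;   /* demultiplexing key             */
    uint8_t  len;    /* payload length in bytes        */
};

/* Prepend the common header to a payload, as a link driver might do
 * before translating to a MAC-specific over-the-air format.
 * Returns the total frame length written to out. */
static size_t frame_packet(uint8_t *out, struct link_hdr h,
                           const uint8_t *payload) {
    memcpy(out, &h, sizeof h);
    memcpy(out + sizeof h, payload, h.len);
    return sizeof h + h.len;
}
```

Because every driver accepts the same header, a pass-through module such as LinkStats can inspect or rewrite it without knowing which radio sits underneath.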
In addition to providing support for multiple underlying radio types, the standard Data Link interface enables a variety of useful “pass-through” stack modules and routing modules. Two standard modules in EmStar network stacks are LinkStats and Fragmentation. Both of these sit between a client and an underlying radio driver module, transparently to the client. In addition to passing data through, they proxy and modify status information, for example updating the MTU specification.

Figure 6: Measurements of the EmStar stack on a 700 MHz Pentium III running the 2.4.20 kernel. The throughput graph shows the performance of a single process sending at maximum rate over a 100 Mbit Ethernet, as a function of packet length, through different EmStar stacks. The solid curve represents link saturation, while the other curves compare the performance of sending directly to a socket with that of sending through additional layers. The error bars are 95% confidence intervals. The latency graph shows the average round-trip delay of a ping message over the loopback interface, as a function of packet length, through different EmStar stacks. Both graphs show that performance is dominated by per-packet overhead rather than data transfer, consistent with previous results about FUSD.

#### 3.3.2 Cost Analysis of the Data Link Interface

Our discussion up to this point has yet to address the cost of this architecture. In order to quantify some of these costs, we performed a series of experiments, the results of which are shown in Figure 6. We found that while our architecture introduces a measurable increase in latency and decrease in throughput relative to a highly integrated and optimized solution, these costs have a negligible impact when applied to a low-bandwidth communications channel. This is an important case, since EmStar is intended for WSN applications, which are typically designed to have a high ratio of computation to communication.
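The qualitative claim that per-packet overhead dominates until large packets amortize it follows from a simple cost model. The constants below are illustrative round numbers, not measured EmStar values:

```c
#include <assert.h>

/* Toy amortization model: sending one packet costs a fixed per-packet
 * overhead plus a per-byte transfer cost. Constants are illustrative:
 * 60 us fixed cost, 0.08 us/byte (i.e. a 100 Mbit/s link). */
#define OVERHEAD_US  60.0
#define US_PER_BYTE   0.08

/* Effective throughput, in bytes per microsecond, for one packet size. */
static double throughput(double pkt_bytes) {
    return pkt_bytes / (OVERHEAD_US + pkt_bytes * US_PER_BYTE);
}
```

With these illustrative numbers, a 64-byte packet achieves well under 10% of the link rate while a 64 KB packet exceeds 95%, the same shape as the curves in Figure 6.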
To assess the costs of EmStar, we measured the costs incurred by layering additional modules over an EmStar link device. The udp-raw curves represent a non-EmStar benchmark, in which we used a UDP socket directly. The udp-dev curves represent a minimal EmStar configuration, in which we used the EmStar UDP Link device. For a two-layer stack, we added the EmStar LinkStats module, represented by the +linkstats curves. For a three-layer stack, we added a Fragmentation module over LinkStats, shown by the +frag curves.

Our first experiment characterized the cost of EmStar in terms of throughput. In Figure 6(a), our test application sent UDP packets as quickly as possible over a 100 Mbit Ethernet channel. We ran this application over our four configurations, comparing direct sends to a socket with the three EmStar configurations. For each configuration, the time required to send 1000 packets was measured, and the results of 10 such trials were averaged. The graph shows that per-packet overhead prevents the application from saturating the link until larger packet sizes sufficiently amortize the per-packet costs. Per-packet costs include scheduling latency and system call overhead, while message-passing across the user-kernel boundary results in additional per-byte costs.

Our second experiment characterized the cost of EmStar in terms of latency. In Figure 6(b), our test application sent UDP “ping” packets over the loopback interface to a ping replier on the same machine. We measured the round-trip times for 1000 packets and averaged them to estimate the latency for our four configurations. Since the latency over loopback is negligible (shown in the “udp-raw” curve), all of the measured latency represents EmStar overhead. In each case, a ping round trip traverses the stack four times, and is thus approximately 4x the latency of a single traversal.
The data show that crossing an EmStar interface costs about 66 microseconds on this architecture, without a strong dependence on the length of the message being passed. While these experiments show definite costs to the EmStar architecture, these costs are less critical for WSN applications, where communications channels have lower bandwidths and higher latency relative to the rate of local processing. For example, many of our applications use a Mote as a radio interface, which has a maximum bandwidth of about 19.2 Kbit/sec and incurs a latency of 125 milliseconds to transmit a 200-byte packet over serial to the Mote and then over the channel. Given this type of interface, the additional latency and bandwidth costs of EmStar are negligible.

#### 3.3.3 Sensor Device

Two of the applications that drove the development of EmStar centered around the acquisition and processing of audio data. One application, a ranging and localization system [15], extracts and processes audio clips from a specific time in the past. The other, a continuous frog-call detection and localization system [8], receives data in a continuous stream. Both applications needed to be able to correlate time-series data captured on a distributed set of nodes, so timing relationships among the nodes needed to be maintained.

The Sensor Device interface encapsulates a ring buffer that stores a history of sampled data, and integrates with the EmStar Time Synch service to enable clients to relate local sensor data to sensor data from other nodes. A client of the sensor device can open the device and issue a request for a range of samples. When the sample data is captured, the client is notified and the data is streamed back to the client as it continues to arrive. Keeping a history of recent sensor data and being able to relate sample timing across the network is critical to many sensor network applications.
By retaining a history of sampled data, it is much easier to implement applications where an event detected on one node triggers further investigation and sensing at other nodes. Without local buffering, the variance in multi-hop communication times makes it difficult to abstract the triggered application from the communications stack.

### 3.4 EmStar Events and Client APIs

One of the benefits of the EmStar design is that services and applications are separate processes and communicate through POSIX system calls. As such, EmStar clients and applications can be implemented in a wide variety of languages and styles. However, a large part of the convenience of EmStar as a development environment comes from a set of helper libraries that improve the elegance and simplicity of building robust applications. In Section 3.2 we noted that an important part of device patterns is the library that implements them on the service side. Most device patterns also include a client-side “API” library that provides basic utility functions, GLib-compatible notification interfaces, and a crashproofing feature intended to prevent cascading failures.

Crashproofing is intended to prevent the failure of a lower-level service from causing exceptions in clients that would lead them to abort. It achieves this by encapsulating the mechanism required to open and configure the device, and automatically triggering that mechanism to re-open the device whenever it closes unexpectedly. A client’s use of crashproof devices is completely transparent. The client constructs a structure specifying the device name, a handler callback, and the client configuration, including desired queue lengths, filters, etc. Then, the client calls a constructor function that opens and configures the device, and starts watching it. In the event of a crash and re-open, the information originally provided by the client will be used to reconfigure the new descriptor.
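The crashproofing mechanism might be sketched as follows. This is a simplification with hypothetical names; a real EmStar client library would also re-apply filters, queue lengths, and other configuration on re-open:

```c
#include <assert.h>
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

/* Hypothetical crashproof-device handle: remembers everything needed to
 * re-open the descriptor after the service providing it crashes. */
struct crashproof_dev {
    const char *path;   /* device name supplied by the client      */
    int fd;             /* current descriptor, -1 when closed      */
    int reopen_count;   /* how many times we recovered             */
};

static int cp_open(struct crashproof_dev *d) {
    d->fd = open(d->path, O_RDONLY);
    return d->fd < 0 ? -1 : 0;
}

/* Called when read()/poll() reports the device closed unexpectedly:
 * transparently re-open it using the originally supplied information. */
static int cp_recover(struct crashproof_dev *d) {
    if (d->fd >= 0) close(d->fd);
    d->fd = -1;
    d->reopen_count++;
    return cp_open(d);
}
```

The key design point is that recovery uses only state the client handed over at construction time, so the client code never observes the failure.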
Crashproof client libraries are supplied for both Packet and Status devices.

### 4 Examples

The last section enumerated a number of building blocks that are the foundation for the EmStar environment. In this section, we will describe how we have used them to construct several key EmStar tools and services.

#### 4.1 EmSim and EmCee

EmSim and EmCee are tools designed to simulate unmodified EmStar systems at varying points on the continuum from simulation to deployment. EmSim is a pure simulation environment, in which many virtual nodes are run in parallel, interacting with a simulated environment and radio channel. EmCee is a slightly modified version of EmSim that provides an interface to real low-power radios in place of a simulated channel. EmSim itself is made up of modules. The main EmSim module maintains a central repository for node information, initially sourced from a configuration file, and exposed as a Status Device. EmSim then launches other modules that are responsible for implementing the simulated “world model” based on the node configuration. After the world is in place, EmSim begins the simulation, starting up and shutting down virtual nodes at the appropriate times.

#### 4.1.1 Running Virtual Nodes

The uniform use of the /dev filesystem for all of our I/O and IPC leads to a very elegant mechanism for transparency between simulation, various levels of reality, and real deployments. The mechanism relies on name mangling to cause all references to /dev/* to be redirected deeper into the hierarchy, to /dev/sim/groupX/nodeY/*. This is achieved through two simple conventions. First, all EmStar modules must include the call to misc_init() early in their main() function. This function checks for certain environment variables to determine whether the module is running in “simulation mode”, and what its group and node IDs are. The second convention is to wrap every instance of a device file name with sim_path().
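The name-mangling convention can be sketched as follows. This is an illustrative Python sketch; the environment-variable names are hypothetical stand-ins for whatever misc_init() actually checks, and the real sim_path() is a C macro.

```python
import os

def sim_path(path):
    """Redirect /dev references into a per-node subtree when running in
    simulation mode; leave them untouched otherwise. Env-var names are
    hypothetical, for illustration only."""
    group = os.environ.get("EMSIM_GROUP")
    node = os.environ.get("EMSIM_NODE")
    if group is None or node is None:
        return path                      # real mode: use /dev directly
    # simulation mode: /dev/X -> /dev/sim/group<G>/node<N>/X
    assert path.startswith("/dev/")
    return "/dev/sim/group%s/node%s/%s" % (group, node, path[len("/dev/"):])

print(sim_path("/dev/link/data0"))       # /dev/link/data0 (real mode)
os.environ["EMSIM_GROUP"] = "1"
os.environ["EMSIM_NODE"] = "7"
print(sim_path("/dev/link/data0"))       # /dev/sim/group1/node7/link/data0
```

Because every device name passes through this one function, the same binary runs unmodified as a real node or as one of many virtual nodes on a single machine.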
This macro will perform name-mangling based on the information discovered in misc_init(). For simplicity, we typically include the sim_path() wrapper at the definition of device names in interface header files. This approach enables easy and transparent simulation of many nodes on the same machine. This is not the case for many other network software implementations. Whenever the system being developed relies on mechanisms inside the kernel that can’t readily be partitioned into virtual machines, it will be difficult to implement a transparent simulation. For example, ad-hoc routing code that directly configures the network interfaces and kernel routing table is very difficult to simulate transparently. While a simulation environment such as ns-2 [27] does attempt to run much of the same algorithmic code as the real system, it does so in a very intrusive, #ifdef-heavy way. This makes it cumbersome to keep the live system in sync with the ns-2 version. In contrast, EmStar modules don’t even need to be recompiled to switch from simulation to reality, and the EmStar device hierarchy provides transparency into the workings of each simulated EmStar node. However, this flexibility comes at a cost in performance. An ad-hoc routing algorithm that dragged every packet to user-space would likely suffer poorer performance.

#### 4.1.2 Simulated World Models

The capability to transparently redirect EmStar IPC channels enables us to provide a world for the simulated nodes to see, and in some cases, affect. There are many examples of network simulation environments in the networking community, some of which support radio channel modeling [27][28]. In addition, the robotics community has devoted much effort to creating world models [12]. For sensor networks, the robotic simulations are often more appropriate, because they are designed to model a system sensing the environment, and intended to test and debug control systems and behaviors that must be reactive and resilient.
The existence of EmStar device patterns simplifies the construction of simulated devices, because all of the complexity of the interface behavior can be reused. Even more important, by using the same libraries, the chances of subtle behavior differences are reduced. Typically, a “simulation module” reads the node configuration from EmSim’s Status Device and then exposes perhaps hundreds of devices, one for each node. Requests to each exposed device are processed according to a simulation of the effects of the environment, or in some cases in accordance with traces of real data. The notification channel in EmStar status devices enables EmSim to easily support configuration changes during a simulation. Updates to the central node configuration—such as changes in the position of nodes—trigger notification in the simulation modules. The modules can then read the new configuration and update their models appropriately. In addition, we can close the loop by creating a simulation module that provides an actuation interface—for example, enabling the node to move itself. In response to a request to move, this module could issue a command to EmSim to update that node’s position and notify all clients.

#### 4.1.3 Using Real Channels in the Lab

EmCee is a variant of EmSim that connects a set of virtual nodes to a set of real radio interfaces, positioned out in the world. We have two EmCee-compatible testbeds: the ceiling array and the portable array. The ceiling array is composed of 55 Crossbow Mica1 Motes, permanently attached to the ceiling of our lab on a 4-foot grid. Serial cabling runs back to two 32-port ethernet-to-serial multiplexers. The portable array is composed of 16 Crossbow Mica2 Motes and a 16-port serial multiplexer that can be taken out to the field [6]. The serial multiplexers are configured so that their serial ports appear to be normal serial devices on a Linux server (or laptop in the portable case).
To support EmCee, the HostMote and MoteNIC services support an “EmCee mode” where they open a set of serial ports specified in a config file and expose their devices within the appropriate virtual node spaces. Thus, the difference between EmSim and EmCee is minimal. Where EmSim would start up a radio channel simulator to provide virtual radio link devices, EmCee starts up the MoteNIC service in “EmCee mode”, which creates real radio link devices that map to multiplexer serial ports and thus to real Motes. Our experience with EmCee has shown it is well worth the infrastructure investment. Users have consistently observed that using real radios is substantially different from our best efforts at creating a modeled radio channel [2][6]. Even channels driven by empirical data captured using the ceiling array don’t seem to adequately capture the real dynamics. Although testing with EmCee is still not the same as a real deployment, the reduction in effort relative to a deployment far outweighs the reduction in reality for a large part of the development and testing process.

#### 4.1.4 Performance of EmSim/EmCee

Currently, an important limitation of our simulator is that it can only run in real-time, using real timers and interrupts from the underlying operating system. In contrast, a discrete-event simulator such as ns-2 runs in its own virtual time, and therefore can run for as long as necessary to complete the simulation without affecting the results. Discrete-event simulations can also be made completely deterministic, allowing the developer to more easily reproduce an intermittent bug. The real-time nature of EmSim/EmCee makes performance an important consideration. With perfect efficiency, the simulator platform would need the aggregate computational power of all simulated nodes. In reality, extra head-room is needed for the nonlinear costs of running many processes on a single computer. To test the actual efficiency, we ran test simulations on two SMP-enabled servers.
One had four 700 MHz Pentium-III processors, running Linux kernel 2.4.20. The other had two 2.8 GHz Xeon processors, with hyperthreading disabled, running Linux 2.6.3. We tested both kernels because Linux 2.6 has an “O(1) scheduler”—i.e., the 2.6 scheduler performs constant work per context switch regardless of run-queue size. 2.6 kernels also have much finer-grained locking, and thus better kernel parallelism. The FUSD kernel module also has fine-grained locking. In our initial testing, the default Linux scheduler was used; no explicit assignment of processes to CPUs was made. Each “node” consisted of two processes that exchanged data at the maximum possible rate via an EmStar Status Device. The results are in Figure 7. We draw several conclusions from the data. First, the Linux 2.6 scheduler does seem to be a win. Even with differences in CPU speed factored out, it supported much larger simulations than the 2.4 scheduler (512 vs. 128 nodes). In addition, it supported better parallelism: Linux 2.6 with 2 CPUs had, on average, 1.7 times more throughput than with a single CPU, compared to 1.5 times for Linux 2.4. However, Linux 2.6 simulations suffered much higher jitter, i.e., differences in performance from node to node. The cause of this unfairness is still under investigation. The data also emphasize the high cost of FUSD inter-process communication across processes not running on the same CPU. This can be seen in the fact that a single-node (2-process) simulation ran on a single-CPU platform at nearly twice the speed as on a 2-, 3-, or 4-CPU platform. The Linux scheduler, by default, places the Status Device client and server processes on separate CPUs if available. In applications that have a very high communication-to-computation ratio, as in our test workload, the overhead of extra CPUs is a much higher cost than the benefit of extra cycles.
However, many EmStar applications (and WSN applications in general) strive to do as much computation as possible per unit of communication, making these limitations of the SMP model a virtual non-issue in “real” simulations.

#### 4.2 EmRun

EmRun starts up, maintains, and shuts down an EmStar system according to the policy specified in a config file. There are three key points in its design: process respawn, in-memory logging, and fast startup with graceful shutdown.

**Respawn** Process respawn is neither new, nor difficult to achieve, but it is very important to an EmStar system. It is difficult to track down every bug, especially ones that occur very infrequently, such as a floating-point error processing an unusual set of data. Nonetheless, in a deployment, even infrequent crashes are still a problem. Often, process respawn is sufficient to work around the problem; eventually, the system will recover. EmStar’s process respawn is unique because it happens in the context of “Crashproofed” interfaces (Section 3.4). When an EmStar process crashes and restarts, Crashproofing prevents a ripple effect, and the system operates correctly when the process is respawned.

**In-Memory Logs** EmRun saves each process’ output to in-memory log rings that are available interactively from the /dev/emlog/* hierarchy. Unlike rotating logs, EmStar log rings never need to be switched, never grow beyond a maximum size, and always contain only recent data.

**Fast Startup** EmRun’s fast startup and graceful shutdown are critical for a system that needs to duty-cycle to conserve energy. The implementation depends on a control channel that EmStar services establish back to EmRun when they start up. EmStar services notify EmRun when their initialization is complete, signaling that they are now ready to respond to requests. The emrun_init() library function, called by the service, communicates with EmRun by writing a message to /dev/emrun/.int/control.
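The readiness-driven startup loop can be sketched as follows. This is an illustrative Python sketch with a made-up dependency graph, not EmRun's actual implementation: each service reports readiness, and dependents launch only once all of their prerequisites have reported in.

```python
# Hypothetical dependency graph: service -> list of prerequisites.
deps = {
    "fusd": [],
    "timesyncd": ["fusd"],
    "audiod": ["fusd"],
    "collab_detect": ["timesyncd", "audiod"],
}

started, ready, order = set(), set(), []

def launch_runnable():
    # Start (conceptually in parallel) every not-yet-started service
    # whose prerequisites have all reported ready.
    for svc, prereqs in deps.items():
        if svc not in started and all(p in ready for p in prereqs):
            started.add(svc)
            order.append(svc)

def emrun_init(svc):
    # A service calls this when its initialization completes; the
    # supervisor reacts by launching anything that was waiting on it.
    ready.add(svc)
    launch_runnable()

launch_runnable()          # initially only "fusd" has no prerequisites
emrun_init("fusd")         # -> timesyncd and audiod start in parallel
emrun_init("timesyncd")
emrun_init("audiod")       # -> collab_detect starts last
print(order)               # ['fusd', 'timesyncd', 'audiod', 'collab_detect']
```

The point of the feedback channel is visible here: each launch happens as soon as its prerequisites are ready, with no fixed sleeps anywhere.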
EmRun then launches other processes waiting for that service, based on a configured dependency graph. This feedback enables EmRun to start independent processes with maximal parallelism, and to wait exactly as long as it needs to wait before starting dependent processes. This scheme is far superior to the naive approach of waiting between daemon starts for pre-determined times, i.e., the ubiquitous “sleep 2” statements found in *NIX boot scripts. Various factors can make startup times difficult to predict and high in variance, such as flash filesystem garbage collection. On each boot, a static sleep value will either be too long, causing slow startup, or too short, causing services to fail when their prerequisites are not yet available.

**Graceful Shutdown** The control channel is also critical to supporting graceful shutdown. EmRun can send a message through that channel, requesting that the service shut down, saving state if needed. EmRun then waits for SIGCHLD to indicate that the service has terminated. If the process is unresponsive, it will be killed by a signal. An interesting property of the EmRun control channel is one that differentiates FUSD from other approaches. When proxying system calls to a service, FUSD includes the PID, UID, and GID of the client along with the marshalled system call. This means that EmRun can implicitly match up the client connections on the control channel to the child processes it has spawned, and reject connections from non-child processes. This property is not yet used much in EmStar, but it provides an interesting vector for customizing device behavior.

#### 4.3 Time-Synchronized Sampling in EmStar

Several of the driving applications for EmStar have involved distributed processing of high-rate audio: audible acoustic ranging, acoustic beamforming, and animal call detection are a few of the applications. We used earlier versions of EmStar to tackle a few of these problems [10][15][8].
Referring back to the animal call localization application of Figure 1, we see how the “syncd” and “audiod” services collaborate so that “collab_detect” can correlate events detected on nodes across the network. In this section, we will describe these services in more detail.

**TimeSync Between Nodes** The TimeSync service uses Reference Broadcast Synchronization (RBS) [3] to compute relationships among the CPU clocks on nodes in a given broadcast domain. This technique correlates the arrival times of broadcast packets at different nodes and uses linear regression to estimate conversion parameters among clocks that receive broadcasts in common. We chose RBS because techniques based on measuring send times, such as TPSN [14], are not generally applicable without support at the MAC layer. Requiring this support would rule out many possible radios, including 802.11 cards. A key insight in RBS is that it is better to enable conversion than to attempt to train a clock to follow some remote “master” clock. Training a clock has many negative repercussions for the design of a sampling system, caused by clock discontinuities and distortions. Thus, TimeSync is really a “time conversion” service. The output of the regression is reported through the /dev/sync/params/ticker device, in a complete listing of all known pairwise conversions. Clients of TimeSync read this device to get the latest conversion parameters, then convert times from one timebase to another. The code for reading the device and converting among clocks is implemented in a library.

**TimeSync within a Node** Many systems have more than one clock. For example, a Stargate board with an attached Mote and an audio card has three independent clocks. Thus, to compare audio time series from two independent nodes, an index in a time series must be converted first to local CPU time, then to remote CPU time, and finally to a remote audio sample index.
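The conversion chain can be illustrated numerically. This is a hedged sketch: the least-squares fit mirrors the RBS-style regression the text describes, but all clock rates, offsets, and observation values here are made up for illustration.

```python
# Fit y = a*x + b over (x, y) observation pairs by least squares,
# as an RBS-style regression over paired clock observations would.
def fit_linear(pairs):
    n = len(pairs)
    sx = sum(x for x, _ in pairs); sy = sum(y for _, y in pairs)
    sxx = sum(x * x for x, _ in pairs); sxy = sum(x * y for x, y in pairs)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return lambda x: a * x + b

# Local audio sample clock vs. local CPU clock
# (here: 48 samples per CPU-clock millisecond, starting at CPU time 100).
audio_to_cpu = fit_linear([(0, 100.0), (4800, 200.0), (9600, 300.0)])

# Local CPU clock vs. remote CPU clock (here: remote runs 5 ms ahead).
cpu_to_remote = fit_linear([(100.0, 105.0), (300.0, 305.0)])

# Chain the conversions: which remote CPU time does sample 4800 map to?
print(cpu_to_remote(audio_to_cpu(4800)))   # ~205.0
```

A third fit on the remote node (remote CPU clock to remote sample clock) would complete the chain from a local sample index all the way to a remote sample index.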
The TimeSync service provides an interface for other services to supply pair-wise observations to it, i.e., a CPU timestamp and a clock-X timestamp. This interface uses a Directory device to enable clients to create a new clock and associate it with a numeric identifier. The client then writes periodic observations of that clock to the timesync command device /dev/sync/params/command. The observations are fit using linear regression to compute a relationship between the two local clocks.

**The Audio Server** The Audio service provides a Sensor Device output. It defines a “sample clock”, which is the index of samples in a stream, and submits observations relating the sample clock to CPU time to TimeSync. A client of the Audio service can extract a sequence of data from a specific time period by first using TimeSync to convert the begin and end times to sample indices, and then placing a request to the Audio service for that sample range. Conversely, a feature detected in the streaming output at a particular sample offset can be converted to a CPU time. These clock relations can also be used to compute and correct the skew in sample rates between devices, which can otherwise cause significant problems. Generating the synch observations requires minor changes to the audio driver in the kernel. We have made patches for two audio drivers: the iPAQ built-in audio driver and the Crystal cs4281. In both cases, incoming DMA interrupts are timestamped and retrieved by the Audio service via ioctl(). While this approach makes the system harder to port to new platforms and hardware, it is a better solution for building sensing platforms. The more common solution, the “synchronized start” feature of many sound cards, has numerous drawbacks. First, it only gives you one data point for the run, whereas our technique gives you a continuous stream of points to average.
Second, it is subject to drift, and since the end is not timestamped there is no way to accurately determine the actual sample rate. Third, it forces the system to coordinate use of the audio hardware, whereas the Audio server runs continuously and allows access by multiple clients.

### 5 Design Philosophy and Aesthetics

In this section, we will describe some of the ideas behind the choices we made in the design of EmStar. These choices were motivated by the issues faced by WSNs, which have much in common with traditional distributed systems.

#### 5.1 No Local/Remote Transparency

One of the disadvantages of FUSD relative to sockets is that connections to FUSD services are always local, whereas sockets provide transparency between local and remote connections. Nonetheless, we elected to base EmStar on FUSD, because we felt that the advantages outweighed the disadvantages. The primary reason for giving up remote transparency in EmStar is that remote access is rarely transparent in WSNs. Communications links in WSNs are characterized by high or variable latency, varying link quality, evolving topologies, and generally low bandwidth. In addition, the energy cost of communication in WSNs motivates innovative protocols that minimize communications, make use of broadcast channels, tolerate high latency, and make tradeoffs explicit to the system designer. Remote communication in WSNs is demonstrably different than local communication, and very little is achieved by masking that fact. In abandoning remote transparency, the client gains the benefit of knowing that each synchronous call will be received and processed by the server with low latency. While an improperly implemented server can introduce delays, there is never a need to worry that the network might introduce unexpected delay. Requests that are known to be time-consuming can be explicitly implemented so that the results are returned asynchronously via notification (e.g., Query Device).
#### 5.2 Intra-Node Fault Tolerance

Tolerance of node and communications failures is important to the design of all distributed systems. In WSNs, node robustness takes on an even greater importance. First, the cost of replacing or repairing embedded nodes can be much higher, especially when network access to the node is unreliable or a physical journey is required—in extreme cases, nodes may be physically irretrievable. Second, many scientific applications of WSNs intend to discover new properties of their environment, which may expose the system to new inputs and exercise new bugs. We address fault tolerance within a node in several ways: EmRun respawn, "crashproofing", soft-state refresh, and transactional interface design. We discussed EmRun respawn and crashproofing in Sections 4.2 and 3.4, as a means of keeping the EmStar services running and preventing cascading failures when an underlying service fails. While soft-state and transactional design are standard techniques in distributed systems, in EmStar we apply these techniques to IPC as well. Status devices are typically used in a soft-state mode. Rather than reporting more economical "diffs", every status update reports the complete current state, leaving the client to decide how to respond based on its own state. To limit the damage caused by a missing notification signal, clients periodically request a refresh in the absence of notification. When the aggregate update rate is low, it is usually easy to make the case for trading efficiency for robustness and simplicity. Similar considerations hold in the reverse direction. Clients that push state to a service typically use transactional semantics with a soft-state refresh. Rather than allowing the client and server to get out of synch (e.g., in the event of a server restart), the client periodically resubmits its complete state to the service, enabling the service to make appropriate corrections if there is a discrepancy.
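The soft-state refresh pattern might be sketched as follows. This is an illustrative Python sketch with hypothetical names: the client periodically resubmits its complete state rather than diffs, so a restarted server reconverges without any special recovery path.

```python
class Server:
    """Toy stand-in for a service that holds client-supplied state."""
    def __init__(self):
        self.state = {}
    def submit(self, client_id, full_state):
        # Transactional: each submission replaces the client's state wholesale.
        self.state[client_id] = dict(full_state)
    def restart(self):
        self.state = {}   # a crash/respawn loses all client state

class Client:
    def __init__(self, client_id, server):
        self.client_id = client_id
        self.server = server
        self.my_state = {"filter": "audio", "queue_len": 16}
    def refresh(self):
        # Called periodically, even when nothing changed locally.
        self.server.submit(self.client_id, self.my_state)

server = Server()
client = Client("collab_detect", server)
client.refresh()
server.restart()            # server respawns; its view is empty
assert server.state == {}
client.refresh()            # the next periodic refresh reconverges it
print(server.state["collab_detect"])
```

The inefficiency of resending unchanged state buys exactly the robustness property the text argues for: neither side needs a recovery protocol beyond the ordinary refresh.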
Where the state in question is very large, there may be reason to implement a more complex scheme, but for small amounts of state, simplicity and robustness carry the day. While trading off efficiency for robustness may not be the right approach for all applications and hardware platforms, it has worked well for the applications we have built.

#### 5.3 Code Reuse

Code reuse and modularity were major design goals of EmStar. EmStar achieves reusability through disciplined design, driven by factoring useful components from existing implementations. For example, each device pattern was originally implemented as a part of several different services, and then factored out into a unified solution to a class of problems. Table 1 shows a quantitative picture of reuse. The design of EmStar services has followed the dictum "encapsulate mechanism, not policy". This approach encourages reuse, and reduces system complexity while maintaining simple interfaces between modules. EmStar implements modules as independent processes rather than as libraries, eliminating a wide variety of unanticipated interactions, thus better controlling complexity as the number of modules increases.

<table>
<thead>
<tr>
<th>Building Block</th>
<th>Server Uses</th>
<th>Client Uses</th>
</tr>
</thead>
<tbody>
<tr>
<td>Status Device and derivatives</td>
<td>40</td>
<td>22</td>
</tr>
<tr>
<td>Command Device</td>
<td>17</td>
<td>N/A</td>
</tr>
<tr>
<td>Packet Device</td>
<td>10</td>
<td>5</td>
</tr>
<tr>
<td>Data Link Interface</td>
<td>12</td>
<td>32</td>
</tr>
</tbody>
</table>

Table 1: Reuse statistics culled from LXR.

#### 5.4 Reactivity

Reactivity is one of the most interesting characteristics of WSNs. They must react to hard-to-predict changes in their environment in order to operate as designed. Often the tasks themselves require a reaction, for example a distributed control system or a distributed sensing application that triggers other sensing activities.
EmStar supports reactivity through notification interfaces in EmStar devices. Most EmStar services and applications are written in an event-driven style that lends itself to reactive design.

#### 5.5 High Visibility

While the decision to stress visibility in the EmStar design was partly motivated by aesthetics, it has paid off handsomely in overall ease of use, development, and debugging. The ability to browse the IPC interfaces in the shell, to see human-readable outputs of internal state, and in many cases to manually trigger actions makes for very convenient development of a system that could otherwise be quite cumbersome. Tools like EmView also benefit greatly from stack transparency, because EmView can snoop on traffic travelling in the middle of the stack in real time, without modifying the stack itself.

### 6 Related Work

In addition to the related work we mentioned throughout this paper, in this section we highlight the most closely related systems. The closest system to EmStar is TinyOS [17]. TinyOS addresses the same problem space, only geared to the much smaller Mote platform. As such, much TinyOS development effort must focus on reducing memory and CPU usage. By operating with fewer constraints, EmStar can focus on more complex applications and on improving robustness in the face of growing complexity. A key attribute of TinyOS that EmStar lacks is the capacity to perform system-wide compile-time optimizations. Because EmStar supports forms of dynamic binding that do not exist in TinyOS, many compile-time optimizations are ruled out. Click [19] is a modular software system designed to support implementation of routers. While Click is designed for a different application space, there are many similarities, including an emphasis on modularity. A key difference is that, like TinyOS, Click leverages language properties and static configuration to perform global optimizations.
EmStar instead supports dynamic configuration and provides greater levels of fault isolation between modules. Player/Stage [12] is a software system designed for robotics that supports “real-code” simulation. Player is based on sockets protocols, which have the advantage of remote transparency but are not browseable.

### 7 Conclusion and Future Work

We have found EmStar to be a very useful development environment for WSNs. We use EmStar at CENS in several current development efforts, including a 50-node seismic deployment and the ESS microclimate sensing system. We also support other groups using EmStar, including the NIMS [13] robotic ecology project and ISI ILENSE. Our current platform focus is the Crossbow/Intel Stargate platform, an inexpensive Linux platform based on the XScale processor. Stargates are much easier to customize than other COTS platforms such as iPAQs. We plan several extensions to EmStar, including: better integration between Motes and Microsystems based on a TinyOS “VM”, virtualization of EmSim’s clock to enable “simulation pause” and larger simulations, remote device access over local networks via sockets, and efficient support for high-bandwidth sensor interfaces such as audio, image data, and DSPs using a shared-memory data channel.

**Acknowledgements**

This work was made possible with support from The Center for Embedded Networked Sensing (CENS) under the NSF Cooperative Agreement CCR-0120778, and the UC MICRO program (grant 01-031) with matching funds from Intel Corp. Additional support came from the DARPA NEST program (the “GALORE” project, grant F33615-01-C-1906).

**References**
Getting started: performing basic operations on Beagle2

- Basics about the system
- Basics about programming environment
- Modules and Programming Environment (PrgEnv)
- How to work on the filesystem
- Description of the filesystem
- HIPAA
- Lustre
- Useful commands on lustre
- Striping
- Useful commands for striping
- How to move data to and from Beagle
- How to submit jobs
- Projects
- Basics about job submission on Beagle2
- Job Submission Best Practices
- Batch jobs
- Commands for submitting and inquiring about jobs
- PBS (batch) scripts
- Aprun
- Memory usage
- Running Swift on Beagle2
- Additional resources:
- In case you need help/support

Note: All policies and approaches are subject to changes. While we will do our best to keep users informed of such changes, it is not always possible to do so.

Basics about programming environment

The operating system on Beagle2 is the native Cray Linux Environment (CLE). On login nodes, it is very similar to a conventional Linux environment. On compute nodes it is available as:

- CLE Static (which only allows the use of statically linked software, and is the basic OS used for large simulations in "Extreme Scalability Mode" (ESM))
- CLE with Dynamic Shared Objects and Libraries (DSL) — see How to develop/port programs for/to Beagle

The **xtnodestat** command shows:

- The current configuration of Beagle2’s nodes: which blades are compute, which are service, and where they are located in the machine.
- Information about the current workload of the machine.

Type **man xtnodestat** for more details. Please note that nodes shown as free by xtnodestat are not always available for your use.

Modules and Programming Environment (PrgEnv)

Programming environments support the creation, modification, execution and debugging of programs. The programming environments available on Beagle2 are the Cray programming environment and the GNU programming environment.
The programming environment is managed by the **module** command. To learn more about modules, see this page; when working with the Cray Linux Environment, you will usually have to load a module (see the Environment User's Guide).

A module is a "package" on a Cray system that enables you to dynamically modify your user environment by loading or unloading "modulefiles". A modulefile contains the commands that configure the shell environment for a particular compiler or library. Modules allow multiple versions of software to be installed simultaneously; the user chooses which version to use when compiling code or running jobs.

The default compiler environment on Beagle2 is PrgEnv-cray. If you want to switch to PrgEnv-gnu:

module swap PrgEnv-cray PrgEnv-gnu

The module command provides a number of capabilities to the user, including:

- `module load <name>`: load a module.
- `module unload <name>`: unload a module.
- `module swap <old> <new>`: unload one modulefile and load another (`module switch` has the same effect).
- `module list`: list the modulefiles that are currently loaded.
- `module avail`: list all modulefiles that can be loaded on the system.
- `module use <dir>`: prepend directory dir to the MODULEPATH environment variable, adding it to the list of places the module command looks for modulefiles.
- `module use --append <dir>`: append dir to MODULEPATH.
- `module unuse <dir>`: remove directory dir from MODULEPATH.

Note: in situations where a new compiler has to be used, `module swap` is usually the more appropriate strategy.

The modules a user has loaded persist only as long as the login session. To add modules permanently to your environment, add module commands to a file in your home directory called .modules.
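As an aside, `module use` and `module use --append` simply edit the colon-separated MODULEPATH search list. The effect can be sketched in plain shell (the directory names below are made up for illustration; on Beagle2 the `module` command does this for you):

```shell
# Simulate what `module use dir` and `module use --append dir` do to MODULEPATH.
MODULEPATH="/opt/modulefiles"
dir="/home/user/privatemodules"

MODULEPATH="$dir:$MODULEPATH"            # module use dir          (prepend)
echo "$MODULEPATH"

MODULEPATH="$MODULEPATH:/soft/modules"   # module use --append dir (append)
echo "$MODULEPATH"
```

Prepended directories are searched first, which is why `module use` (without `--append`) lets your private modulefiles shadow system ones of the same name.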
For example, if you want to always use the GNU programming environment you would add:

ams@login1:~> cat ~/.modules
module unload PrgEnv-cray
module load PrgEnv-gnu

How to work on the filesystem

Description of the filesystem

Beagle2 currently mounts the following filesystems:

/home: CI home directories (read-only on compute nodes; will soon be removed)
- Reliable for small storage of data like source code, shell scripts, etc.
- Slow; not tuned for high-performance parallel jobs.
- Should not be used for calculations on Beagle2!
- 10 GB quotas, and they are enforced!
- Referenced by the environment variable $HOME.

/lustre/beagle2: local Lustre filesystem (this is where batch jobs should do most of their I/O)
- A parallel distributed filesystem.
- Scratch filesystem. NO BACKUP.
- Files in Lustre are subject to purging. It is the users' responsibility to protect themselves from data loss!
- Referenced by the environment variable $LUSTREDIR.
- 450 TB of usable space.
- While there are currently no restrictions in terms of usage and capacity, these conditions will likely change.
- Allows users to control the striping parameters when storing data on the filesystem. Tuning these parameters correctly can lead to better performance (see below).

/soft: local Cray software repository (read-only)

NOTE: Home directories are not mounted on the compute nodes (for performance reasons), so you will always want to work out of the Lustre scratch filesystem (/lustre/beagle2/<your_user_name>). Make sure to copy everything you are working on out of your home directory to your Lustre directory, and work out of that Lustre directory whenever you are on Beagle2.

/ufs: internal filesystem for the ALPS scheduler (read-write)

/tmp, /var, /opt, /dev and so on are in general read-only from any node and usually more restricted on the compute nodes.

NOTE: The CI Systems Group reserves the right to rebuild the Lustre filesystem at any time.
While best efforts are always made to recover data, the primary focus will be to return the filesystem to availability as quickly as possible. Advance notice will be given as early as possible. Aside from unexpected disaster recovery, all attempts will be made to limit outages to necessary maintenance, reconfiguration, and reliability testing. If you encounter undesirable behavior with the Lustre filesystem, please contact beagle-support@ci.uchicago.edu. (It is assumed that the filesystem will need some tuning as its use and activity increase.)

Research and HIPAA Privacy Protections

Content Authors
- Reid Cushman, PhD, CITI Program

This module is for educational purposes only. It is not designed to provide legal advice or legal guidance. You should consult with your organization's attorneys if you have questions or concerns about the relevant laws and regulations discussed in this module.

HIPAA's Regulatory Scope

HIPAA's protections focus on "individually identifiable health information," which HIPAA defines as information in "any form or medium" that "[r]elates to the past, present, or future physical or mental health or condition of an individual; the provision of healthcare to an individual; or the past, present, or future payment for the provision of health care to an individual" (Security and Privacy 2013). HIPAA's protections reach only a subset of individually identifiable health information -- formally called protected health information, or simply "PHI" -- created in or by what HIPAA calls covered entities. Covered entities include individual healthcare providers, healthcare provider organizations, health plans, and health information clearinghouses that engage in electronic healthcare transactions (see the Health and Human Services Covered Entity Decision Charts). HIPAA's protections for PHI extend to non-U.S. citizens' data as well. Some identifiable health information used for research originates outside of covered entities, and so may not be covered by HIPAA.
However, you must check with your organization's privacy authorities before assuming your situation falls outside HIPAA's scope.

What Kinds of Users and Uses Are Covered?

HIPAA regulations set requirements for use and disclosure of PHI by covered entities, and by extension on all members of a covered entity's workforce who have contact with PHI. HIPAA's data protection requirements also apply "in the same manner" to business associates (and by extension to the workforce of such business associates) that perform functions using PHI on a covered entity's behalf. Researchers may be part of the workforce of a covered entity, or may be covered entities themselves if they are also healthcare providers. If so, they are directly affected by HIPAA's research rules. Researchers who meet neither of these conditions are still indirectly affected by HIPAA rules if a covered entity is the source of their data and those data meet the definition of PHI.

HIPAA's rules on use and disclosure are generally "purpose-based" -- that is, the intended use sets the rules more than the type of data itself. The research rules discussed here are different from those for, say, treatment or treatment-related payments (relatively liberal), or for marketing or fundraising (relatively strict). A few types of data, such as psychotherapy notes, receive special protection under HIPAA. State laws also often have many categories of data with special protections, with which you should be familiar (or be in contact with an organizational official who has that knowledge).

What Constitutes "Research"?

Like the Common Rule, HIPAA defines research as a "systematic investigation, including research development, testing, and evaluation, designed to develop and contribute to generalizable knowledge" (Protection of Human Subjects 2009; Security and Privacy 2013). Note that some kinds of investigative activities that use patient data are excluded by this definition. For example:

1.
Quality assessment and improvement, including outcomes evaluation and development of clinical guidelines or protocols, fall under the category of healthcare operations under HIPAA, provided the primary aim is not obtaining generalizable knowledge.

2. Activities that aim primarily for generalizable knowledge of population health can fall into the category of public health activity under HIPAA.

The regulations are complex. So, as with covered entity status, a determination by an organization's IRB, designated privacy official(s), or legal counsel is usually required to establish that an activity is "not research" and therefore subject to different HIPAA rules.

Who Enforces the HIPAA Research Protections?

A covered entity may choose to rely on an IRB to assess compliance with both the FDA and Common Rule requirements and HIPAA research requirements. Alternatively, HIPAA provides that covered entities may create a Privacy Board to handle some research-related issues, notably determinations about eligibility for waivers, alterations, and exemptions from authorization processes. A covered entity may also leave some decisions about compliance with the research provisions of HIPAA to its designated privacy officer. It is critical that you understand the allocation of responsibilities at your organization.

Research subjects, like patients generally, have recourse both to your organization's authorities and to federal and state agencies in the event they wish to file complaints about, or have questions regarding, an organization's protective efforts. As with any other planned activity related to protected health information, research must be mentioned in a privacy notice that HIPAA requires covered entities to provide to their patients/customers. The privacy notice must include the ways in which data subjects may register complaints and report problems, either locally or with federal authorities.
Every researcher should be familiar with their organization's privacy notice, particularly the persons or departments it identifies as enforcement authorities for the organization.

HIPAA Research-Related Rules

If the data in question meet the definition of PHI and are being used for purposes that fall within HIPAA's definition of research, HIPAA generally requires explicit written authorization (consent) from the data subject for research uses. However, HIPAA allows research-related access to individuals' identifiable health data without authorization under certain circumstances:

1. The research involves only minimal risk.
2. The data are used solely for activities preparatory to research.
3. Only deceased individuals' information is used.
4. It is "grandfathered" research where all legal permissions were in place before HIPAA took effect.

Data that do not identify individuals can be used for research without specific authorization if:

1. Only fully de-identified data are used.
2. A "limited data set" is used, under an approved "data use agreement."

Each of these conditions is described in the sections below.

Waivers or Alterations of the Authorization Requirement Due to Minimal Risk

An organization's IRB or Privacy Board (and in some organizations a designated privacy official) may determine that a waiver or alteration of the authorization requirement is appropriate. The conditions are modeled on the criteria for a waiver of informed consent in the Common Rule. Use or disclosure of the PHI must involve no more than minimal risk to the privacy of the research subjects, and include the following elements:

- An adequate plan to protect any data identifiers from improper use and disclosure.
- An adequate plan to destroy data identifiers at the earliest opportunity consistent with conduct of the research (unless there is a health or research justification for retaining the identifiers, or such retention is otherwise required by law).
- Adequate written assurances that the PHI will not be reused or disclosed to any other individual or entity, except as required by law, for authorized oversight of the research project, or for other research for which the use or disclosure of PHI would be permitted by HIPAA.
- The research could not practicably be conducted without access to and use of the PHI.
- The research could not practicably be conducted without the waiver or alteration of the authorization.

More about what counts as a data identifier is provided in the sections below on de-identified data and limited data sets.

Activities Preparatory to Research; Decedents' Information Exceptions

HIPAA provides two more exceptions to the authorization requirement for identifiable data:

- Where the PHI will be used solely for reviews preparatory to research (for example, for protocol development or identifying potential subjects) and will not leave the covered entity.
- Where the PHI refers solely to deceased individuals (the covered entity may ask for documentation of death of all data subjects).

In each case, the researcher must make a written or oral representation to the covered entity's designated officials -- someone from the IRB, the Privacy Board, or a privacy officer/designee -- that such access is necessary for the research purposes; those officials then determine the appropriateness of the request.

Grandfathered Research

If all informed consents and other legal permissions required at the time were in place before HIPAA took effect (April 2003 in most cases), and have not changed since, a new HIPAA authorization is not required even for identified data. Obviously, this is no longer a commonly used pathway to bypass authorizations.

De-identified Data

A researcher may use fully de-identified health data without any authorization from individual data subjects.
As the name implies, de-identified information must have all direct and indirect identifiers removed, to eliminate (or at least make highly improbable) re-identification using statistical techniques. De-identified information is no longer considered PHI, because by definition it is no longer individually identifiable. HHS issued its Guidance Regarding Methods for De-identification of Protected Health Information in 2012. This guidance provides a detailed description of alternative methods, and should be considered required reading for anyone contemplating a de-identification strategy.

Under the HIPAA regulations, successful de-identification may be based on an "Expert Determination" by an "individual with appropriate knowledge" of statistical techniques who has analyzed the data set and can attest that the risk of re-identification is "very small." ("Very small" is not defined in the regulations.) Alternatively, covered entities may use the "Safe Harbor" method of removing 18 types of identifying elements specified in the HIPAA regulations. In either case, the covered entity must have no actual knowledge that re-identification is possible or likely, for example by linking to other known data sets.

Limited Data Sets and Data Use Agreements

De-identification trades privacy protection for research productivity. Sometimes the trade-off is too steep, and a fully de-identified data set will not meet a research need. As an alternative, a covered entity may disclose PHI in a limited data set (LDS) to a researcher who has entered into an appropriate data use agreement. An LDS must have all direct identifiers removed; however, it may still include information that could "indirectly" identify the subject using statistical methods.
That is, the disclosure risk is greater than "very small." The data use agreement for an LDS must:

- Delineate the permitted uses and disclosures of such information by the recipient, consistent with the purposes of the research;
- Limit the individuals who can use or receive the data; and
- Require the recipient to agree not to re-identify the data or contact the individuals.

**Minimum Necessary Uses and Disclosures**

Uses and disclosures of data for research that are allowed to bypass the authorization requirement are still subject to the **minimum necessary standard** -- that is, the uses/disclosures must be no more than the minimum required for the described research purpose. A covered entity may rely on a researcher's documentation -- or the assessment of an IRB or Privacy Board -- that the information requested is the minimum necessary for the research purpose. By contrast, research information obtained using an authorization is not bound by the minimum necessary standard, on the theory that the data subject has given explicit permission in accordance with the signed authorization. However, be aware that while HIPAA may not require a minimum necessary justification at all times, an IRB's evaluation of risks and burdens on human research subjects arguably does.

**Disclosure Accounting**

Individuals whose health information is covered by HIPAA have the right to an "accounting of disclosures" of their PHI. In this context, a "disclosure" occurs when PHI is communicated to an outside individual or entity, including another covered entity. Access within the covered entity -- for example, by members of a research team who are all part of the same organization's workforce -- is considered a "use," not a disclosure. There is no accounting requirement for these internal uses for research.
In addition to being limited to external disclosures, disclosure accounting is not required for:

- Disclosures made under authority of a consent/authorization, on the theory that individuals are aware of what they have expressly permitted for that research.
- Disclosures to the individual directly about him/herself.
- Limited data set disclosures subject to a data use agreement.
- De-identified information that no longer qualifies as PHI.

When an accounting is required, it must include disclosures during the six years prior to the data subject's request, and include certain types of information depending on the size of the protocol. While HIPAA may not require it, many organizations will require that researchers maintain logs of all disclosures from research data collections as a security measure, including transfers to other individuals within the covered entity. Electronic data storage will increasingly offer this capability cheaply and automatically; older collections will require manual logging.

**Characteristics of Authorizations**

If a research activity meets none of the bypass criteria above, an authorization (consent) is required. When they are required, authorizations must be:

- In "plain language," so that individuals can understand the information contained in the form and are therefore able to make an informed decision.
- Executed in writing, and signed by the research subject (or an authorized personal representative).

Authorizations must include a specific description of the PHI to be used or disclosed, the name(s) or other identification of the individuals involved in the research, and a description of each purpose of the requested use or disclosure. HIPAA authorizations are normally required to have an explicit expiration date.
In the context of research, it is sufficient to specify an expiration "event," such as "the end of the study." A research authorization can also have no expiration date at all, as would be the case for a research database or repository or other future use, though this absence must be clearly indicated. HIPAA authorizations cannot normally be combined with other types of documents (such as a privacy notice). However, HIPAA research authorizations can be combined with any other legal permission related to the study, including an informed consent that meets Common Rule or FDA regulations, or another type of authorization.

As with any informed consent document, researchers are strongly urged to rely on standard models rather than creating their own authorization forms, lest they make a critical error in format or content. Most organizations will already have standard documents available; check with your IRB, Privacy Board, or privacy officer. If there are multiple documents that limit information use or disclosure, the most restrictive one applies. Whether in a single instrument or several, the core requirement is to provide enough information for the data subject to make an informed choice.

**Revocations of Authorizations**

Like other kinds of HIPAA authorizations, those for research may be revoked by the subject at any time, provided that the revocation is in writing. Revocation of an authorization is not valid to the extent that the covered entity has already taken actions relying on it, such as in the provision of prior treatment. Such revocations may be limited "as necessary to maintain the integrity of the research study."

Recruiting into Research

It is still permissible under HIPAA to discuss recruitment into research with patients for whom such involvement might be appropriate. This common practice is considered to fall within the definition of treatment, at least when the conversation is undertaken by one of the patient's healthcare providers.
Remember, however, that a data subject's information cannot generally be disclosed to a third party -- even another care provider -- for a research use without an authorization from the individual or an approved waiver, alteration, or exception to authorization. HHS guidance on HIPAA has affirmed that recruitment efforts can qualify as a "preparatory to research" activity that would allow a researcher to identify potential research participants, and even contact them for purposes of seeking their authorization (HHS 2004). However, such efforts must be approved, and the PHI used for this purpose cannot leave the covered entity during this activity.

"Retrospective" Research

As electronic health data collections grow in scale and scope, it is an increasingly common practice to "browse" them, looking for interesting patterns that could translate into research possibilities. Indeed, bio-repositories of tissue and data created just for this purpose are increasingly common, and the scope and scale of such repositories grow daily. (Retrospective analysis of paper charts hasn't gone away, either.) Use or disclosure of PHI for retrospective research studies may be done only with patient authorization, or with a waiver, alteration, or exception determination from an IRB or Privacy Board. It should not be difficult to meet one of the criteria for the latter for such exploratory efforts. Alternatively, the data collection itself may have been created with an explicit authorization from subjects for future research. However, remember that you generally cannot proceed on your own without some approval from an IRB, Privacy Board, or other designated governing entity.

Security Rule

Efforts to meet the privacy requirements of the Common Rule, FDA, and HIPAA regulations are only part of the researcher's task. HIPAA also has a Security Rule that complements its Privacy Rule.
The Security Rule requires that PHI collections receive appropriate information security protections for as long as they exist. If you do not know how to provide those, find a resource at your organization that does. In addition to a privacy officer, HIPAA requires designation of a security official, who should be able to help assure appropriate data protection. It is important to note that HIPAA's requirements include reporting of security breaches and data exposures. In addition to notifying affected individuals, HHS must be notified of exposures of PHI; besides potentially triggering an investigation, exposures involving more than 500 persons are posted on the HHS "Breach Portal" website for all the world to see. State laws may also include breach-reporting requirements.

Conclusion

Although the specifics are lengthy, the net administrative burden that HIPAA adds to existing Common Rule and FDA regulations is generally not a large one. Compared to protocol approval generally -- and the details of informed consent particularly -- a HIPAA authorization is relatively easy. Additionally, as noted, there are several pathways around the authorization requirement. To approve a study under the Common Rule and FDA requirements, IRBs have long been required to determine that there are adequate provisions to protect the privacy of subjects and to maintain the confidentiality of data. Where researchers are meeting those requirements, HIPAA should change very little beyond the additional "paperwork."

As noted, HIPAA applies to covered entities and their business associates, and to the PHI that originates in or by them. Research conducted by organizations that do not qualify as such, using data that does not derive from any covered entity source, is not reached by HIPAA. In such cases, the requirements of the Common Rule and FDA remain as protections for human subjects' privacy and other interests.
The issue then is not "PHI" but what the Common Rule defines as identifiable "private information."

Here are the key points:

1. HIPAA privacy protections supplement those of other federal regulations (viz., the Common Rule and FDA), state law, and certification/accreditation requirements.
2. HIPAA protects identifiable health information (PHI) originating or held in covered entities or their business associates. De-identified data is not protected, and not all identifiable health information is considered PHI either.
3. Under HIPAA, research activity using PHI generally requires authorization. However, there are several alternatives that allow bypassing the authorization requirement.
4. Minimum necessary standards, disclosure accounting requirements, and the characteristics of authorizations (when required) must be understood by researchers when HIPAA applies.
5. Privacy protection includes a commitment to data security throughout the lifecycle of your data.
6. If you are unsure about the particulars at your organization or have questions, consult with your organization's IRB, Privacy Board, or privacy official. For data security issues, consult with your organization's security official.

Acknowledgements

The author would like to thank the following individuals for their editorial and content review of this and prior versions: Jaime Arango, Evelyne Bital, Helenemarie Blake, Joey Casanova, Anita Cava, Amanda Coltes-Rojas, Ken Goodman, Karen Hansen, Margaret Rankovic, Daniel Smith, and Sally Mann.

References

Additional Resources

Lustre

Useful commands on Lustre:

- `lfs df` display filesystem configuration information
- `lfs find [directory | file name]` find a file or directory
- `lfs quota -u $LOGNAME /login/beagle` display quota

Striping

Useful commands for striping:
- `lfs setstripe` create a file or directory with a specific striping pattern
- `lfs getstripe` display file striping patterns

To find out more, use `man lfs`.

- The default stripe count is 2: each file created is split across 2 OSTs (potentially double read/write bandwidth).
- Usually good values are between one and four.
- Striping can be set at either the file or the directory level.
- You cannot change the stripe pattern of an existing file.
- You can change the stripe pattern of a directory.
- Striping must be set on a directory before files in it are created.
- New files inherit the striping of the parent directory.

**NOTE:** Striping over too many OSTs will cause unnecessary overhead and lead to a loss in performance! We do NOT recommend changing striping settings unless you absolutely know what you are doing. The striping configuration is already set to Cray recommendations for a volume of this size.

### How to move data to and from Beagle2

**Beagle2 is not HIPAA-compliant: do not put PHI (Protected Health Information) data on Beagle2!!!** Make sure that you are properly handling PHI data; the consequences of mishandling could be considerable, both for you and for the institutions you work for.

**Factors for choosing a data movement tool:**

- Make sure you have permission to move such data from its source to its target if you are not the owner or the sole owner.
- Consider carefully the structure of Beagle2's filesystem before deciding where to move your data:
  - Relatively small files (say < 1 GB) that should be considered permanent: `/home/<username>` (disk quota 10 GB).
  - Larger data to be used for calculations, but which does not need to be backed up locally: `/lustre/beagle2/` (currently there is no disk quota).

**Recommended data movement tools:**

- `scp/sftp`: quick to initiate, but slow and not scalable.
- **Globus Online**: provides high performance and is easy to use from either a command line or a web browser. Provides fault-tolerant, fire-and-forget transfers.
- For moving larger data, or when `scp` is too slow or unreliable, use **Globus Online**.

See also **Globus Tools and Grid Services**.

**Globus Online** addresses the challenges faced by researchers in moving, sharing, and archiving large volumes of data among distributed sites. With Globus Online, you hand off data movement tasks to a hosted service that manages the entire operation: monitoring performance and errors, retrying failed transfers, correcting problems automatically whenever possible, and reporting status to keep you informed so that you can focus on your research. Command-line and web-based interfaces are available. The command-line interface, which requires only ssh to be installed on the client, is the method of choice for script-based workflows. Globus Online also has a REST-style transfer API. After you register, simply use the **Beagle2 endpoint** as well as other sources or destinations. The Beagle2 endpoint's server nodes are tuned especially for WAN data movement tasks. With a growing collection of Globus Online endpoints, you will be using the highest-performing WAN-tuned systems with simplicity.

**By default, any file transfer command will be initiated on a service/login node.** The user can also bundle commands into a batch script and submit it to the scheduler. Users can also build multiple batch scripts with job dependencies to move data to the machine using a few processors, run the jobs with many processors, and then move the results off the machine. Here's an example of such a batch-script chain:

```bash
#!/bin/bash
JOB1=`qsub -l mppwidth=1 copy_input.pbs`
JOB2=`qsub -l mppwidth=128 -W depend=afterok:$JOB1 run.pbs`
JOB3=`qsub -l mppwidth=1 -W depend=afterok:$JOB2 copy_results.pbs`
```

### How to submit jobs

Projects

A valid HPC project is required to submit jobs.
To join an HPC project, visit http://www.ci.uchicago.edu/hpc/projects

- `projects`: check whether or not you are a member of a project, and see which projects you belong to (do this when you log in on Beagle2).
- `projects --available`: list the projects that are available for your use.
- `projects --set my_project_code`: set one of the projects available to you as your default project.

Basics about job submission on Beagle2

To run a batch job on Beagle2:

1. Prepare a PBS script that specifies the application you want to run and the resources it will require. Note: your application's executable line must start with one of the application launch commands (aprun for ESM jobs; ccmrun for CCM jobs).
2. Submit your PBS script using the TORQUE qsub command.
3. Monitor your job's progress using the TORQUE qstat command or the Moab showq command.

- When jobs are executed, they are allocated at least one node. Each node has 32 cores on Beagle2.
- If a user wants to run a different computation on each of the cores of a node, the Swift scripting language should be used (see the Swift web site).
- We use PBS scripts with Moab (the scheduler; see HPC Scheduling) and Torque (the resource manager; see HPC Job Management).
- A PBS script consists of PBS directives, comments, and executable statements (aprun).
- Every executable needs to be initiated by the aprun command.
- It is necessary to properly match your aprun parameters with your PBS parameters.
- qsub on Beagle2 simply reserves the node(s) for your use; the commands in your batch script still run on a login node.
- In order to actually run on the compute nodes qsub has reserved for you, you must use aprun.
- A Job_ID is assigned after the qsub command is executed. Use it to control your job!
- Batch jobs are submitted using the qsub command, e.g., `qsub myjob.pbs`, where myjob.pbs is a script that will be described below.
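The steps above can be sketched as a minimal PBS script. This is an illustrative fragment only: the job name, core count, and walltime are made-up values, and `./my_app` is a hypothetical executable, not a site requirement.

```bash
#!/bin/bash
#PBS -N myjob                 # job name (illustrative)
#PBS -l mppwidth=32           # request 32 cores (one full Beagle2 node)
#PBS -l walltime=00:30:00     # wall-clock limit; keep it modest to benefit from backfilling
#PBS -j oe                    # merge stdout and stderr into one file

cd $PBS_O_WORKDIR             # start in the directory qsub was invoked from
aprun -n 32 ./my_app          # every executable must be launched via aprun,
                              # with -n matched to the mppwidth requested above
```

You would then submit it with `qsub myjob.pbs` from the directory containing the script, and track it with `qstat` using the Job_ID that qsub prints.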
Reservations: Jobs can be sent either to the queues available on Beagle2, or users can ask for reservations: nodes specifically set aside for a task. In general, reservations are awarded when a job has specific needs that cannot be easily met with the standard queues. To request a reservation, it is necessary to send an email to beagle-support@ci.uchicago.edu

Job Submission Best Practices

- How many tasks per node? -- On Beagle2 the number of cores per node is 32. Take this into account when submitting jobs.
- What if tasks are memory intensive? -- Each compute node has 64 GB and 32 cores. If the memory requirements for your tasks are in terms of gigabytes, request far fewer than 32 tasks per node.
- How much wall-time to request? -- Try to request a relatively small walltime for your jobs. The scheduler employs a technique called backfilling that may be advantageous for jobs with shorter walltimes. If the application is a long-running one, a checkpointing mechanism could be used to submit the application in fragments.

Batch jobs

Commands for submitting and inquiring about jobs

Batch jobs are controlled by PBS (batch) scripts written by the user and submitted to a batch system that manages the compute resource and schedules the job to run based on a set of policies.

NOTE: job_id, the numerical identifier associated with a batch job, is assigned after the qsub command is executed.

- qsub: batch jobs are submitted using the qsub command, e.g., `qsub myjob.pbs`, where myjob.pbs is a script that will be described below.
- qdel job_id: delete a job. Users can only delete their own jobs.
- qhold job_id: request that the scheduler place one or more holds on a job. A job that has a hold is not eligible for execution (only for jobs the user owns).
- qrls job_id: request that the scheduler release one or more holds on a job, making it eligible for execution again (only for jobs the user owns).
- qalter new_options job_id: modify the job's attributes.
If any of the specified attributes cannot be modified for a job, none of that job's attributes will be modified.

- qmove new_queue_id job_id: move a job from one queue type to another.
- qstat: shows the jobs the resource manager, Torque, knows about (i.e., all those submitted using qsub).
- qstat -a: show all jobs in submit order.
- qstat -u username: show all jobs of a specific user in submit order.
- qstat -f job_id: receive a detailed report on the job status.
- qstat -n job_id: show what nodes a job is running on.
- qstat -q: gives the list of the queues available on Beagle2.
- showq: shows all jobs in priority order. Tells which jobs Moab, the scheduler, is considering eligible to run or is running.
- showres: shows all the reservations currently in place or that have been scheduled (e.g., maintenance reservations, training reservations and specific user reservations). See Adaptive Computing: showres for more details.
- showbf: shows what resources are available for immediate use as backfill. See Adaptive Computing: showbf for more details.
- **showstart**: displays the estimated start time of a job. It is important to realize that this prediction is not strictly deterministic, because jobs can finish earlier than forecasted. The command always assumes the job is the next to run, so it is only useful for the top job in the queue. See Adaptive Computing: showstart for more details.

**NOTE:** The behaviors of all these commands can be affected by the use of command-line arguments; see the man pages for more details, e.g., by typing `man qsub` for the `qsub` command when logged in on Beagle2. For more Moab commands and their descriptions, see the Adaptive Computing Scheduler Commands page.

**To submit a batch job:** From the directory that contains the script file, type:

```
qsub myjob.pbs
```

**NOTE:** Scripts submitted via qsub use default bash shells, so you need to make sure you load modules or set any environment variables you use in the submit script.
**PBS (batch) scripts**

A PBS job script is a text file you prepare that specifies which application to run and the resources required to run it. A detailed FAQ about PBS scripts is available from HPC Job Management, where users can learn the basics of building their scripts.

**Note:** The TORQUE directives in your PBS script must precede your executable lines (lines that begin with one of the application launch commands `aprun` for ESM jobs, `ccmrun` for CCM jobs, or `module load` commands); if directives occur on subsequent lines, they will be ignored.

More specifically to Beagle2, these are some of the directives that can be given:

- `#PBS -A my_project_code` to set the project to which this run will be charged.
- `#PBS -N job_name` to set the job name.
- `#PBS -l mppwidth=nodes*cores_per_node` is the number of processing elements (instances of an executable) requested and corresponds to the number of MPI or executable tasks. Default is one.
- `#PBS -l mppdepth=threads_per_MPI_task`. Default is one. Use for OpenMP. The number cannot be larger than the number of cores per node (32). In some situations multiple threads can be run on the same core; see Cray Doc: aprun or type `man aprun` for details. **NOTE:** If using OpenMP, it is necessary to add `export OMP_NUM_THREADS=<number_of_threads>` in the (bash) PBS script before the `aprun` line that launches the OpenMP program.
- `#PBS -l mppnppn=N` is the number of processing elements (or MPI tasks) per node. A PE is one instance of an executable propagated by the Application Level Placement Scheduler. Using a smaller `mppnppn` number results in fewer MPI tasks or executables being scheduled per node (to run multiple executables per node, scripts are necessary). That gives each core/PE more memory but leaves cores unused on the node, or allows for mixed MPI/OpenMP executables (multiple OpenMP threads on multiple cores per MPI task). **NOTE:** We recommend you not use the `mppnppn` directive in the batch script. If you want fewer than the default 32 MPI tasks per node, or use OpenMP, you should request all 32 cores on the desired number of nodes with the `mppwidth` parameter, and with `aprun -N` you will specify the number of tasks per node.
- `#PBS -l walltime=hh:mm:ss`, i.e., in hours, minutes and seconds. Be mindful that specific queues might not allow all job-time lengths.
- `#PBS -q queue_name` to submit a job to a specific queue (use `qstat -q` to find the available queues). Batch is the default queue.
- `#PBS -o job_outputfile_name` to connect a specific file to the output of the PBS script.
- `#PBS -j oe` to join output and error files.
- `#PBS -l advres=res_id`, if a user is running a job that requires a reservation. In order to send computations to run on the reservation, it is necessary to add this line to the PBS script.
- `#PBS -V` Please don't use this option! It can propagate large numbers of environment variable settings from the submitting shell into a job.

In the script, these directives can be followed by other instructions and, in the end, by the `aprun` command to run the executable. Otherwise you would be attempting to run your calculations on a login node and not on the reserved compute nodes!

For pure MPI scripts running a single program, the total number of nodes requested is the number of PEs requested divided by the number of PEs per node and rounded up. For MPI/OpenMP tasks the total number of nodes will be ceiling(mppdepth*mppwidth/32). Type `man aprun` for details.

NOTE: Since Moab assigns entire nodes to jobs, the total number of cores requested should be a multiple of 32. If it is smaller, Moab will effectively round it up to the closest multiple of 32, in the sense of locking up those resources. Type `man pbs_resources` when logged into Beagle for more information and options.
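The node-count arithmetic above can be checked with plain shell arithmetic. This sketch computes ceiling(mppdepth*mppwidth/32) and the rounded-up core count that Moab effectively locks up (the task and thread counts are illustrative):

```shell
#!/bin/sh
# Node count for an MPI/OpenMP job: ceiling(mppdepth * mppwidth / 32).
mppwidth=100    # MPI tasks (illustrative)
mppdepth=1      # threads per task (illustrative)
cores_per_node=32

pes=$((mppwidth * mppdepth))
# Integer ceiling division: (a + b - 1) / b
nodes=$(( (pes + cores_per_node - 1) / cores_per_node ))
cores=$((nodes * cores_per_node))   # what Moab effectively locks up

echo "nodes=$nodes cores=$cores"    # prints: nodes=4 cores=128
```

So a 100-task, single-threaded job occupies 4 full nodes, i.e., 128 cores are charged even though only 100 are used.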
Example of PBS script:

```bash
#!/bin/bash
#PBS -N myjob
#PBS -l walltime=10:00:00
#PBS -l mppwidth=544
## ceiling(100 (tasks) / 6 (tasks per node)) * 32 (total cores per node) = 544
#PBS -j oe
## join standard output and standard error - recommended!
. /opt/modules/default/init/bash
cd $PBS_O_WORKDIR
aprun -n 100 -N 6 ./myexecutable
```

- Job directive lines begin with #PBS. These directives tell the batch system how many nodes to reserve for your job and how long to reserve those nodes.
- $PBS_O_WORKDIR holds the path to the directory from which you submitted your job. While not required, most batch scripts have `cd $PBS_O_WORKDIR` as the first command after the directives.
- The aprun command is used to start execution of your code on Beagle2's compute nodes.
- Remember you can request up to 500 compute nodes for your batch jobs.

NOTE: All options may be specified either (1) as qsub command-line options (see below) or (2) as `#PBS` directives in the batch script. We recommend putting your directives (options) in the script instead. Then you will have a record of the directives you used, which is useful for record-keeping as well as debugging should something go wrong.

### Aprun

All codes that execute on Beagle2's compute nodes must be started with the `aprun` command. Without `aprun`, the code will run (if it runs at all) on the shared MOM node that executes your batch job commands. To run aprun, similar instructions should be used as given to the PBS script for qsub. Here are the equivalent aprun options.

<table>
<thead>
<tr> <th>aprun Option</th> <th>qsub -l Option</th> <th>Description</th> </tr>
</thead>
<tbody>
<tr> <td>-n NMPI</td> <td>-l mppwidth=nodes*cores_per_node</td> <td>Width (number of PEs). Number of MPI tasks. There are 32 cores per node on Beagle2.</td> </tr>
<tr> <td>-d mm</td> <td>-l mppdepth=threads_per_MPI_task</td> <td>Depth (the number of threads to run for each PE). Number of OpenMP threads per MPI task.
For an OpenMP job you must also set the environment variable OMP_NUM_THREADS to this same value. Make sure that this value multiplied by the value for -N does not exceed 32.</td> </tr>
<tr> <td>-N NPEs</td> <td>-l mppnppn=MPI_tasks_per_node</td> <td>Number of PEs per node. Number of MPI tasks to run on each node.</td> </tr>
<tr> <td>-B</td> <td></td> <td>Reuse the width, depth, nppn and memory specified with qsub: no need to specify aprun options -n, -d, -N, and -m; aprun will exit with an error if the user specifies these with the -B option.</td> </tr>
<tr> <td>-S</td> <td></td> <td>Specifies the number of PEs to allocate per NUMA node. You'll get better performance if you distribute your MPI tasks among the 4 NUMA nodes (each NUMA node has 8 cores). Value can be 1-8. Default is 8.</td> </tr>
</tbody>
</table>

Example of a batch script for running an MPI/OpenMP code using 8 nodes (32 tasks at 4 tasks per node; mppwidth=256 requests 8 full nodes):

```
#!/bin/bash
#PBS -l mppwidth=256
#PBS -l walltime=1:00:00
. /opt/modules/default/init/bash
cd $PBS_O_WORKDIR
export OMP_NUM_THREADS=8
aprun -n 32 -N 4 -d 8 -S 1 ./myjob
```

Memory usage

Our compute nodes have 64 GB of physical memory (2 GB per core), but not all of it is available to user programs. "System overhead" requires memory to run the node, message-passing library buffers consume memory, and so does loading the executable into memory. Thus the precise memory available to an application varies. So if you are using all 32 cores per node, you will get a bit less than 2 GB per MPI task on average.

If you see the error message "OOM killer terminated this process." in your job output, it means that your code has exhausted the memory available on the node (OOM stands for "out of memory"). One simple thing you can try when your code runs into an OOM error is to use more nodes and fewer cores per node: you can launch fewer than 32 tasks per node to increase the memory available to each MPI task. Note that your account will be charged for all 32 cores per node, regardless of how many cores you actually use.

For aprun options refer to our wiki page or the man page.
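The per-task memory guidance above can be put in numbers with shell arithmetic. These figures are upper bounds: as noted, system overhead makes the actual usable memory somewhat lower.

```shell
#!/bin/sh
# Approximate per-task memory on a 64 GB node (upper bound; system
# overhead reduces what is actually available to the application).
node_mem_gb=64
for tasks_per_node in 32 16 8 4; do
  echo "$tasks_per_node tasks/node -> $((node_mem_gb / tasks_per_node)) GB/task"
done
# e.g. 32 tasks/node -> 2 GB/task, 16 tasks/node -> 4 GB/task
```

Halving the tasks per node doubles the memory ceiling per task, at the cost of idle (but still charged) cores.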
https://wiki.uchicago.edu/display/Beagle/Getting+started%3A+performing+basic+operations+on+Beagle2
https://wiki.uchicago.edu/display/Beagle/Examples+of+PBS+scripts

For example, if you would like to run 64 MPI tasks and use only 16 cores per compute node:

```
#PBS -l mppwidth=128
aprun -n 64 -N 16 -S 4 ./a.out
```

This example uses `#PBS -l mppwidth=128` because 128 cores are required and this number must be a multiple of 32 (64 MPI tasks / 16 tasks used per compute node x 32 cores per compute node). Use the `-S 4` option to place the 16 MPI tasks per compute node on cores from all four NUMA nodes (4 tasks per NUMA node) to ensure best performance and access to all compute-node memory. We need this option because the default is for aprun to pack the NUMA nodes, meaning 16 tasks on just two NUMA nodes. -S specifies the number of PEs to allocate per NUMA node; each NUMA node has 8 cores; the value for -S can be 1-8, default 8.

If you are using OpenMP please refer to this page:
https://wiki.uchicago.edu/display/Beagle/Examples+of+PBS+scripts

For more information see the CrayDoc page http://docs.cray.com/cgi-bin/craydoc.cgi?mode=Show&q=f=man/alpsm/31/cat1/aprun.1.html or type `man aprun`.

**Running Swift on Beagle2**

Swift is now installed on Beagle2 as a module. Swift supports a many-task computing environment for Beagle2. In this model, Swift scripts and the Swift runtime are used to submit and manage large numbers of small process executions on Beagle2's massive number of cores. Swift is able to do this without overloading the Beagle2 scheduler by using a user-space scheduler called Coasters.

- The Swift web site is here.
- Swift documentation is here.
- To get started with Swift on Beagle2 follow the steps outlined here.

Additional resources:

- Workload Management and Application Placement for the Cray Linux Environment from CrayDoc
- HPC Scheduling and HPC Job Management on job management

In case you need help/support, please email beagle-support@lists.uchicago.edu.
This will create a ticket in our ticketing system so that we can best track and resolve your issues.
Computational Logic The (ISO-)Prolog Programming Language (ISO-)Prolog - A practical programming language based on the logic programming paradigm. - Main differences with “pure” logic programming: - depth-first search rule (also, left-to-right control rule), - more control of the execution flow, - many useful pre-defined predicates (some not declarative, for efficiency), - higher-order and meta-logical capabilities, ... - Advantages: - it can be compiled into fast and efficient code (including native code), - more expressive power, - industry standard (ISO-Prolog), - mature implementations with modules, graphical environments, interfaces, ... - Drawbacks of “classical” systems (and how addressed by modern systems): - Depth-first search rule is efficient but can lead to incompleteness → alternative search strategies (e.g., Ciao’s bfall, tabling, etc.). - No occur check in unification (which led to unsoundness in older systems) → support regular (i.e., infinite) trees: \( X = f(X) \) (already constraint-LP). Programming Interface (Writing and Running Programs) - **Not** specified in the ISO-Prolog language standard. - Is left to each particular system implementing the standard. - This typically includes issues such as: - User interaction (top-level, GUI, etc.). - Interpreter(s). - Compiler(s). - Debugger(s). - *(Module system.)* - Different Prolog systems offer different facilities for these purposes. - See the part on Developing Programs with a Logic Programming System for more details for the particular system used in the course (Ciao). Comparison with Imperative and Functional Languages - **Programs without search** (that do not perform “deep” backtracking): - Generally (if no disjunction etc. used) this means programs that: * Have only one clause per procedure, or * if several clauses, only one of them selected for every call to that predicate. Note that this is *dependent on call mode*, i.e., which variables are bound on a given call. 
- Because of the left-to-right rule, these programs *run in Prolog similarly to their imperative and (strict) functional counterparts*.
- Imperative/functional programs can be directly expressed as such programs.
- **Programs with search** (perform "deep" backtracking):
  - These are programs that have at least one procedure that:
    * has multiple clauses, and
    * more than one of them is selected for some calls to that procedure.
    Again, this is *dependent on call mode*.
  - These programs *perform search* (backtracking-based, or other search rules).
  - They have no *direct* counterparts in imperative or functional programming.

Comparison with Imperative and Functional Languages (Contd.)

• Conventional languages and Prolog both implement *(forward) continuations*: the place to go after a procedure call *succeeds*. I.e., in:

```prolog
p(X, Y) :- q(X, Z), r(Z, Y).
q(X, Z) :- ...
```

when the procedure call to q/2 finishes (with "success"), execution continues in p/2, just after the call to q/2, i.e., at the call to r/2 (the *forward continuation*).

• In Prolog, *when there are procedures with multiple definitions*, there is also a *backward continuation*: the place to go to if there is a *failure*. I.e., in:

```prolog
p(X, Y) :- q(X, Z), r(Z, Y).
p(X, Y) :- ...
q(X, Z) :- ...
```

if the call to q/2 succeeds, it is as above, but if it fails, execution continues at ("backtracks to") the *previous alternative*: the second clause of p/2 (the *backward continuation*).

• We say that p/2 has a **choice point**.

• Again, the debugger (see later) can be useful to observe how execution proceeds.

The ISO Standard (Overview)

- Syntax (incl.
operators) and operational semantics
- Arithmetic
- Checking basic types and state
- Structure inspection and term comparison
- Input/Output
- Pruning operators (cut)
- Meta-calls, higher-order, aggregation predicates
- Negation as failure, cut-fail
- Dynamic program modification
- Meta-interpreters
- Incomplete data structures
- Exception handling

Additionally (not in standard):

- Definite Clause Grammars (DCGs): parsing

Prolog Syntax and Terminology

• Variables and constants as before:
  ◇ Variables (start with capital or _): X, Value, A, A1, _3, _result.
  ◇ Constants (start with a small letter, or quoted in '...'): x, =, [], 'Algol-3', 'Don''t'.
  Note: in Prolog terminology constants are also referred to as "atoms."

• Numbers: 0, 999, -77, 5.23, 0.23e-5, 0.23E-5. Infinite-precision integers are supported by many current systems (e.g., Ciao).

• Strings (of "codes"): "Prolog" = [80, 114, 111, 108, 111, 103] (list of ASCII character codes).
  ◇ Note: after ?- set_prolog_flag(write_strings, on). character lists are printed as strings.

• Comments:
  ◇ Using %: rest of line is a comment.
  ◇ Using /* ... */: everything in between is a comment.

Prolog Syntax — Defining Operators

- Certain functors and predicate symbols are *predefined* as infix, prefix, or postfix *operators*, aside from the standard term notation, and *new ones can be added*.
- Very useful to make programs and data files more readable!
- Stated using *operator declarations*:

```prolog
:- op(<precedence>, <type>, <operator(s)>).
```

- `<precedence>`: an integer from 1 to 1200. General rule: the operator with the *highest* precedence number is the principal functor. E.g., since '+' has a *higher* precedence number than '/', then

  a+b/c ≡ a+(b/c) ≡ +(a,/(b,c)).

  Alternatively, we can always use parentheses:

  /(+(a,b),c) ≡ (a+b)/c

  (Note that in some other languages the ordering of precedence values is the opposite.)
- `<type>`:
  * infix: xfx (not associative), xfy (right associative), yfx (left associative).
  * prefix: fx (non-associative), fy (associative).
  * postfix: xf (non-associative), yf (associative).
- `<operator(s)>`: can be a single atom or a list of atoms.

• Examples:

<table>
<thead>
<tr> <th>Standard Notation</th> <th>Operator Notation</th> </tr>
</thead>
<tbody>
<tr> <td>+(a, /(b,c))</td> <td>a + b / c</td> </tr>
<tr> <td>is(X, mod(34,7))</td> <td>X is 34 mod 7</td> </tr>
<tr> <td>&lt;(+(3,4), 8)</td> <td>3 + 4 &lt; 8</td> </tr>
<tr> <td>=(X, f(Y))</td> <td>X = f(Y)</td> </tr>
<tr> <td>-(3)</td> <td>-3</td> </tr>
<tr> <td>spy(foo/3)</td> <td>spy foo/3</td> </tr>
<tr> <td>:-(p(X), q(Y))</td> <td>p(X) :- q(Y)</td> </tr>
<tr> <td>:-(p(X), ','(q(Y), r(Z)))</td> <td>p(X) :- q(Y), r(Z)</td> </tr>
</tbody>
</table>

• Note that, with this syntax convention, Prolog clauses are also Prolog terms!

• Parentheses must always be used for operators with priority higher than 1000 (i.e., the priority of ','):

  ... assert( (p :- q) ), ...

• Operators are local to modules (explained later).

• Typical standard operators:

```prolog
:- op( 1200, xfx, [ :-, --> ]).
:- op( 1200, fx,  [ :-, ?- ]).
:- op( 1150, fx,  [ mode, public, dynamic, multifile, block,
                    meta_predicate, parallel, sequential ]).
:- op( 1100, xfy, [ ; ]).
:- op( 1050, xfy, [ -> ]).
:- op( 1000, xfy, [ ',' ]).
:- op(  900, fy,  [ \+, spy, nospy ]).
:- op(  700, xfx, [ =, \=, is, =.., ==, \==, @<, @>, @=<, @>=,
                    =:=, =\=, <, >, =<, >= ]).
:- op(  550, xfy, [ : ]).
:- op(  500, yfx, [ +, -, /\, \/ ]).
:- op(  500, fx,  [ +, - ]).
:- op(  400, yfx, [ *, /, //, <<, >> ]).
:- op(  300, xfx, [ mod ]).
:- op(  200, xfy, [ ^ ]).
```

The Execution Mechanism of (classical) Prolog

- Always execute calls in the body of clauses left-to-right.
- When entering a procedure, if several clauses unify (a choice point), take the first unifying clause (i.e., the leftmost unexplored branch).
- On failure, backtrack to the next unexplored clause of the last choice point.

```prolog
grandparent(C,G) :- parent(C,P), parent(P,G).

parent(C,P) :- father(C,P).
parent(C,P) :- mother(C,P).

father(charles,philip).
father(ana,george).
mother(charles,ana).
```

- Check how Prolog explores this tree by running the debugger!

Built-in Arithmetic

- Practicality: interface to the underlying CPU arithmetic capabilities.
- These arithmetic operations are not as general as their logical counterparts.
- Interface: evaluator of arithmetic terms.
- The type of arithmetic terms (arithexpr/1 – in next slide):
  - a number is an arithmetic term,
  - if f is an n-ary arithmetic functor and X1, ..., Xn are arithmetic terms, then f(X1, ..., Xn) is an arithmetic term.
- Arithmetic functors: +, -, *, / (float quotient), // (integer quotient), mod, ... (see later).

Examples:

- (3*X+Y)/Z, correct if *when evaluated* X, Y and Z are arithmetic terms; otherwise it will raise an error.
- a+3*X raises an error (because a is not an arithmetic term).

Built-in Arithmetic (Contd.) – The arithexpr type

```
arithexpr := num
           | + arithexpr            | - arithexpr
           | arithexpr + arithexpr  | arithexpr * arithexpr
           | arithexpr / arithexpr  | arithexpr ** arithexpr
           | sign(arithexpr)        | float(arithexpr)
           | log(arithexpr)         | sin(arithexpr)
           | atan(arithexpr)        | [arithexpr]
           | ...
```

Other arithmetic operators that can appear in arithmetic expressions (see manuals):

• rem, mod, gcd, >>, <<, /\, \/, \, ...
• integer, truncate, floor, round, ceiling, ...

Built-in Arithmetic (Contd.)

• Built-in arithmetic predicates:
  ◇ Z is X — X (which must be an arithmetic term) is *evaluated* and the result is unified with Z.
  ◇ the usual <, >, =<, >=, =:= (arithmetic equal), =\= (arithmetic not equal), ... *Both arguments are evaluated* (as in is/2) and their results are compared.
• Examples:
  ◇ ?- X is 3+3//2.         X = 4
  ◇ ?- Z = 3//2, X is 3+Z.  X = 4

• Examples of failure and errors:
  ◇ ?- X=3, Y=4, Y<X+1.     fails (the system will backtrack).
  ◇ ?- X=3, Y=4, X is Y+1.  fails (the system will backtrack).
  ◇ ?- X=3, Y=4, X =:= Y.   fails (the system will backtrack).
  ◇ ?- Y=4, Y<a+1.          throws an error (the system will abort).
  ◇ ?- X is Z+1.            throws an error (the system will abort).
  ◇ ?- X=3, X =:= f(a).     throws an error (the system will abort).

Arithmetic Programs

- plus(X,Y,Z) :- Z is X + Y.
  - Only works in one direction (X and Y bound to arithmetic terms).
  - Meta-logical tests (see later) allow using it in both directions.
  - We have lost the recursive structure of the numbers.
  - But we have won (a lot) in performance!

- Factorial:

Using Peano arithmetic:

```prolog
factorial(0,s(0)).
factorial(s(N),F) :-
    factorial(N,F1),
    times(s(N),F1,F).
```

Using Prolog is/2:

```prolog
factorial(0,1).
factorial(N,F) :-
    N > 0,
    N1 is N-1,
    factorial(N1,F1),
    F is F1*N.
```

- Wrong goal order can raise an error (e.g., moving the last call to is/2 before the call to factorial).

Dynamic Checking of Basic Types

- Unary relations which *check* the type of a term:
  - integer(X)
  - float(X)
  - number(X)
  - atom(X) (nonvariable term of arity 0 other than a number)
  - atomic(X)
  - ...
- They behave as if defined by a (possibly infinite) table of facts (in part, see below).
- They either succeed or fail, but do not produce an error.
- Thus, they cannot be used to *generate* (e.g., if the argument is a variable, they fail instead of instantiating it to possible values).
- This behaviour is outside first order logic because it allows checking the instantiation state of a variable.

Example: implementing a better behavior for plus/3:

```prolog
plus(X,Y,Z) :- number(X), number(Y), Z is X + Y.
plus(X,Y,Z) :- number(X), number(Z), Y is Z - X.
plus(X,Y,Z) :- number(Y), number(Z), X is Z - Y.
```

Then:

```prolog
?- plus(3,Y,5).
Y = 2 ?
```

Still, it cannot be used to partition a number into two others:

```prolog
?- plus(X,Y,5).
no
```

(in fact, this should raise an error, rather than simply failing).

Structure Inspection

- functor(X, F, A):
  ◇ X is a term f(X1, ..., Xn) → F=f, A=n.
  ◇ F is the atom f and A is the integer n → X = f(X1, ..., Xn).
  ◇ Error if X, and either F or A, are variables.
  ◇ Fails if the unification fails, A is not an integer, or F is not an atom.

Examples:

- functor(t(b,a),F,A) → F=t, A=2.
- functor(Term,f,3) → Term = f(_,_,_).
- functor(Vector,v,100) → Vector = v(_,...,_).

(Note: in some systems functor arity is limited to 256 by default; there are libraries that allow unbounded arrays.)

Structure Inspection (Contd.)

- arg(N, X, Arg):
  ◇ N integer, X compound term → Arg unified with the N-th argument of X.
  ◇ Allows accessing a structure argument in constant time and in a compact way.
  ◇ Error if N is not an integer, or if X is a free variable.
  ◇ Fails if the unification fails.

Examples:

```prolog
?- _T=date(9,'February',1947), arg(3,_T,X).
X = 1947

?- _T=date(9,'February',1947), _T=date(_,_,X).
X = 1947

?- functor(Array,array,5), arg(1,Array,black), arg(5,Array,white).
Array = array(black,_,_,_,white)
```

- What does ?- arg(2,[a,b,c,d],X) return?

Example of Structure Inspection: Arrays

- Define add_arrays(A,B,C):

```prolog
add_arrays(A,B,C) :-
    functor(A,array,N),
    functor(B,array,N),
    functor(C,array,N),
    add_elements(N,A,B,C).

add_elements(0,_,_,_).
add_elements(I,A,B,C) :-
    I > 0,
    arg(I,A,AI), arg(I,B,BI), arg(I,C,CI),
    CI is AI + BI,
    I1 is I - 1,
    add_elements(I1,A,B,C).
```

- Alternative, using lists instead of structures:

```prolog
add_arrays_lists([],[],[]).
add_arrays_lists([X|Xs],[Y|Ys],[Z|Zs]) :-
    Z is X + Y,
    add_arrays_lists(Xs,Ys,Zs).
```

- In the latter case, where do we check that the three lists are of equal length?

Defining some syntactic sugar for `arg/3`:

```prolog
:- use_package(functional).
:- op(250,xfx,@).   % Define @ as an infix operator
:- fun_eval '@'/2.  % Call @ when it appears as a term (no need for ~)
@(T,N) := A :- arg(N,T,A).  % Define @ as simply calling arg/3
```

Now we can write add_elements/4 as:

```prolog
add_elements(0,_,_,_).
add_elements(I,A,B,C) :-
    I > 0,
    C@I is A@I + B@I,
    add_elements(I-1,A,B,C).
```

Example of Structure Inspection: Subterms of a Term (Version 1)

- Define subterm(Sub,Term):

```prolog
subterm(Term,Term).      % a) A term is always a subterm of itself
subterm(Sub,Term) :-     % b) The arguments are also subterms:
    functor(Term,_F,N),  %    N is the number of arguments of Term
    n_to_one(N,J),       %    J is a natural between N and 1
    arg(J,Term,Arg),     %    Arg is the J-th argument of Term
    subterm(Sub,Arg).    %    Sub are the subterms of Arg

n_to_one(N,N) :- N > 0.
n_to_one(N,X) :- N > 1, N1 is N-1, n_to_one(N1,X).
```

- Some queries:

```prolog
?- subterm( f(a) , g(b,f(a)) ).
?- subterm( f(b) , g(b,f(a)) ).
?- subterm( g(b,f(a)) , g(b,f(a)) ).
?- subterm( X , g(b,f(a)) ).
?- subterm( f(X) , g(b,f(a)) ).
?- subterm( X , g(X,f(a)) ).
?- subterm( f(X) , g(b,f(X)) ).
```

Example of Structure Inspection: Subterms of a Term (Version 2)

- Define subterm(Sub,Term):

```prolog
subterm(Term,Term).            % a) A term is always a subterm of itself
subterm(Sub,Term) :-           % b) The arguments are also subterms:
    functor(Term,_F,N),        %    N is the number of arguments of Term
    subterm_args(N,Sub,Term).  %    Iterate over the N arguments

subterm_args(N,Sub,Term) :-    % a) Check current argument
    arg(N,Term,Arg),           %    Also checks N>0 (arg/3 fails if N=<0)
    subterm(Sub,Arg).
subterm_args(N,Sub,Term) :-    % b) Check next argument
    N > 1, N1 is N-1,
    subterm_args(N1,Sub,Term).
```

Higher-Order Structure Inspection

• T =.. L (read as "univ")
  ◇ L is the decomposition of a term T into a list comprising its principal functor followed by its arguments.

```prolog
?- date(9,february,1947) =.. L.
L = [date,9,february,1947]

?- _F = '+', X =.. [_F,a,b].
X = a + b
```

  ◇ Allows implementing higher-order primitives (see later). Example (extending derivative):

```prolog
deriv(sin(X),X,cos(X)).
deriv(cos(X),X,-sin(X)).
deriv(FG_X, X, DF_G * DG_X) :-
    FG_X =.. [_, G_X],
    deriv(FG_X, G_X, DF_G),
    deriv(G_X, X, DG_X).
```

```prolog
?- deriv(sin(cos(x)),x,D).
D = cos(cos(x))* -sin(x) ?
```

  ◇ But use *only when strictly necessary*: expensive in time and memory.

Conversion Between Strings and Atoms (New Atom Creation)

- Classical primitive: name(A,S)
  - A is the atom/number whose name is the list of ASCII character codes S.

```prolog
?- name(hello,S).
S = "hello"

?- name(A,"hello").
A = hello
```

- Ambiguity when converting strings which represent numbers. Example: name('1',X), name(Y,X).
- In the ISO standard this is fixed by splitting the primitive in two:
  * $\text{atom\_codes}(\text{Atom},\text{String})$
  * $\text{number\_codes}(\text{Number},\text{String})$

Meta-Logical Predicates

- **var(X)**: succeeds iff X is a free variable.
```prolog
?- var(X), X = f(a).    % Succeeds
?- X = f(a), var(X).    % Fails
```
- **nonvar(X)**: succeeds iff X is not a free variable.
```prolog
?- X = f(Y), nonvar(X). % Succeeds
```
- **ground(X)**: succeeds iff X is fully instantiated.
```prolog
?- X = f(Y), ground(X). % Fails
```
- Outside the scope of first order logic.
- Uses:
  - control goal order,
  - restore some flexibility to programs using certain builtins.

Meta-Logical Predicates (Contd.) – choosing implementations

- Example: list length:
```
length([],0).
length([_|T],N) :- length(T,TN), N is TN+1.
```
- Choosing between two implementations based on calling mode, i.e., implementing \textit{reversibility} “by hand”:
```
length(L,N) :- var(L), integer(N), create_list(N,L).
length(L,N) :- nonvar(L), compute_length(L,N).

create_list(0,[]).
create_list(N,[_|T]) :- N > 0, NT is N-1, create_list(NT,T).

compute_length([],0).
compute_length([_|T],N) :- compute_length(T,TN), N is TN+1.
```
- Not really needed: the normal definition of length is actually reversible!
- Although, when L is a variable and N a number, running the traditional definition “backwards” is less efficient than building the list directly (as create_list/2 does).

Meta-Logical Predicates (Contd.) – choosing implementations

• **Example (Contd.):** Choosing between implementations based on calling mode. With a more efficient version of `compute_length/2` (using an “accumulating parameter” – see slides on Prolog efficiency):
```prolog
length(L,N) :- var(L), integer(N), create_list(N,L).
length(L,N) :- nonvar(L), compute_length(L,N).

create_list(0,[]).
create_list(N,[_|T]) :- N > 0, NT is N-1, create_list(NT,T).
compute_length(L,N) :- compute_length_(L,0,N).

compute_length_([],N,N).
compute_length_([_|T],A,N) :- NA is A+1, compute_length_(T,NA,N).
```

Comparing Non-ground Terms

- Many applications need comparisons between non-ground/non-numeric terms.
- Identity tests:
  - `X == Y` (identical)
  - `X \== Y` (not identical)
```
?- f(X) == f(X). % Succeeds
?- f(X) == f(Y). % Fails
```
- Term ordering:
  - `X @> Y`, `X @>= Y`, `X @< Y`, `X @=< Y` (standard order of terms)
```
?- f(a) @> f(b). % Fails
?- f(b) @> f(a). % Succeeds
?- f(X) @> f(Y). % Implementation dependent!
```

Comparing Non-ground Terms (Contd.)

- Reconsider `subterm/2` with non-ground terms
```prolog
subterm(Sub,Term) :- Sub == Term.
subterm(Sub,Term) :-
    nonvar(Term),
    functor(Term,_F,N),
    subterm(N,Sub,Term).
```
where `subterm/3` is identical to the previous definition
- Insert an item into an ordered list:
```prolog
insert([], Item, [Item]).
insert([H|T], Item, [H|T]) :- H == Item.
insert([H|T], Item, [Item, H|T]) :- H @> Item.
insert([H|T], Item, [H|NewT]) :- H @< Item, insert(T, Item, NewT).
```
- Compare with the same program with the second clause defined as
```prolog
insert([H|T], Item, [Item|T]) :- H = Item.
``` A minimal set of input-output predicates ("DEC-10 Prolog I/O"): <table> <thead> <tr> <th>Class</th> <th>Predicate</th> <th>Explanation</th> </tr> </thead> <tbody> <tr> <td><strong>I/O stream control</strong></td> <td>see(File)</td> <td>File becomes the current input stream.</td> </tr> <tr> <td></td> <td>seeing(File)</td> <td>The current input stream is File.</td> </tr> <tr> <td></td> <td>seen</td> <td>Close the current input stream.</td> </tr> <tr> <td></td> <td>tell(File)</td> <td>File becomes the current output stream.</td> </tr> <tr> <td></td> <td>telling(File)</td> <td>The current output stream is File.</td> </tr> <tr> <td></td> <td>told</td> <td>Close the current output stream.</td> </tr> <tr> <td><strong>Term I/O</strong></td> <td>write(X)</td> <td>Write the term X on the current output stream.</td> </tr> <tr> <td></td> <td>nl</td> <td>Start a new line on the current output stream.</td> </tr> <tr> <td></td> <td>read(X)</td> <td>Read a term (finished by a full stop) from the current input stream and unify it with X.</td> </tr> <tr> <td><strong>Character I/O</strong></td> <td>put_code(N)</td> <td>Write the ASCII character code N. N can be a string of length one.</td> </tr> <tr> <td></td> <td>get_code(N)</td> <td>Read the next character code and unify its ASCII code with N.</td> </tr> </tbody> </table> • Other stream-based input-output predicates: <table> <thead> <tr> <th>Class</th> <th>Predicate</th> <th>Explanation</th> </tr> </thead> <tbody> <tr> <td>I/O stream control</td> <td><strong>open</strong>(File,M,S)</td> <td>Open File with mode M and return in S the stream associated with the file. 
M may be <strong>read</strong>, <strong>write</strong> or <strong>append</strong>.</td>
</tr>
<tr>
<td></td>
<td><strong>close</strong>(S)</td>
<td>Close the stream S.</td>
</tr>
<tr>
<td>Term I/O</td>
<td><strong>write</strong>(S,X)</td>
<td>Write the term X on stream S.</td>
</tr>
<tr>
<td></td>
<td><strong>nl</strong>(S)</td>
<td>Start a new line on stream S.</td>
</tr>
<tr>
<td></td>
<td><strong>read</strong>(S,X)</td>
<td>Read a term (finished by a full stop) from the stream S and unify it with X.</td>
</tr>
<tr>
<td>Character I/O</td>
<td><strong>put_code</strong>(S,N)</td>
<td>Write the ASCII character code N on stream S.</td>
</tr>
<tr>
<td></td>
<td><strong>get_code</strong>(S,N)</td>
<td>Read from stream S the next character code and unify its ASCII code with N.</td>
</tr>
</tbody>
</table>

Example:
```prolog
write_list_to_file(L,F) :-
    telling(OldOutput),  % Grab current output stream.
    tell(F),
    write_list(L),       % Write into F.
    told,                % Close F.
    tell(OldOutput).     % Reset previous output stream.

write_list([]).
write_list([X|Xs]) :- write(X), nl, write_list(Xs).
```
- More powerful and format-based input-output predicates are available (see, e.g., `format/2` and `format/3` in the Prolog system manuals).
- All these input-output predicates are “side-effects”!

Pruning Operator: Cut

- The “cut” (`!/0`) is a predicate which, when executed, commits Prolog to all the choices made since the current call to the predicate in which the cut is executed.
- Thus, when it is executed, a cut prunes:
  - all clauses below the clause in which the executed cut appears, and
  - all alternative solutions to the goals in the clause to the left of the executed cut.
  It does not affect the search in the goals to the right of the cut.

Example:
```
s(1).    p(X,Y) :- l(X), ...       r(8).
s(2).    p(X,Y) :- r(X), !, ...    r(9).
         p(X,Y) :- m(X), ...
```
with query `?- s(A),p(B,C).` If execution reaches the cut (`!`):
- The second alternative of \(r/1\) is not considered.
- The third clause of \(p/2\) is not considered.

Pruning Operator: Cut (Contd.)
```
s(1).    p(X,Y) :- l(X), ...       r(8).
s(2).    p(X,Y) :- r(X), !, ...    r(9).
         p(X,Y) :- m(X), ...
```
(The slide shows the pruned search tree for this program; the figure is not reproduced here.)

“Types” of Cut

- **White** cuts: do not discard solutions.
```prolog
max(X,Y,X) :- X > Y, !.
max(X,Y,Y) :- X =< Y.
```
They affect neither completeness nor correctness – use them freely. (In many cases the system “introduces” them automatically.)
- **Green** cuts: discard correct solutions which are not needed.
```prolog
address(X,Add) :- home_address(X,Add), !.
address(X,Add) :- business_address(X,Add).
```
```prolog
membercheck(X,[X|_Xs]) :- !.
membercheck(X,[_Y|Xs]) :- membercheck(X,Xs).
```
They affect completeness but not correctness. Necessary in many situations (but beware!).
- **Red** cuts: discard solutions which are not correct according to the intended meaning.
  - Example:
```prolog
max(X, Y, X) :- X > Y, !.
max(X, Y, Y).
```
gives *wrong* answers, e.g., `?- max(5, 2, 2).` succeeds.
  - Example:
```prolog
days_in_year(X, 366) :- leap_year(X), !.
days_in_year(X, 365).

leap_year(X) :- number(X), 0 is X mod 4.
```
gives *wrong* answers to, e.g., `?- days_in_year(4, 365).` and `?- days_in_year(a, D).`
  Red cuts affect completeness and one can no longer rely on the strict declarative interpretation of the program for reasoning about correctness – avoid when possible.

Meta-calls and Implementing Higher Order

- The meta-call `call(X)` converts a term `X` into a goal and calls it.
- When called, `X` must be instantiated to a term, otherwise an error is reported.
- Used for meta-programming, especially interpreters and shells. Also for defining negation (as we will see) and implementing higher order.

Example:
```prolog
q(a).

p(X) :- call(X).
```
```prolog
?- p(q(Y)).
Y = a
```
Example:
```prolog
q(a,b).

apply(F,Args) :- G =.. [F|Args], call(G).
```
```prolog
?- G=q, apply(G,[Y,Z]).
Y = a
Z = b
```
In Ciao the `hiord` package allows writing `G(Y,Z)`; see also the `hiordlib` library.

Meta-calls – Aggregation Predicates

- Other meta-calls are, e.g., `findall/3`, `bagof/3`, and `setof/3`.
- `findall(Term, Goal, ListResults)`: ListResults is the list of all instances of `Term` such that Goal is satisfied
  - If there are no instances of `Term`, ListResults is `[]`
  - For termination, the number of solutions should be finite (and enumerable in finite time).

likes(bill, cider).     likes(dick, beer).      likes(tom, beer).
likes(tom, cider).      likes(harry, beer).     likes(jan, cider).

?- findall(X, likes(X,Y), S).
S = [bill, dick, tom, tom, harry, jan] ?
yes
?- findall(X, likes(X, water), S).
S = [] ?
yes

Meta-calls – Aggregation Predicates (Contd.)

- **setof(Term, Goal, ListResults):** ListResults is the ordered set (no duplicates) of all instances of Term such that Goal is satisfied
  - If there are no instances of Term the predicate fails
  - The set should be finite (and enumerable in finite time)
  - If there are un-instantiated variables in Goal which do not also appear in Term then a call to this built-in predicate may backtrack, generating alternative values for ListResults corresponding to different instantiations of the free variables of Goal
  - Variables in Goal will not be treated as free if they are explicitly bound within Goal by an existential quantifier, as in `Y^Goal` (then, they behave as in findall/3)
- **bagof/3:** same, but returns the list unsorted and with duplicates (in backtracking order).

Meta-calls – Aggregation Predicates: Examples

likes(bill, cider).     likes(dick, beer).      likes(harry, beer).
likes(jan, cider).      likes(tom, beer).       likes(tom, cider).

?- setof(X, likes(X,Y), S).
S = [dick, harry, tom],
Y = beer ? ;
S = [bill, jan, tom],
Y = cider ? ;
no
?- setof((Y,S), setof(X, likes(X,Y), S), SS).
SS = [(beer, [dick, harry, tom]), (cider, [bill, jan, tom])] ?
; no
?- setof(X, Y^(likes(X,Y)), S).
S = [bill, dick, harry, jan, tom] ? ;
no

Meta-calls – Negation as Failure

- Uses the meta-call facilities, the cut, and a system predicate \texttt{fail} that fails when executed (similar to calling \texttt{a=b}).
\begin{verbatim}
not(Goal) :- call(Goal), !, fail.
not(Goal).
\end{verbatim}
- Available as the (prefix) predicate \texttt{\+/1}:
\begin{verbatim}
\+ member(c, [a,k,l])
\end{verbatim}
- It will never instantiate variables.
- Using \texttt{\+} twice is useful to test without binding variables. E.g., \texttt{\+ \+ X = 1} checks if \texttt{X} is bound (or can be bound) to 1, without binding \texttt{X} if it is free.
- Termination of \texttt{not(Goal)} depends on termination of \texttt{Goal}. \texttt{not(Goal)} will terminate if a success node for \texttt{Goal} is found before an infinite branch.
- It is very useful but dangerous:
\begin{verbatim}
unmarried_student(X) :- not(married(X)), student(X).

student(joe).
married(john).
\end{verbatim}
\begin{verbatim}
?- unmarried_student(X). \rightarrow no
\end{verbatim}
- Works correctly for \textbf{ground goals} (programmer’s responsibility to ensure this).

Meta-calls – Negation as Failure – Ensuring Correctness

- We can check that negation is called with a ground term:
```
not(G) :- ground(G), !, \+ G.
not(G) :- write('ERROR: Non-ground goal in negation: '),
          write(G), nl, abort.
```
- Or using assertions:
```
:- pred not(G) : ground(G).

not(G) :- \+ G.
```
I.e., we declare that \texttt{G} must be ground when called (the `:` field of the assertion states the calling conditions). This will be checked, e.g., dynamically if we turn on run-time checking:
```
:- use_package([rtchecks]).
```
• Cut-fail combinations allow forcing the failure of a predicate — somehow specifying a negative answer (useful but very dangerous!).
• Example – testing groundness: fail as soon as a free variable is found.
```prolog
ground(Term) :- var(Term), !, fail.
ground(Term) :-
    nonvar(Term),
    functor(Term,_F,N),
    ground(N,Term).

ground(0,_T).
ground(N,T) :-
    N > 0,
    arg(N,T,Arg),
    ground(Arg),
    N1 is N-1,
    ground(N1,T).
```

Dynamic Program Modification (I)

- `assert/1`, `retract/1`, `abolish/1`, ...
- Very powerful: allow run-time modification of programs. Can also be used to simulate global variables.
- *Sometimes* this is very useful, but very often a mistake:
  - Code hard to read, hard to understand, hard to debug.
  - Typically, slow.
  - Program modification has to be used sparingly, carefully, locally.
- Still, assertion and retraction can be logically justified in some cases:
  - Assertion of clauses which logically follow from the program (*lemmas*).
  - Retraction of clauses which are logically redundant.
- Other typically non-harmful use: **simple** global switches.
- Behavior/requirements may differ between Prolog implementations. Typically, the predicate must be declared `:- dynamic`.

Dynamic Program Modification (II)

- Example program:
```prolog
relate_numbers(X, Y) :- assert(related(X, Y)).
unrelate_numbers(X, Y) :- retract(related(X, Y)).
```
- Example query:
```prolog
?- related(1, 2).
{EXISTENCE ERROR: ...}
?- relate_numbers(1, 2).
yes
?- related(1, 2).
yes
?- unrelate_numbers(1, 2).
yes
?- related(1, 2).
no
```
- Rules can be asserted dynamically as well.

Dynamic Program Modification (III)

- Example program:
```
fib(0, 0).
fib(1, 1).
fib(N, F) :-
    N > 1,
    N1 is N - 1,
    N2 is N1 - 1,
    fib(N1, F1),
    fib(N2, F2),
    F is F1 + F2.
```
```
:- dynamic lemma_fib/2.

lemma_fib(0, 0).
lemma_fib(1, 1).

lfib(N, F) :- lemma_fib(N, F), !.
lfib(N, F) :-
    N > 1,
    N1 is N - 1,
    N2 is N1 - 1,
    lfib(N1, F1),
    lfib(N2, F2),
    F is F1 + F2,
    assert(lemma_fib(N, F)).
```
- Compare `fib(24, N)` versus `lfib(24, N)` (adjust the number depending on CPU speed). “Those who cannot remember the past are condemned to repeat it”

Meta-Interpreters

- **`clause(<head>,<body>)`**
  - ◊ Reads a clause `head :- body` from the program.
  - ◊ For facts `body` is `true`.
- To use `clause/2` a predicate must be declared `dynamic`.
- Simple (“vanilla”) meta-interpreter:
```prolog
solve(true).
solve((A,B)) :- solve(A), solve(B).
solve(A) :- clause(A,B), solve(B).
```
Some sample queries:
```prolog
?- solve(lappend([1,2],[3,4],L)).
?- solve(lappend(X,Y,[1,2,3,4])).
```
- This code also implements backtracking! Note that `clause/2` introduces choice-points since `A` can unify with several clause heads.
- Interactions with module system: remember that clauses must be dynamic (and use the `dynamic_clauses` package).

Meta-Interpreters: extending the basic meta-interpreter

- The basic meta-interpreter code:
```prolog
solve(true).
solve((A,B)) :- solve(A), solve(B).
solve(A) :- clause(A,B), solve(B).
```
can be easily extended to do many tasks: tracing, debugging, explanations in expert systems, implementing other computation rules, ...
- E.g., an interpreter that counts the number of (forward) steps:
```prolog
csolve( true,  0).
csolve( (A,B), N) :- csolve(A,NA), csolve(B,NB), N is NA+NB.
csolve( A,     N) :- clause(A,B), csolve(B,N1), N is N1+1.

?- csolve(lappend([1,2],[3,4],L),N).
```

Incomplete Data Structures

- Example – difference lists:
  - A pseudo-type:
```prolog
dlist(X-Y) :- var(X), !, X==Y.
dlist([_|DL]-Y) :- dlist(DL-Y).
```
(Note: just for “minimal” difference lists, and not declarative because of `==/2`)
  - Allows us to *keep a pointer to the end of the list.*
  - Allows *appending in constant time:*
```prolog
append_dl(B1-E1,B2-E2,B3-E3) :- B3=B1, E3=E2, B2=E1.
```
Or, more compactly:
```prolog
append_dl(B1-E1,E1-E2,B1-E2).
```
And, actually, no call to `append_dl` is normally necessary! ... But can only be done once (see later).
- Also one can build difference (open ended) trees, dictionaries, queues, etc., by leaving variables at the ends (e.g., at the leaves for trees).

Playing with difference lists

- Create two difference lists (\(L_1\) and \(L_2\)) and append them (\(L_2=X\)) “by hand”:
```prolog
?- L1 = [1,2,3|X], L2 = [4,5|Y], L2=X.

L1 = [1,2,3,4,5|Y],
L2 = [4,5|Y],
X = [4,5|Y] ?
yes
```
\(L_1\) contains the resulting difference list \([1,2,3,4,5|Y]\).
- Given: \(\text{append\_dl}(B1-E1,E1-E2,B1-E2)\)
```prolog
?- append_dl([1,2,3|X]-X,[4,5|Y]-Y,L).

L = [1,2,3,4,5|Y]-Y,
X = [4,5|Y] ?
```
\(L\) has the resulting (appended) difference list. But note that we have modified the first list: we cannot append to it again.
```prolog
?- append_dl(L-X,[4,5|Y]-Y,[1,2,3,4,5|Z]-Z).

L = [1,2,3,4,5|Y],
X = [4,5|Y],
Z = Y ?
```

Standard qsort (using append)
```prolog
qsort([],[]).
qsort([X|L],S) :-
    partition(L,X,LS,LB),
    qsort(LS,LSS),
    qsort(LB,LBS),
    append(LSS,[X|LBS],S).

partition([],_P,[],[]).
partition([E|R],P,[E|Left1],Right) :-
    E < P,
    partition(R,P,Left1,Right).
partition([E|R],P,Left,[E|Right1]) :-
    E >= P,
    partition(R,P,Left,Right1).
```

qsort w/Difference Lists (no append!)

- First list $L$ is a normal list, the second ($SL$-$SLE$) is built as a difference list.
- Version using extra arguments and explicit unifications.
```prolog
% ?- qsort_dl([5,2,1,3,7,6], SL).

qsort_dl(L,SL) :-
    qsort_dl_(L,SL-SLE),
    SLE = [].

qsort_dl_([],SLE-SLE).
qsort_dl_([X|L],SL-SLE) :-
    partition(L,X,S,B),
    qsort_dl_(S,SS-SSE),
    qsort_dl_(B,BS-BSE),
    SSE = [X|BS],
    SL = SS,
    BSE = SLE.

% Partition is the same as before.
```

qsort w/Difference Lists (no append!)

- Version using extra arguments, in-place unifications.
```prolog
qsort_dl(L,SL) :- qsort_dl_(L,SL,[]).

qsort_dl_([],SLE,SLE).
qsort_dl_([X|L],SL,SLE) :-
    partition(L,X,S,B),
    qsort_dl_(S,SL,[X|BS]),
    qsort_dl_(B,BS,SLE).

% Partition is the same as before.
```

Parsing (using append and traditional lists)

?- myphrase([t,h,e,' ',p,l,a,n,e,' ',f,l,i,e,s]).
myphrase(X) :-
    article(A),  append(A,T1,X),
    spaces(SP),  append(SP,T2,T1),
    noun(N),     append(N,T3,T2),
    spaces(SPN), verb(V), append(SPN,V,T3).

article([a]).
article([t,h,e]).

spaces([' ']).
spaces([' '|Y]) :- spaces(Y).

noun([c,a,r]).
noun([p,l,a,n,e]).

verb([f,l,i,e,s]).
verb([d,r,i,v,e,s]).

Parsing (same, passing the rest of the list in an extra argument)

?- myphrase([t,h,e,' ',p,l,a,n,e,' ',f,l,i,e,s], []).

myphrase(X, CV) :-
    article(X, CA), spaces(CA, CS1), noun(CS1, CN),
    spaces(CN, CS2), verb(CS2, CV).

article([a|X], X).
article([t,h,e|X], X).

spaces([' '|X], X).
spaces([' '|Y], X) :- spaces(Y, X).

noun([c,a,r|X], X).
noun([p,l,a,n,e|X], X).

verb([f,l,i,e,s|X], X).
verb([d,r,i,v,e,s|X], X).

Parsing (same, using some string syntax)

?- myphrase("the plane flies",[]).

myphrase(X,CV) :-
    article(X,CA), spaces(CA,CS1), noun(CS1,CN),
    spaces(CN,CS2), verb(CS2,CV).

article("a" || X, X).
article("the" || X, X).

spaces(" " || X, X).
spaces(" " || Y, X) :- spaces(Y, X).

noun( "car" || X, X).
noun( "plane" || X, X).

verb( "flies" || X, X).
verb( "drives" || X, X).

Parsing (same, using additional syntax: DCGs)

- Add a syntactic transformation to avoid writing all the auxiliary variables. The result is called a **Definite Clause Grammar** (“DCG”).

?- myphrase("the plane flies",[]).

or, use the `phrase/2` builtin:

?- phrase(myphrase,"the plane flies").

:- use_package(dcg).

myphrase --> article, spaces, noun, spaces, verb.

article --> "a".        spaces --> " ".
article --> "the".      spaces --> " ", spaces.

noun --> "car".         verb --> "flies".
noun --> "plane".       verb --> "drives".

Other actions can be interspersed with the grammar. Raw Prolog can be called (between “{ ... }”):

?- myphrase(NChars,"the plane flies",[]).
?- phrase(myphrase(N),"the plane flies").

:- use_package(dcg).

myphrase(N) -->
    article(AC), spaces(S1), noun(NC), spaces(S2), verb(VC),
    { N is AC + S1 + NC + S2 + VC }.

article(1) --> "a".       spaces(1) --> " ".
article(3) --> "the".     spaces(N) --> " ", spaces(N1), { N is N1+1 }.

noun(3) --> "car".        verb(5) --> "flies".
noun(5) --> "plane". verb(6) --> "drives". Other issues in Prolog (see “The Art of Prolog” and Bibliography) - Repeat loops. ```prolog main(_) :- repeat, read(X), process(X). process(end). process(X) :- display(X), fail. ``` - Exception handling. - Extending the syntax beyond operators: term expansions/macros → packages. - Delay declarations/concurrency. - Operating system interface (and sockets, etc.). - Foreign language (e.g., C) interfaces. - Many other built-ins... - ... ... Some Typical Libraries in Prolog Systems - Most systems have a good set of libraries. - Worth checking before re-implementing existing functionality! - Some examples: <table> <thead> <tr> <th>Arrays</th> <th>Assoc</th> <th>Attributes</th> <th>Heaps</th> </tr> </thead> <tbody> <tr> <td>Lists</td> <td>Term Utilities</td> <td>Ordset</td> <td>Queues</td> </tr> <tr> <td>Random</td> <td>System Utilities</td> <td>Tree</td> <td>UGraphs</td> </tr> <tr> <td>WGraphs</td> <td>Sockets</td> <td>Linda/Distribution</td> <td>Persistent DB</td> </tr> <tr> <td>CLPB</td> <td>CLPQR</td> <td>CLPFD</td> <td>Objects</td> </tr> <tr> <td>GCLA</td> <td>TclTk</td> <td>Tracing</td> <td>Chars I/O</td> </tr> <tr> <td>Runtime Utilities</td> <td>Timeout</td> <td>Xrefs</td> <td>WWW</td> </tr> <tr> <td>Java Interface</td> <td>...</td> <td>...</td> <td>...</td> </tr> </tbody> </table> Some Additional Libraries and Extensions (Ciao) Other systems may offer additional extensions. Some examples from Ciao: - Other execution rules: - Breadth-first, Iterative-deepening, Random, ... - Tabling - CASP (negation with multiple models) - Andorra ("determinate-first") execution, fuzzy Prolog, ... - Interfaces to other languages and systems: - Interfaces to C, Java, JavaScript, Python, LLVM, ... - SQL database interface and persistent predicates - Web/HTML/XML/CGI programming (PiLLoW) / HTTP connectivity / JSON / compilation to JavaScript ... - Interfaces to solvers (PPL, Mathematica, MiniSAT, Z3, Yikes, ...) 
- Graphviz, daVinci interfaces
- Interfaces to Electron, wxWidgets, Tcl/Tk, VRML (ProVRML), ...
- Calling Emacs from Prolog, etc.

Some Additional Libraries and Extensions (Ciao, Contd.)

- Many syntactic and semantic extensions:
  - Functional notation
  - Higher-order
  - Terms with named arguments - records/feature terms
  - Multiple argument indexing
  - The script interpreter
  - Active modules (high-level distributed execution)
  - Concurrency/multithreading
  - Attributed variables
  - Object oriented programming
  - ...

Some Additional Libraries and Extensions (Ciao, Contd.)

- Constraint programming (CLP)
  - rationals, reals, finite domains, ...
  - CHR (constraint handling rules), GeCode, ...
- Assertions:
  - Regular types, Modes, Determinacy, etc.
  - Other properties
  - Run-time checking of assertions
  - Assertion-based unit tests and automatic test case generation
  - Compile-time property inference and assertion checking (CiaoPP).
- Additional programming support:
  - Automatic documentation (LPdoc).
  - Partial evaluation, optimization, parallelization (CiaoPP).
  - ...
Abstract This is a collection of frequently asked questions (FAQ) about the zoo package together with their answers.

Keywords: irregular time series, ordered observations, time index, daily data, weekly data, returns.

1. **I know that duplicate times are not allowed but my data has them. What do I do?**

zoo objects should not normally contain duplicate times. If you try to create such an object using `zoo` or `read.zoo` then warnings are issued, but the objects are still created. The user then has the opportunity to fix them up, typically by using `aggregate.zoo` or `duplicated`. Merging is not well defined for series with duplicate times and, rather than give an undesired or unexpected result, `merge.zoo` issues an error message if it encounters such illegal objects. Since `merge.zoo` is the workhorse behind many zoo functions, a significant portion of zoo will not accept duplicates among the times. Typically duplicates are eliminated by (1) averaging over them, (2) taking the last among each run of duplicates or (3) interpolating the duplicates and deleting ones on the end that cannot be interpolated. These three approaches are shown here using the `aggregate.zoo` function. Another way to do this is to use the `aggregate` argument of `read.zoo`, which aggregates the zoo object read in by `read.zoo` all in one step. Note that in the example code below, `identity` is the identity function (i.e. it just returns its argument).
It is an R core function. A "zoo" series with duplicated indexes:

```r
> z <- suppressWarnings(zoo(1:8, c(1, 2, 2, 2, 3, 4, 5, 5)))
> z
1 2 2 2 3 4 5 5 
1 2 3 4 5 6 7 8 
```

Fix it up by averaging duplicates:

```r
> aggregate(z, identity, mean)
  1   2   3   4   5 
1.0 3.0 5.0 6.0 7.5 
```

Or, fix it up by taking the last in each set of duplicates:

```r
> aggregate(z, identity, tail, 1)
1 2 3 4 5 
1 4 5 6 8 
```

Or, fix it up via interpolation of duplicate times:

```r
> time(z) <- na.approx(ifelse(duplicated(time(z)), NA, time(z)), na.rm = FALSE)
```

If there is a run of equal times at the end they wind up as NAs, and we cannot have NA times, so drop them:

```r
> z[!is.na(time(z))]
```

The `read.zoo` command has an `aggregate` argument that supports arbitrary summarization. For example, in the following we take the last value among any duplicate times and sum the volumes among all duplicate times. We do this by reading the data twice, once for each aggregate function. In this example, the first three columns are junk that we wish to suppress, which is why we specified `colClasses`; however, in most cases that argument would not be necessary.
```r
> Lines <- "1|BHARTIARTL|EQ|18:15:05|600|1
+ 2|BHARTIARTL|EQ|18:15:05|600|99
+ 3|GLENMARK|EQ|18:15:05|238.1|5
+ 4|HINDALCO|EQ|18:15:05|43.75|100
+ 5|BHARTIARTL|EQ|18:15:05|600|1
+ 6|BHEL|EQ|18:15:05|1100|11
+ 7|HINDALCO|EQ|18:15:06|43.2|1
+ 8|CHAMBLFERT|EQ|18:15:06|46|10
+ 9|CHAMBLFERT|EQ|18:15:06|46|90
+ 10|BAJAUTOFIN|EQ|18:15:06|80|100"
> library("zoo")
> library("chron")
> tail1 <- function(x) tail(x, 1)
> cls <- c("NULL", "NULL", "NULL", "character", "numeric", "numeric")
> nms <- c("", "", "", "time", "value", "volume")
> z <- read.zoo(text = Lines, aggregate = tail1,
+   FUN = times, sep = "|", colClasses = cls, col.names = nms)
> z2 <- read.zoo(text = Lines, aggregate = sum,
+   FUN = times, sep = "|", colClasses = cls, col.names = nms)
> z$volume <- z2$volume
> z
         value volume
18:15:05  1100    217
18:15:06    80    201
```

If the reason for the duplicate times is that the data is stored in long format then use `read.zoo` (particularly its `split` argument) to convert it to wide format. Wide format is typically a time series whereas long format is not, so wide format is the suitable one for zoo.

```r
> Lines <- "Date Stock Price
+ 2000-01-01 IBM 10
+ 2000-01-02 IBM 11
+ 2000-01-01 ORCL 12
+ 2000-01-02 ORCL 13"
> stocks <- read.zoo(text = Lines, header = TRUE, split = "Stock")
> stocks
           IBM ORCL
2000-01-01  10   12
2000-01-02  11   13
```

2. **When I try to specify a log axis to plot.zoo a warning is issued. What is wrong?**

Arguments that are part of `...` are passed to the `panel` function, and the default `panel` function, `lines`, does not accept `log`. Either ignore the warning, use `suppressWarnings` (see `?suppressWarnings`) or create your own panel function which excludes the `log`:

```r
> z <- zoo(1:100)
> plot(z, log = "y", panel = function(..., log) lines(...))
```

3. **How do I create right and left vertical axes in plot.zoo?**

The following shows an example of creating a plot containing a single panel and both left and right axes.
```r
> set.seed(1)
> z.Date <- as.Date(paste(2003, 02, c(1, 3, 7, 9, 14), sep = "-"))
> z <- zoo(cbind(left = rnorm(5), right = rnorm(5, sd = 0.2)), z.Date)
> plot(z[,1], xlab = "Time", ylab = "")
> opar <- par(usr = c(par("usr")[1:2], range(z[,2])))
> lines(z[,2], lty = 2)
> axis(side = 4)
> legend("bottomright", lty = 1:2, legend = colnames(z), bty = "n")
> par(opar)
```

4. **I have a data frame with both numeric and factor columns. How do I convert that to a "zoo" object?**

A "zoo" object may be (1) a numeric vector, (2) a numeric matrix or (3) a factor, but may not contain both a numeric vector and a factor. The underlying reason for this constraint is that "zoo" was intended to generalize R's "ts" class, which is also based on matrices, to irregularly spaced series with an arbitrary index class. The main reason to stick to matrices is that operations on matrices in R are much faster than on data frames. If you have a data frame with both numeric and factor variables that you want to convert to "zoo", you can do one of the following.

Use two "zoo" variables instead:

```r
DF <- data.frame(time = 1:4, x = 1:4, f = factor(letters[c(1, 1, 2, 2)]))
zx <- zoo(DF$x, DF$time)
zf <- zoo(DF$f, DF$time)
```

These could also be held in a "data.frame" again:

```r
DF2 <- data.frame(x = zx, f = zf)
```

Or convert the factor to numeric and create a single "zoo" series:

```r
z <- zoo(data.matrix(DF[-1]), DF$time)
```

5. **Why does lag give slightly different results on a "zoo" and a "zooreg" series which are otherwise the same?**
To be definite, let us consider the following examples, noting how both `lag` and `diff` give a different answer on the same input depending on whether its class is "zoo" or "zooreg":

```r
> z <- zoo(11:15, as.Date("2008-01-01") + c(-4, 1, 2, 3, 6))
> zr <- as.zooreg(z)
> lag(z)
2007-12-28 2008-01-02 2008-01-03 2008-01-04 
        12         13         14         15 
> lag(zr)
2007-12-27 2008-01-01 2008-01-02 2008-01-03 2008-01-06 
        11         12         13         14         15 
> diff(log(z))
2008-01-02 2008-01-03 2008-01-04 2008-01-07 
0.08701138 0.08004271 0.07410797 0.06899287 
> diff(log(zr))
2008-01-03 2008-01-04 
0.08004271 0.07410797 
```

`lag.zoo` and `lag.zooreg` work differently. For "zoo" objects the lagged version is obtained by moving values to the adjacent time point that exists in the series, but for "zooreg" objects the time is lagged by `deltat`, the time between adjacent regular times. A key implication is that "zooreg" can lag a point to a time point that did not previously exist in the series and, in particular, can lag a series outside of the original time range, whereas that is not possible for a "zoo" series. Note that `lag.zoo` has an `na.pad=` argument which in some cases may be what is being sought here. The difference between `diff.zoo` and `diff.zooreg` stems from the fact that `diff(x)` is defined in terms of `lag` like this: `x - lag(x, -1)`.

6. **How do I subtract the mean of each month from a "zoo" series?**

Suppose we have a daily series. To subtract the mean of Jan 2007 from each day in that month, subtract the mean of Feb 2007 from each day in that month, etc., try this:

```r
> set.seed(123)
> z <- zoo(rnorm(100), as.Date("2007-01-01") + seq(0, by = 10, length = 100))
> z.demean1 <- z - ave(z, as.yearmon(time(z)))
```

This first generates some artificial data and then employs `ave` to compute monthly means. To subtract the mean of all Januaries from each January, etc., try this:

```r
> z.demean2 <- z - ave(z, format(time(z), "%m"))
```

7.
**How do I create a monthly series but still keep track of the dates?**

Create an S3 subclass of "yearmon" called "yearmon2" that stores the dates as names on the time vector. It is sufficient to create an `as.yearmon2` generic together with an `as.yearmon2.Date` method as well as the inverse: `as.Date.yearmon2`.

```r
> as.yearmon2 <- function(x, ...) UseMethod("as.yearmon2")
> as.yearmon2.Date <- function(x, ...) {
+   y <- as.yearmon(with(as.POSIXlt(x, tz = "GMT"), 1900 + year + mon/12))
+   names(y) <- x
+   structure(y, class = c("yearmon2", class(y)))
+ }
> as.Date.yearmon2 <- function(x, frac = 0, ...) {
+   if (!is.null(names(x))) return(as.Date(names(x)))
+   x <- unclass(x)
+   year <- floor(x + .001)
+   month <- floor(12 * (x - year) + 1 + .5 + .001)
+   dd.start <- as.Date(paste(year, month, 1, sep = "-"))
+   dd.end <- dd.start + 32 - as.numeric(format(dd.start + 32, "%d"))
+   as.Date((1-frac) * as.numeric(dd.start) + frac * as.numeric(dd.end),
+     origin = "1970-01-01")
+ }
```

This new class acts the same as "yearmon" but stores the dates, allowing their recovery using `as.Date` and `aggregate.zoo`.

```r
> dd <- seq(as.Date("2000-01-01"), length = 5, by = 32)
> z <- zoo(1:5, as.yearmon2(dd))
> z
Jan 2000 Feb 2000 Mar 2000 Apr 2000 May 2000 
       1        2        3        4        5 
> aggregate(z, as.Date, identity)
2000-01-01 2000-02-02 2000-03-05 2000-04-06 2000-05-08 
         1          2          3          4          5 
```

8. **How are axes added to a plot created using plot.zoo?**

On single panel plots `axis` or `Axis` can be used just as with any classic graphics plot in R. The following example adds a custom axis for a single panel plot. It labels months but uses the larger year for January. Months, quarters and years should have successively larger ticks.
```r
> z <- zoo(0:500, as.Date(0:500))
> plot(z, xaxt = "n")
> tt <- time(z)
> m <- unique(as.Date(as.yearmon(tt)))
> jan <- format(m, "%m") == "01"
> mlab <- substr(months(m[!jan]), 1, 1)
> axis(side = 1, at = m[!jan], labels = mlab, tcl = -0.3, cex.axis = 0.7)
> axis(side = 1, at = m[jan], labels = format(m[jan], "%y"), tcl = -0.7)
> axis(side = 1, at = unique(as.Date(as.yearqtr(tt))), labels = FALSE)
> abline(v = m, col = grey(0.8), lty = 2)
```

A multivariate series can be plotted either as (1) multiple single panel plots:

```r
> z3 <- cbind(z1 = z, z2 = 2*z, z3 = 3*z)
> opar <- par(mfrow = c(2, 2))
> tt <- time(z)
> m <- unique(as.Date(as.yearmon(tt)))
> jan <- format(m, "%m") == "01"
> mlab <- substr(months(m[!jan]), 1, 1)
> for(i in 1:ncol(z3)) {
+   plot(z3[,i], xaxt = "n", ylab = colnames(z3)[i], ylim = range(z3))
+   axis(side = 1, at = m[!jan], labels = mlab, tcl = -0.3, cex.axis = 0.7)
+   axis(side = 1, at = m[jan], labels = format(m[jan], "%y"), tcl = -0.7)
+   axis(side = 1, at = unique(as.Date(as.yearqtr(tt))), labels = FALSE)
+ }
> par(opar)
```

or (2) as a multipanel plot. In this case any custom axis must be placed in a panel function.

```r
> plot(z3, screen = 1:3, xaxt = "n", nc = 2, ylim = range(z3),
+   panel = function(...) {
+     lines(...)
+     panel.number <- parent.frame()$panel.number
+     nser <- parent.frame()$nser
+     # place axis on bottom panel of each column only
+     if (panel.number %% 2 == 0 || panel.number == nser) {
+       tt <- list(...)[[1]]
+       m <- unique(as.Date(as.yearmon(tt)))
+       jan <- format(m, "%m") == "01"
+       mlab <- substr(months(m[!jan]), 1, 1)
+       axis(side = 1, at = m[!jan], labels = mlab, tcl = -0.3, cex.axis = 0.7)
+     }
+   })
```

9. **Why is nothing plotted except axes when I plot an object with many NAs?**

Isolated points surrounded by NA values do not form lines:

```r
> z <- zoo(c(1, NA, 2, NA, 3))
> plot(z)
```

So try one of the following.

Plot points rather than lines:

```r
> plot(z, type = "p")
```

Omit NAs and plot that:
```r
> plot(na.omit(z))
```

Fill in the NAs with interpolated values:

```r
> plot(na.approx(z))
```

Plot points with lines superimposed:

```r
> plot(z, type = "p")
> lines(na.omit(z))
```

Note that this is not specific to zoo. If we plot in R without zoo we get the same behavior.

10. **Does zoo work with Rmetrics?**

Yes. "timeDate" class objects from the timeDate package can be used directly as the index of a zoo series, and `as.timeSeries.zoo` and `as.zoo.timeSeries` convert back and forth between objects of class "zoo" and class "timeSeries" from the timeSeries package.

```r
> library("timeDate")
> dts <- c("1989-09-28", "2001-01-15", "2004-08-30", "1990-02-09")
> tms <- c("23:12:55", "10:34:02", "08:30:00", "11:18:23")
> td <- timeDate(paste(dts, tms), format = "%Y-%m-%d %H:%M:%S")
> library("zoo")
> z <- zoo(1:4, td)
> zz <- merge(z, lag(z))
> plot(zz)
> library("timeSeries")
> as.timeSeries(zz)
GMT
                    z lag(z)
1989-09-28 23:12:55 1      4
1990-02-09 11:18:23 4      2
2001-01-15 10:34:02 2      3
2004-08-30 08:30:00 3     NA
> as.zoo(as.timeSeries(zz))
                    z lag(z)
1989-09-28 23:12:55 1      4
1990-02-09 11:18:23 4      2
2001-01-15 10:34:02 2      3
2004-08-30 08:30:00 3     NA
```

11. **What other packages use zoo?**

A DEIS dependency means that a package lists `zoo` in the Depends, Enhances, Imports or Suggests clause of its DESCRIPTION file. As of September 27, 2011 there are 65 packages on CRAN with direct DEIS dependencies on zoo, and 207 packages which either have a direct DEIS dependency or a DEIS dependency on a package which in turn has a DEIS dependency on zoo. This suggests that packages that have a DEIS dependency on zoo are themselves popular. If one recursively calculates DEIS dependencies to all depths then 2127 packages on CRAN have direct or indirect DEIS dependencies on zoo; that is over half of CRAN. Some packages depend on zoo only indirectly, listing such a relationship to a package which in turn has such a dependency on zoo.
There are 74 other CRAN packages that are or can be used with `zoo` (and possibly more in other repositories):

Depends <table> <thead> <tr> <th>Package</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td>AER</td> <td>Applied Econometrics with R</td> </tr> <tr> <td>BootPR</td> <td>Bootstrap Prediction Intervals and Bias-Corrected Forecasting</td> </tr> <tr> <td>DMwR</td> <td>Functions and data for 'Data Mining with R'</td> </tr> <tr> <td>FinTS</td> <td>Companion to Tsay (2005) Analysis of Financial Time Series</td> </tr> <tr> <td>MFDF</td> <td>Modeling Functional Data in Finance</td> </tr> <tr> <td>Modalclust</td> <td>Hierarchical Modal Clustering</td> </tr> <tr> <td>PerformanceAnalytics</td> <td>Econometric tools for performance and risk analysis</td> </tr> <tr> <td>RBloomberg</td> <td>R/Bloomberg</td> </tr> <tr> <td>RghcnV3</td> <td>Global Historical Climate Network Version 3</td> </tr> <tr> <td>StreamMetabolism</td> <td>Stream Metabolism: a package for calculating single station metabolism from diurnal Oxygen curves</td> </tr> <tr> <td>TSfame</td> <td>Time Series Database Interface extensions for fame</td> </tr> <tr> <td>TShistQuote</td> <td>Time Series Database Interface extensions for get.hist.quote</td> </tr> <tr> <td>TSxls</td> <td>Time Series Database Interface extension to connect to spreadsheets</td> </tr> <tr> <td>VhayuR</td> <td>Vhayu R Interface</td> </tr> <tr> <td>delftfews</td> <td>delftfews R extensions</td> </tr> <tr> <td>dyn</td> <td>Time Series Regression</td> </tr> <tr> <td>dynlm</td> <td>Dynamic Linear Regression</td> </tr> <tr> <td>fda</td> <td>Functional Data Analysis</td> </tr> <tr> <td>forecast</td> <td>Forecasting functions for time series</td> </tr> <tr> <td>fractalrock</td> <td>Generate fractal time series with non-normal returns distribution</td> </tr> <tr> <td>fxregime</td> <td>Exchange Rate Regime Analysis</td> </tr> <tr> <td>glogis</td> <td>Fitting and Testing Generalized Logistic Distributions</td> </tr> <tr> <td>hydroTSM</td> <td>Time series management, analysis and interpolation for hydrological modelling</td> </tr> <tr> <td>lmtest</td> <td>Testing Linear Regression Models</td> </tr> <tr> <td>meboot</td> <td>Maximum Entropy Bootstrap for Time Series</td> </tr> <tr> <td>mlogit</td> <td>multinomial logit model</td> </tr> <tr> <td>party</td> <td>A Laboratory for Recursive Partytioning</td> </tr> <tr> <td>quantmod</td> <td>Quantitative Financial Modelling Framework</td> </tr> <tr> <td>rdatamarket</td> <td>Data access API for DataMarket.com</td> </tr> <tr> <td>sandwich</td> <td>Robust Covariance Matrix Estimators</td> </tr> <tr> <td>sde</td> <td>Simulation and Inference for Stochastic Differential Equations</td> </tr> <tr> <td>solaR</td> <td>Solar Photovoltaic Systems</td> </tr> <tr> <td>spacetime</td> <td>classes and methods for spatio-temporal data</td> </tr> <tr> <td>strucchange</td> <td>Testing, Monitoring, and Dating Structural Changes</td> </tr> <tr> <td>tawny</td> <td>Provides various portfolio optimization strategies including random matrix theory and shrinkage estimators</td> </tr> <tr> <td>termstrc</td> <td>Zero-coupon Yield Curve Estimation</td> </tr> <tr> <td>tgram</td> <td>Functions to compute and plot tracheidograms</td> </tr> <tr> <td>tripEstimation</td> <td>Metropolis sampler and supporting functions for estimating animal movement from archival tags and satellite fixes</td> </tr> <tr> <td>tseries</td> <td>Time series analysis and computational finance</td> </tr> <tr> <td>wq</td> <td>Exploring water quality monitoring data</td> </tr> <tr> <td>xts</td> <td>eXtensible Time Series</td> </tr> </tbody>
</table> Enhances <table> <thead> <tr> <th>Package</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td>chron</td> <td>Chronological objects which can handle dates and times</td> </tr> <tr> <td>hydroTSM</td> <td>Time series management, analysis and interpolation for hydrological modelling</td> </tr> <tr> <td>lubridate</td> <td>Make dealing with dates a little easier</td> </tr> <tr> <td>tis</td> <td>Time Indexes and Time Indexed Series</td> </tr> </tbody> </table> Imports <table> <thead> <tr> <th>Package</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td>fxregime</td> <td>Exchange Rate Regime Analysis</td> </tr> <tr> <td>glogis</td> <td>Fitting and Testing Generalized Logistic Distributions</td> </tr> <tr> <td>hydroGOF</td> <td>Goodness-of-fit functions for comparison of simulated and observed hydrological time series</td> </tr> <tr> <td>openair</td> <td>Tools for the analysis of air pollution data</td> </tr> <tr> <td>rasterVis</td> <td>Visualization methods for the raster package</td> </tr> </tbody> </table> Suggests <table> <thead> <tr> <th>Package</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td>MeDiChI</td> <td>MeDiChI ChIP-chip deconvolution library</td> </tr> <tr> <td>RQuantLib</td> <td>R interface to the QuantLib library</td> </tr> <tr> <td>TSAgg</td> <td>Time series Aggregation</td> </tr> <tr> <td>TSMySQL</td> <td>Time Series Database Interface extensions for MySQL</td> </tr> <tr> <td>TSPostgreSQL</td> <td>Time Series Database Interface extensions for PostgreSQL</td> </tr> <tr> <td>TSSQLite</td> <td>Time Series Database Interface extensions for SQLite</td> </tr> <tr> <td>TSdbi</td> <td>Time Series Database Interface</td> </tr> <tr> <td>TSodbc</td> <td>Time Series Database Interface extensions for ODBC</td> </tr> <tr> <td>UsingR</td> <td>Data sets for the text 'Using R for Introductory Statistics'</td> </tr> <tr> <td>Zelig</td> <td>Everyone's Statistical Software</td> </tr> <tr> <td>gsubfn</td> <td>Utilities for strings and function 
arguments</td> </tr> <tr> <td>latticeExtra</td> <td>Extra Graphical Utilities Based on Lattice</td> </tr> <tr> <td>mondate</td> <td>Keep track of dates in terms of months</td> </tr> <tr> <td>playwith</td> <td>A GUI for interactive plots using GTK+</td> </tr> <tr> <td>pscl</td> <td>Political Science Computational Laboratory, Stanford University</td> </tr> <tr> <td>quantreg</td> <td>Quantile Regression</td> </tr> <tr> <td>tframePlus</td> <td>Time Frame coding kernel extensions</td> </tr> </tbody> </table> Uses or Used with <table> <thead> <tr> <th>Package</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td>timeDate</td> <td>Rmetrics date and time functions: timeDate usable with zoo</td> </tr> <tr> <td>grid</td> <td>Graphics infrastructure: use with xyplot.zoo</td> </tr> <tr> <td>its</td> <td>Irregular time series: as.its.zoo, as.zoo.its</td> </tr> <tr> <td>lattice</td> <td>grid-based graphics: use with xyplot.zoo</td> </tr> <tr> <td>timeSeries</td> <td>Rmetrics time series functions: as.timeSeries.zoo, as.zoo.timeSeries</td> </tr> <tr> <td>YaleToolkit</td> <td>Data exploration tools from Yale University: accepts &quot;zoo&quot; input</td> </tr> </tbody> </table>

12. **Why does ifelse not work as I expect?**

The ordinary R `ifelse` function only works with zoo objects if all three arguments are zoo objects with the same time index. **zoo** provides an `ifelse.zoo` function that should be used instead. The `.zoo` part must be written out since `ifelse` is not generic.

```r
> z <- zoo(c(1, 5, 10, 15))
> # wrong !!!
> ifelse(diff(z) > 4, -z, z)
 2  3   4 
 1 -5 -10 
> # ok
> ifelse.zoo(diff(z) > 4, -z, z)
 1  2   3   4 
NA  5 -10 -15 
> # or if we merge first we can use ordinary ifelse
> xm <- merge(z, dif = diff(z))
> with(xm, ifelse(dif > 4, -z, z))
 1  2   3   4 
NA  5 -10 -15 
> # or in this case we could also use ordinary ifelse if we
> # use fill = NA to ensure all three have same index
> ifelse(diff(z, fill = NA) > 4, -z, z)
 1  2   3   4 
NA  5 -10 -15 
```

13.
**In a series which is regular except for a few missing times, or which we wish to align to a grid, how is it filled or aligned?**

```r
> # April is missing
> zym <- zoo(1:5, as.yearmon("2000-01-01") + c(0, 1, 2, 4, 5)/12)
> g <- seq(start(zym), end(zym), by = 1/12)
> na.locf(zym, xout = g)
Jan 2000 Feb 2000 Mar 2000 Apr 2000 May 2000 Jun 2000 
       1        2        3        3        4        5 
```

A variation of this is where the grid is of a different date/time class than the original series. In that case use the `x` argument. In the example that follows the series `z` is of "Date" class whereas the grid is of "yearmon" class:

```r
> z <- zoo(1:3, as.Date(c("2000-01-15", "2000-03-3", "2000-04-29")))
> g <- seq(as.yearmon(start(z)), as.yearmon(end(z)), by = 1/12)
> na.locf(z, x = as.yearmon, xout = g)
Jan 2000 Feb 2000 Mar 2000 Apr 2000 
       1        1        2        3 
```

Here is a chron example where we wish to create a 10 minute grid:

```r
> Lines <- "Time,Value
+ 2009-10-09 5:00:00,210
+ 2009-10-09 5:05:00,207
+ 2009-10-09 5:17:00,250
+ 2009-10-09 5:30:00,193
+ 2009-10-09 5:41:00,205
+ 2009-10-09 6:00:00,185"
> library("chron")
> z <- read.zoo(text = Lines, FUN = as.chron, sep = ",", header = TRUE)
> g <- seq(start(z), end(z), by = times("00:10:00"))
> na.locf(z, xout = g)
(10/09/09 05:00:00) (10/09/09 05:10:00) (10/09/09 05:20:00) (10/09/09 05:30:00) 
                210                 207                 250                 193 
(10/09/09 05:40:00) (10/09/09 05:50:00) (10/09/09 06:00:00) 
                193                 205                 185 
```

14. **What is the difference between as.Date in zoo and as.Date in the core of R?**

zoo has extended the `origin` argument of `as.Date.numeric` so that it has a default of `origin = "1970-01-01"` (whereas in the core of R it has no default and must always be specified). Note that this is a strictly upwardly compatible extension to R, so any usage of `as.Date` in R will also work in zoo. This makes it more convenient to use `as.Date` as a function input.
For example, one can shorten this:

```r
> z <- zoo(1:2, c("2000-01-01", "2000-01-02"))
> aggregate(z, function(x) as.Date(x, origin = "1970-01-01"))
2000-01-01 2000-01-02 
         1          2 
```

to just this:

```r
> aggregate(z, as.Date)
```

As another example, one can shorten:

```r
> Lines <- "2000-01-01 12:00:00,12
+ 2000-01-02 12:00:00,13"
> read.zoo(text = Lines, sep = ",", FUN = function(x) as.Date(x, origin = "1970-01-01"))
2000-01-01 2000-01-02 
        12         13 
```

to this:

```r
> read.zoo(text = Lines, sep = ",", FUN = as.Date)
2000-01-01 2000-01-02 
        12         13 
```

Note to developers of packages that use zoo: other packages that work with zoo and define `as.Date` methods should either import `zoo` or else fully export their `as.Date` methods in their `NAMESPACE` file, e.g. `export(as.Date.X)`, so that those methods are registered with `zoo`'s `as.Date` generic and not just the `as.Date` generic in `base`.

15. **How can I speed up zoo?**

The main area where you might notice slowness is if you index zoo objects in an inner loop. In that case, extract the data and time components prior to the loop. Since most calculations in R use the whole-object approach, there are relatively few instances of this. For example, the following shows two ways of performing a rolling sum using only times nearer than 3 before the current time.
The second one eliminates the zoo indexing to get a speedup:

```r
> n <- 50
> z <- zoo(1:n, c(1:3, seq(4, by = 2, length = n-3)))
> system.time({
+   zz <- sapply(seq_along(z),
+     function(i) sum(z[time(z) <= time(z)[i] & time(z) > time(z)[i] - 3]))
+   z1 <- zoo(zz, time(z))
+ })
   user  system elapsed 
  0.009   0.000   0.009 
> system.time({
+   zc <- coredata(z)
+   tt <- time(z)
+   zr <- sapply(seq_along(zc),
+     function(i) sum(zc[tt <= tt[i] & tt > tt[i] - 3]))
+   z2 <- zoo(zr, tt)
+ })
   user  system elapsed 
  0.003   0.000   0.003 
> identical(z1, z2)
[1] TRUE
```

Affiliation:

zoo Development Team
R-Forge: http://R-Forge.R-project.org/projects/zoo/
Comprehensive R Archive Network: http://CRAN.R-project.org/package=zoo
FAST: Friends Augmented Search Techniques — System Design & Data-Management Issues

Christian von der Weth, DERI, National University of Ireland, Galway. Email: christian.vonderweth@deri.org
Anwitaman Datta, Nanyang Technological University (NTU), Singapore. Email: anwitaman@ntu.edu.sg

Abstract—Improving web search solely through algorithmic refinements has reached a plateau. The emerging generation of search techniques tries to harness the "wisdom of crowds", using inputs from users in the spirit of Web 2.0. In this paper, we introduce a framework facilitating friends augmented search techniques (FAST). To that end, we present a browser add-on as the front end for collaborative browsing and searching, supporting synchronous and asynchronous collaboration between users. We then describe the back end, a distributed key-value store for efficient information retrieval in the presence of an evolving knowledge base. The mechanisms we explore in supporting efficient query processing for FAST are applicable to many other recent Web 2.0 applications that rely on similar key-value stores. The specific collaborative search tool we present is expected to be a useful utility in its own right and to spur further research on friends augmented search techniques, while the data-management techniques we developed are of general interest and applicability.

Keywords: social search, collaboration, browser add-on, key-value store, distributed information retrieval

I. INTRODUCTION

We believe that the systematic exploitation of social networks holds the key to the next generation of search techniques, complementing the traditional algorithmic solutions that power current web search engines. Specifically, we envision friends augmented search techniques which leverage both explicitly declared social networks and implicit ones, determined based on shared interest or expertise, and which support (a)synchronous collaboration.
Several existing systems leverage information sharing and user collaboration, for instance Q&A systems (e.g., YAHOO! ANSWERS), bookmark sharing sites (e.g., DELICIOUS) and the recent integration of BING search with FACEBOOK for search result personalization. Their existence demonstrates the potential of, and the need for, moving towards more social approaches to information sharing and search. While similar in spirit, we propose mechanisms following a different paradigm, independent of service providers. Instead of expecting users to visit a specific site to look for information, we use a browser add-on to facilitate information search and sharing based on whichever sites the user visits. (This work was done while Christian von der Weth was working at the Nanyang Technological University (NTU), Singapore.)

The demand for quickly finding the right information on the web has spurred (and still spurs) the success of search engines like GOOGLE and BING. Search engines apply sophisticated algorithms to crawl the web and to analyze the content of web pages as well as the link structure of the web. Although traditional search engines are and will remain an important mechanism for finding information, algorithm-driven improvements have reached a point of relative saturation. They are also inadequate to deal with some apparent challenges: (a) While the returned results are correct in terms of matching the keywords, the content of the result pages may be wrong, outdated or of limited relevance to the user. (b) Using search engines is a non-trivial task; particularly for a novice user, identifying meaningful keywords and quickly assessing the relevance of result pages is difficult [1], [2]. (c) Even with the same keywords, a result page can have a different relevance for different users. Approaches towards personalization exploit additional information like user profiles or user behavior; [3] gives a good overview.
However, the access to such additional information is restricted to the specific search engine. (d) Search results are determined and ranked using algorithms which often do not account for correctness or credibility of the content.

EXAMPLE 1: Consider a user searching/browsing for a medical treatment for a sore throat. Already the first 20 results of popular search engines range from various homespun remedies posted in online forums, Q&A systems, personal websites or even YOUTUBE, to exact but varying medications on professional(-looking) websites, including intentional or honestly mistaken wrong advice. For users without medical background knowledge, it is difficult to assess the relevance and correctness of the advice.

In Section III we describe COBS (collaborative browsing and search), the front end to facilitate friends augmented search, along with the technical challenges in realizing it. We have demonstrated its usability and explored some of the human-computer interface issues [4]; here we focus on the technical challenges in realizing COBS and the directly associated (tag) data management issues. Like many recent Web 2.0 applications, FAST principally relies on a custom-made distributed key-value store for managing the bulk of the data. While various aspects of key-value stores have been widely investigated over the last decade, we focus on the description of novel techniques employed for efficient query processing of tagged data using a key-value store, taking into account the peculiarities of the workloads for FAST and other typical Web 2.0 applications; see Section IV. Our approach is of universal relevance, beyond FAST. In Section V we evaluate the front end COBS, as well as the back end data store, thus making a holistic evaluation of our framework for friends augmented search techniques.

II. RELATED WORK

Web 2.0 platforms. At a high level, the differentiating aspects of FAST with respect to current Web 2.0 services such as DELICIOUS, REDDIT, YAHOO!
ANSWERS, or the recent tie-up between MICROSOFT’S BING and FACEBOOK are as follows: user interactions and the provided information are determined based on where users naturally are online, instead of restricting or requiring them to visit a specific site; and the knowledge base driving FAST is open to be used and refined by one and all, and independent of service providers. Thus, FAST naturally complements (rather than competes with) such existing services.

Collaborative browsing and searching. Browsing and searching the internet is primarily a single-user task. Collaboration among users is typically based on mechanisms like e-mail or instant messengers. Several approaches that enhance browser capabilities towards collaboration exist. [5], [6] are systems for co-located collaboration, i.e. several users gathering around one computer. [7], [8] focus on the collaboration between users that already know each other, e.g. family members or co-workers. [9] supports synchronous communication only, i.e. it does not store permanent data. A recent initiative, the GOSSPLE [10] project, has similar overall aims as FAST, but the ways of achieving them are very different. GOSSPLE relies on gossip based information aggregation techniques. In FAST, the basic premise of collaboration is a shared location (i.e., users visiting the same website) on the Web, be it for presence-driven synchronous collaboration, or using meta-information about the location shared among users asynchronously.

Distributed information retrieval. Distributed inverted indices map terms, e.g. tags, to documents, e.g. web pages, containing that term in order to facilitate information retrieval. Multi-term queries – which represent the majority of user queries – are evaluated by merging the corresponding lists of documents for each query term [11], [12].
Although techniques to reduce the bandwidth consumption, like Bloom filters [11], exist, the costs for multi-term searches using single-term inverted indexes are generally very high [13]. As a result, approaches utilizing multi-term inverted indexes have been proposed, e.g. [14], [15]. Given a document with \( n \) terms, the number of possible term combinations is in \( O(2^n) \). Thus, the limitation to a meaningful subset of term combinations is a crucial part of the proposed systems. Existing approaches assume static documents, i.e. the set of terms for a document does not change. However, in Web 2.0 applications in general, and also in FAST, the set of tags of a resource changes over time. With that, not only the size of the index but also the bandwidth needed to propagate updates to the index needs to be taken into account.

III. COBS: COLLABORATIVE BROWSING AND SEARCH

FAST has two logical components, the front end (COBS), implemented as a browser add-on\(^1\), and the back end information system. Being implemented as a browser add-on, FAST is independent from features of specific websites, and also available for the "old web". The add-on comprises a toolbar and a sidebar, providing different functionalities.

Current features. In the following we outline the current features of the COBS add-on and how users may benefit in the context of our motivating Example 1:

Page rating. Users can rate each web page, uniquely identified by its URL, by means of a 5-star rating scheme. A second 5-star scale displays the average rating of a page. This allows users to quickly assess the quality of the content of a page, e.g. about the effectiveness of a home remedy.

Page tagging. Using a small text box in the toolbar, a user can add tags to a web page. In addition, each tag has a rating value, initially set to 0. Other users can then upvote (+1) or downvote (−1) a tag to increase or decrease its overall rating.
The intuition is that highly rated tags describe a web page or its content better than tags with a low rating.

Creating links. The toolbar maintains a history holding the last 20 visited pages. Users can create links between visited pages by drag&drop between the entries of the history, e.g., a link from the website of a medication to a page that addresses the side effects of the product. User-generated links can serve as input for network analysis techniques. Again, users can up- or downvote the links of others.

Discussion board. The discussion board is a forum-like commenting system where users can post comments and reply to them. Each board is identified by a URL’s domain and path. Thus, all users visiting the same website, e.g. looking for a treatment, have access to the same board. A discussion board is “in situ”, i.e., the discussion is tied to the discussed content of a page. The communication between users is primarily asynchronous: users can visit the same page regularly to access the corresponding board.

Online chat. The online chat allows users to communicate with each other in real-time. As with discussion boards, each chat is assigned to a specific domain and path of a URL, showing all visitors of the same site, and is thus tied to the discussed content. The intuition is, in analogy with meeting like-minded people at a physical space (e.g., museum, concert, etc.), that people with an apparently similar interest or expertise “meet” on the same web page.

\(^1\)Available for download at http://code.google.com/p/socialcobs/.

**Guided browsing.** The concept of guided browsing is particularly interesting for collaboration between a novice user and an expert, e.g., a user with a medical background, where the expert actively guides the novice while browsing for information. For this, a user sends a request to another user to act as guide. If the guide accepts, the URL of each page the guide visits is displayed in the follower’s browser.
By our definition, (a) users can follow only one guide, (b) a guide can have several followers, and (c) a follower can be a guide for others. The latter may result in long forwarding chains, including cycles, which the add-on detects, however.

**Web of trust.** Users can maintain a list of users they deem trustworthy collaborators, e.g., for making good contributions or for helping others. The web of trust helps users to filter additional information about a web page and serves as input for various analysis techniques, e.g., social network analysis or collaborative filtering to recommend interesting web pages or new potentially trustworthy collaborators.

**Challenges in major design decisions.** While most features of the COBS add-on are rather well-known, their integrated implementation, independent from any specific website, poses new challenges.

**Appearance and usability.** Existing discussion boards, commenting systems, etc., are designed according to the layout of the containing website. To be independent from specific sites, such an integrated layout is not possible. Although our focus is less on the UI design of the add-on, making it an accepted tool requires some consideration in this respect. In order not to clash with the layout of websites, we favor a plain and neutral design. Further, we stick to best-practice techniques like a 5-star rating scheme for web pages or the simple up/downvote scheme for contributions.

**Incentives to collaborate.** The effectiveness of COBS depends on the willingness of the users to collaborate. From an economic perspective, users are willing to contribute if their perceived benefit in doing so outweighs their perceived costs. The benefit is difficult to quantify, since it is highly subjective and includes emotional aspects. This particularly holds for indirect collaboration, where users benefit from their contributions not immediately but in the long run.
At this stage of our work, we focus on minimizing the costs, i.e., the effort to make contributions. Therefore, adding ratings, tags and links can be performed in a matter of seconds.

**Handling query strings.** Often the content of a web page is identified by a query string composed of a series of field-value pairs, e.g., the field `v` holds the identifier of the shown video on www.youtube.com. But the query string may contain fields that do not affect the content, e.g., the `fmt` field in YOUTUBE URLs specifies the quality of videos. Since both the usage and the names of fields in query strings are not standardized, it is impossible to reliably identify the relevant fields which specify the content of a web page. Thus, it is not obvious how to assign a user contribution (tag, rating, etc.) to a page or how to decide when two users have access to the same chat. Our current approach is as follows: we assign user contributions to pages identified by their complete URL, including the query string. Adding contributions can be done very quickly, compensating for the risk that, e.g., a rating depends on a non-relevant field in the query string. The discussion board and online chat, however, require a sufficient number of users, and here the risk of considering non-relevant parameters is much more pronounced. Thus, we group visitors of web pages with the same domain and path into the members of a discussion board or online chat.

**Integration with distributed back end.** The handling of URL query strings directly affects the integration of the add-on front end with the back end based on a key-value store. In FAST, the most important request is to get all related information about a page. We therefore group information into data objects according to their assignment to a web page. The first group contains all user contributions. Thus, we merge all tags, ratings, etc. about a page into one data object and use the complete URL (incl. query string) as key.
The second group contains the comments and chat messages, using the path and domain of the URL as key to store the data. However, in FAST various information needs refer to more than one specific web page, e.g. a keyword-based search to get all pages featuring a specific set of tags. To evaluate such non-URL-centric queries efficiently, we deploy the concept of a distributed inverted index. Since multi-term queries represent the majority of user queries (cf. Section V), we favor a multi-term inverted index, i.e. storing combinations of terms/tags as keys in the index. Figure 1 illustrates the approach. Given a page with \( n \) tags, the index potentially grows in \( O(2^n) \). Since in FAST the set of tags of a page may change over time, not only the size of the index but also the bandwidth needed to propagate updates to the index is an issue.

**IV. Data Management Over a Key-Value Store**

FAST relies on a custom-made distributed key-value store. To support keyword-based searches, FAST features a multi-term inverted index. Our approach to cope with the exponential growth of the index is two-fold. Firstly, our analysis of a real-world query log (see Section V-B) shows that the average number of query terms is 2.43. Thus, storing large tag sets is not meaningful, since they would very rarely be queried. We therefore limit the maximum size of tag sets, denoted by $s_{max}$. Let $t_p$ be the number of tags of a page $p$; then the number of tag combinations for $p$ is in $O(t_p^{s_{max}})$, where, in practice, the value for $s_{max}$ tends to be small. Secondly, we also evaluated the distribution of term combinations of various sizes in the query log, see Figure 2. All sizes yield power-law relationships, i.e. most term combinations are very rarely queried while only a few combinations are very frequently queried. We therefore aim for a query-driven optimization, storing only frequent term combinations (see Section IV-B).
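The size limitation above can be sketched in a few lines of Python; `candidate_keys` is our name for illustration, not part of FAST:

```python
from itertools import combinations

def candidate_keys(tags, s_max=3):
    """All tag combinations up to size s_max, as sorted tuples usable as index keys."""
    tags = sorted(set(tags))
    keys = []
    for size in range(1, s_max + 1):
        keys.extend(combinations(tags, size))
    return keys
```

For a page with 4 tags and $s_{max} = 3$ this yields $\binom{4}{1} + \binom{4}{2} + \binom{4}{3} = 14$ candidate keys instead of the $2^4 - 1 = 15$ of the unbounded case; the gap widens quickly for pages with many tags.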
We distinguish between single-term keys derived from single terms, and multi-term keys derived from a combination of terms. We assume that all single-term keys are available in the index. A more thorough and focused discussion of the distributed key-value store and our multi-term indexing techniques can be found in [16].

### A. Query Processing

The query processing algorithm exploits the current state of the inverted index. In general, if a query comprises two or more terms, there are several ways to process the query.

**Example 2**: Let $q = \{t_1, t_2, t_3, t_4\}$ be a query containing four terms. With $s_{max} = 3$ the following sets of keys can be derived from $q$; keys available in the index are marked gray:

- $|k| = 3: \{t_1t_2t_3\}, \{t_1t_2t_4\}, \{t_1t_3t_4\}, \{t_2t_3t_4\}$
- $|k| = 2: \{t_1t_2\}, \{t_1t_3\}, \{t_1t_4\}, \{t_2t_3\}, \{t_2t_4\}, \{t_3t_4\}$
- $|k| = 1: \{t_1\}, \{t_2\}, \{t_3\}, \{t_4\}$

Possible subsets of available keys to answer query $q$ are, e.g., $\{\{t_1, t_2\}, \{t_1, t_3, t_4\}\}$ or $\{\{t_1\}, \{t_2, t_3\}, \{t_4\}\}$.

**Algorithm.** Algorithm 1 shows the steps for processing a query $q$. If $q$ contains at most $s_{max}$ terms, we access the inverted index using $q$ as key (Lines 1-4). If it is not available, or the number of query terms is larger than $s_{max}$, we compute all relevant keys (derived from all possible term combinations up to size $s_{max}$) for $q$ (Line 5). Next, we retrieve for each available key the size of its inverted list (Lines 7-10). We proceed only if no partial result has size 0; otherwise we return an empty result (Lines 11-12). From the set of available keys $K_q^{avail}$ we derive the ordered list of keys that specifies the order of index accesses (Line 13; described in the next paragraph). Finally, we access the index to retrieve the actual data and to compute the intersection for the final result.
**Algorithm 1: processQuery(q)**

Input: query $q = \{t_1, t_2, ..., t_n\}$; Output: query result

1:  if $|q| \leq s_{max}$ then
2:      result ← getResultFromCache($q$)
3:      if result ≠ null then
4:          return result
5:  $K_q$ ← computeSubsetKeys($q$)
6:  $K_q^{avail}$ ← ∅
7:  foreach $k \in K_q$ do
8:      sizes[$k$] ← getResultSize($k$)
9:      if sizes[$k$] ≠ null then
10:         $K_q^{avail}$ ← $K_q^{avail} \cup \{k\}$
11: if $\exists k \in K_q^{avail}$ : sizes[$k$] = 0 then
12:     return ∅
13: $L_{access}$ ← getKeyAccessList($q$, $K_q^{avail}$); result ← getResult(first key of $L_{access}$)
14: foreach remaining $k \in L_{access}$ do
15:     result ← result ∩ getResult($k$)
16: return result

**Algorithm 2: getKeyAccessList(q, K)**

Input: query $q$, set of available keys $K$; Output: ordered list $L$ of keys

1: $K$ ← $K \setminus \{k_1 \in K \mid \exists k \in K : k_1 \subset k\}$
2: $L$ ← ∅; $L$.add($\arg\min_{k \in K}$ sizes[$k$])
3: while $\bigcup_{k \in L} k \neq q$ do
4:     $K_{maxcov}$ ← $\arg\max_{k \in K} |k \setminus \bigcup_{k' \in L} k'|$
5:     $L$.add($\arg\min_{k \in K_{maxcov}}$ sizes[$k$])
6: return $L$

The optimization goal is to minimize the size of the transferred data.
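The greedy key selection of Algorithm 2 can be sketched in Python, under the assumption (ours, for illustration) that keys are represented as frozensets of terms and `sizes` maps each available key to the length of its inverted list:

```python
def key_access_list(q, avail, sizes):
    """Greedy choice of index keys covering query q (sketch of Algorithm 2).

    q: set of query terms; avail: available keys (frozensets of terms);
    sizes: key -> length of its inverted list.
    """
    # Line 1: drop redundant keys, i.e. keys strictly contained in another available key.
    keys = {k for k in avail if not any(k < other for other in avail)}
    # Line 2: start with the key owning the shortest inverted list.
    order = [min(keys, key=lambda k: sizes[k])]
    covered = set(order[0])
    # Lines 3-5: add keys maximizing the coverage gain, ties broken by shortest list.
    # (Terminates because all single-term keys are assumed available.)
    while covered != set(q):
        best = min((k for k in keys if k - covered),
                   key=lambda k: (-len(k - covered), sizes[k]))
        order.append(best)
        covered |= best
    return order
```

For Example 2's query, with available keys $\{t_1, t_2\}$ (short list) and $\{t_1, t_3, t_4\}$, the sketch first picks the shorter key and then the one covering the remaining terms.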
To find the optimal subset of keys and their ordering would require complete knowledge, particularly about the expected size of the intersection of two or more partial results. This would require the costly maintenance of statistics over the data in the inverted index, which are typically not available in distributed systems. We therefore favor a heuristic to determine the set and order of keys to access the index; see Algorithm 2. Firstly, we remove redundant keys from $K_q^{avail}$ (Line 1); a key $k_i \in K_q^{avail}$ is redundant if there is a key $k_j \in K_q^{avail}$ with $k_i \subset k_j$; e.g., if $K_q^{avail} = \{k_1 = \{t_1, t_4\}, k_2 = \{t_1, t_3, t_4\}\}$ we can remove $k_1$, since $k_2$ already covers all terms of $k_1$. We then generate $L$ as the list of keys to access the index. We initialize $L$ with the key having the shortest inverted list (Line 2). We then iteratively add keys to $L$ that maximize $L$'s coverage of $q$ until $L$ covers $q$, i.e. all terms in $q$ are represented in at least one key in $L$ (Lines 3-5). If several keys maximize the coverage, we add the one with the shortest inverted list.

**Cost analysis.** The retrieval algorithm considers all relevant keys up to size $s_{max}$ for a given query $q$ to access the index. Thus the algorithm performs $\sum_{j=1}^{s_{max}} \binom{|q|}{j}$ accesses to the index. In practice, however, this polynomial growth has only a limited impact on the performance. Firstly, as our analysis shows, the value for $|q|$ is rather small ($\sim 2.4$ on average), and a reasonable value for $s_{max}$, 3 or 4, is also small. Secondly, the $O(|q|^{s_{max}})$ index accesses are only required to retrieve the lengths of inverted lists, and not the lists themselves. The actual size of the data transferred, e.g. in terms of required bandwidth, is thus very small.

### B. Index Maintenance

The maintenance of the inverted index comprises two major tasks: (a) suspending and resuming keys depending on their popularity, and (b) handling updates of the tag data.

**Suspending and resuming keys.** The inverted index stores only the inverted lists of popular keys, where the popularity of a key \( k \) is derived from the frequency of how often \( k \) is requested during query processing. If a key \( k \) becomes unpopular, we suspend \( k \), i.e. we delete \( k \)'s inverted list and mark \( k \) as unavailable for processing queries. If a suspended or new key \( k \) becomes popular, we resume \( k \). Resuming a key \( k \) involves retrieving its corresponding inverted list, which translates to performing a query for \( k \) (cf. Algorithm 1) and storing the result as \( k \)'s inverted list. As a last step, we mark \( k \) as available again.

To measure the popularity, we provide each key \( k \) with a bit vector \( B_k \) of length \( \ell \). Every time \( k \) is requested, we first set \( B_k := B_k \gg 1 \), i.e. we shift the bit vector of \( k \) one bit to the right, and then set \( B_k := B_k \mid 2^{\ell-1} \), where \( \mid \) denotes the bitwise inclusive OR operation. To implement the timely decay of a key \( k \)'s popularity, we periodically, after time \( \Delta_{\text{decay}} \), set \( B_k := B_k \gg 1 \). With that, the number of set bits in \( B_k \) represents the popularity of \( k \).

**Example 3:** For \( \ell = 7 \), a request for \( k \) followed by a periodic shift changes a bit vector \( B_k \) as follows:

\[ B_k = 0100011 \;\xrightarrow{\text{request}}\; B_k = 1010001 \;\xrightarrow{\text{decay}}\; B_k = 0101000 \]

While each periodic shifting decreases the number of set bits, a request for \( k \) increases the number or keeps it.
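The bit-vector bookkeeping reduces to plain integer shifts and ORs; a minimal sketch (function names are ours, not the paper's):

```python
def on_request(b: int, ell: int = 24) -> int:
    """A request for key k: shift right, then set the most significant bit."""
    return (b >> 1) | (1 << (ell - 1))

def on_decay(b: int) -> int:
    """Periodic decay after Delta_decay: shift right only."""
    return b >> 1

def popularity(b: int) -> int:
    """The number of set bits measures the key's popularity."""
    return bin(b).count("1")
```

With `ell = 7`, `on_request(0b0100011)` keeps the popularity at 3 (a bit is shifted out, the MSB is set), while a subsequent `on_decay` lowers it to 2.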
Besides the vector length \( \ell \) and the interval \( \Delta_{\text{decay}} \), further relevant parameters are (a) \( b^{\text{res}} \) as the minimum number of set bits in \( B_k \) to resume \( k \), and (b) \( b^{\text{sus}} \) as the threshold number of set bits in \( B_k \) below which \( k \) is suspended. Resuming keys adds to the workload for processing user queries. However, depending on the choice of the values for these four parameters, we expect resuming keys to be a much more infrequent event than evaluating user queries.

**Handling updates of tags.** Updating a resource (here, a web page) by adding or deleting a tag must be propagated to the inverted index. Propagating each update to all relevant keys would result in \( O(t_{\text{max}}^{s_{\text{max}}}) \) update messages. While with this approach the index is always up to date, it is not suitable for high update rates like those we observed in DELICIOUS and FLICKR. We therefore propose an update mechanism which relaxes the guarantee of the timeliness of the index but results in a significant decrease of bandwidth consumption. In a nutshell, we propagate the information about a new or deleted tag only to the corresponding single-tag key in the inverted index. We further update available multi-term keys only periodically. To do so, we propose incremental update queries, whose results only contain the relevant changes, i.e. the tags to be added or to be deleted, for a multi-term key's inverted list. In the following, we present our update mechanism in detail.

**Extensions to the inverted index.** To incrementally update multi-term keys, nodes have to distinguish between resources that have already been propagated to multi-term keys and both newly added and deleted resources. Thus, we cannot delete resources immediately from a single-term key’s inverted list, but mark them as deleted.
We assign a timestamp to each resource in the inverted list of a single-term key, indicating when it has either been added or marked as deleted. Secondly, we assign a timestamp to each multi-term key, indicating the time of its last update. Thus, for a multi-term key \( k_m \), we can identify all resources in the inverted lists of all single-term keys \( k_i, \forall i : k_i \subset k_m \), that have been added or deleted after the last update of \( k_m \). Regarding the deletion of marked resources, we define \( \Delta_{\text{update}} \) as the maximum period of time before updating a multi-term key. Thus, after a time of \( \Delta_{\text{update}} \), starting from the time a resource \( r \) has been marked as deleted, we can safely delete \( r \) from the inverted list.

**Incremental updates of keys.** The rationale is to transfer only the latest changes in the inverted lists of single-term keys to evaluate the changes required to update multi-term keys. The latest changes in an inverted list refer to the set of resources added or marked as deleted after the last update of a multi-term key. To formalize the concept of incremental update queries, let \( R_{k_i}^B \) be the set of resources in the inverted list of key \( k_i \) that are marked as deleted; \( R_{k_i}^C \) contains all resources not marked. Additionally, let \( ts(r) \) be the timestamp when a resource was added or marked as deleted in an inverted list, and \( ts(k) \) the timestamp of the last update of a key \( k \). We can then define \( R_{k_i|k_j}^{BG} = \{ r \in R_{k_i}^B \mid ts(r) > ts(k_j) \} \) as the set of resources in \( k_i \)'s inverted list marked as deleted after the last update of a key \( k_j \); analogously, \( R_{k_i|k_j}^{CG} = \{ r \in R_{k_i}^C \mid ts(r) > ts(k_j) \} \) contains the resources added after the last update of \( k_j \). With these definitions, Figure 3 shows the steps involved in an incremental update of a 2-term key. Extending the update process to keys of size \( > 2 \) is straightforward.
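Assuming a single-term inverted list is represented as a mapping from resource to `(timestamp, deleted_flag)` — our illustration of the structures behind Figure 3, not the paper's exact data layout — the incremental update of a 2-term key can be sketched as:

```python
def changes_since(entries, last_update):
    """The R^CG / R^BG sets: resources added, resp. marked deleted, after last_update."""
    added = {r for r, (ts, dead) in entries.items() if not dead and ts > last_update}
    deleted = {r for r, (ts, dead) in entries.items() if dead and ts > last_update}
    return added, deleted

def update_two_term_key(inv_list, last_update, list_i, list_j, now):
    """Incrementally refresh a 2-term key's inverted list from its single-term lists."""
    add_i, del_i = changes_since(list_i, last_update)
    add_j, del_j = changes_since(list_j, last_update)
    live_i = {r for r, (ts, dead) in list_i.items() if not dead}
    live_j = {r for r, (ts, dead) in list_j.items() if not dead}
    # A resource enters the 2-term list if it was newly added to either single-term
    # list and is now live in both; it leaves if marked deleted in either list.
    to_add = ((add_i | add_j) & live_i & live_j) - inv_list
    to_del = (del_i | del_j) & inv_list
    return (inv_list | to_add) - to_del, now
```

Only the change sets cross the network, not the full single-term lists, which is the point of the incremental scheme.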
Consider a multi-term key \( k_m \) of size \( s \) and the corresponding single-term keys \( k_1, k_2, ..., k_s \subset k_m \). The basic mechanism is that the changes of each \( k_i \)'s inverted list are successively incorporated into the intermediate results, before being sent to update \( k_m \).

V. EVALUATION

We present our experiences with the COBS browser add-on, and evaluate the performance of our multi-term inverted index as distributed back end for FAST.

A. COBS Add-On

The add-on is currently supported by a server component (a) storing all contributions (tags, links, comments) and user data (web of trust) and (b) providing the server side of the chat protocol. The chat protocol is required for the online chat and for guided browsing. Evaluating the overall usability of the add-on is more an HCI (human-computer interaction) issue, and is thus beyond the scope of this paper [4]. Here, we focus on enabling systems issues. From a systems perspective, the most interesting aspect is the concept of guided browsing. In the following we present some of our experiences; we installed the COBS add-on on 20 PCs. For guided browsing, one guide can have several followers, and a user can be a guide and a follower at the same time. Thus, given \( n \) users (\( u_1, u_2, ..., u_n \)), two extreme cases of guiding relationships can occur: (a) a star network, i.e. one guide and \( n-1 \) followers; (b) all users form a forwarding chain of length \( n \), i.e. \( u_1 \) follows \( u_2 \), \( u_2 \) follows \( u_3 \), ..., \( u_{n-1} \) follows \( u_n \). For both cases we evaluated the average time required for a forwarded URL to reach all followers. Figure 4 confirms our expectations. For forwarding chains, the time when the last follower receives the URL is directly proportional to the length of the chain. For the star network, the time also increases with the number of followers, but less pronouncedly than for a chain.
Once the guide has forwarded a URL to all followers, loading the URL in the browsers of all followers is done in parallel. We also investigated a kind of worst-case scenario in the context of guided browsing. Supporting forwarding chains may result in forwarding cycles. As a countermeasure, our add-on stops forwarding a received URL if this URL is currently loaded in the browser. However, if two users in a forwarding cycle load a new (different) page at the same time, both corresponding URLs are forwarded within the cycle, resulting in a cascade of alternating reloading of the URLs in all users’ browser windows. While in theory this may last forever, in practice one URL eventually “overtakes” the other one and our cycle detection takes effect. Our tests showed that the time for detecting such cascades is very hard to predict; over repeated experiments, we encountered times from several seconds up to some minutes until the forwarding finally came to a halt. Even the number of users in the forwarding chain is not a meaningful parameter to estimate this time a priori. Since the add-on cannot distinguish between a genuine forward of a new URL and a cycling URL, any mechanism to counter this behavior would limit the current flexibility of guided browsing. However, we deem the occurrence of such (large) cycles a very rare event, and end-users can also manually intervene.

B. Multi-Term Inverted Index

We next report the performance of our multi-term inverted index using a key-value store as distributed back end infrastructure for FAST, based on trace-driven experiments.

Used data sets and preliminary steps. We used publicly available tag data sets from DELICIOUS and FLICKR, both obtained in 2006. Table I shows their basic characteristics. Note that the number of actions per minute represents just an estimate of the lower bound, since the data does not reflect actions like the (repeated) deletion and re-insertion of tags. Acquiring query logs is challenging.
Due to privacy concerns, service providers do not make their query logs public. Synthetically created query histories are, in general, unable to reflect the frequency, popularity, etc., of queries in real-world systems over time. We therefore use the AOL query log [2] which is, to the best of our knowledge, the only real-world query log of reasonable size containing mainly English queries. The log contains ∼28.8 Mio. queries, and was collected in the period from March to May 2006.

Assumptions and evaluation method. We assume a distributed key-value store (as used by GOOGLE [17], FACEBOOK [18], AMAZON [19]) for managing the inverted index. We ignore node failures; particularly for single-term keys, we assume that they are always available. Since we do not consider locality-preserving data placement strategies, we assume the worst case, i.e. a sufficiently large number of nodes so that all keys for processing a query or for propagating an update reside on different nodes. We measure three parameters to evaluate the system performance:

Number of contacted keys (CK). Parameter CK represents all single accesses to keys in the inverted index.

Number of invoked keys (IK). As a subset of CK, IK is the number of keys whose inverted list is read while performing queries, updates or resuming keys.

Number of transferred resources (TR). The most relevant parameter in terms of total bandwidth required is TR, representing the number of resources that are actually transferred for processing queries, updates and resuming keys.

With these parameters and our assumption of a sufficiently large number of nodes, our results are independent of the actual number of nodes in the system. In other words, adding further nodes would have no impact on the results. We evaluate our approach using multi-term keys, henceforth denoted by MTK, against the naive one based solely on single-term keys (STK).
To make the results comparable, we compute the relative differences between MTK and STK, where we normalize the load for STK to 100%. Since the processing of single-term queries is identical for STK and MTK, we use only queries with more than one term throughout our experiments. We performed all experiments on the DELICIOUS and FLICKR data sets, using the correspondingly adjusted query logs. While the absolute figures may vary, the qualitative results are very similar for both data sets. Therefore, due to space constraints, we present only the results for the DELICIOUS data set.

Resuming keys. We first consider the suspending and resuming of keys. While suspending keys is bandwidth-neutral, resuming keys adds to the overall workload. In the first test we vary the minimum number of set bits in a bit vector $B_k$ specifying when to resume key $k$. We set $b^{\text{sus}} = 0$, i.e., we suspend keys when no bit is set in the corresponding bit vector. Further, we set $\ell = 24$ and $\Delta^{\text{decay}} = 1h$. Thus, each request on a key $k$ is represented as a set bit in $B_k$ for 24h. Figure 5 shows the results for $b^{\text{res}} \in \{1, 2, 4, 8, 16\}$. In this figure we differentiate between the load induced only by processing user queries and the overall load, to emphasize the additional load caused by resuming keys. Processing user queries clearly benefits from smaller values for $b^{\text{res}}$, since the number of available multi-term keys increases, see Table II. However, the frequent resuming of keys adds to the overall load. For increasing values of $b^{\text{res}}$, since fewer keys are available in the index, the ratio between the load for resuming keys and for processing queries shifts toward a higher load for processing queries, while the overall load remains roughly constant.
If $b^{\text{res}}$ becomes too large, and thus the number of available keys too small, the decreasing load for resuming keys can no longer compensate for the increasing load caused by queries, and the overall load increases. The optimal value for $b^{\text{res}}$ is application-specific, i.e., it depends on the actual tag data and query history. This suggests implementing self-tuning mechanisms that adapt $b^{\text{res}}$ according to the current load.

| $b^{\text{res}}$ | 1 | 2 | 4 | 8 | 16 |
|---|---|---|---|---|---|
| relative index size | 3.08% | 1.38% | 0.53% | 0.12% | 0.08% |

Table II: Relative index size for various values of $b^{\text{res}}$, compared to an optimal index with all relevant keys available.

In a second test we vary $\Delta^{\text{decay}}$, i.e., the time span during which a request on a key $k$ is represented by a set bit in $B_k$. Again, $\ell = 24$ and $b^{\text{res}} = 4$. Figure 6 shows the results for $\Delta^{\text{decay}} \in \{400s, 20min, 1h, 3h, 9h\}$, and Table II the resulting index sizes. Here, the load for resuming keys hardly changes for different values of $\Delta^{\text{decay}}$, since $\Delta^{\text{decay}}$ only specifies how long a key is kept in the index, not how soon it is resumed. The overall performance improves for increasing values of $\Delta^{\text{decay}}$, since more and more keys are kept in the index (see Table II). Thus, since the storage requirements for multi-term keys remain reasonably low, larger values of $\Delta^{\text{decay}}$ are beneficial. However, the more multi-term keys are available in the inverted index, the higher the expected overhead to update them.

Handling updates. As a consequence of our incremental update approach, processing queries might yield different results for STK and MTK. To quantify this, we compared the results for both approaches on the inverted index after various numbers of updates on the inverted lists of single-term keys.
We assumed an optimal index, i.e., all relevant multi-term keys are available. Regarding updates, this is the worst-case scenario, since MTK never has to invoke up-to-date single-term keys. Table III shows the results. Naturally, for an increasing number of updates, the average overlap between query results decreases. The degree of deviation that is still acceptable is a system design decision.

| changes in inverted lists of single-term keys | 0.25% | 0.5% | 1% | 2% | 4% | 8% |
|---|---|---|---|---|---|---|
| overlap | 99.1% | 98.6% | 97.6% | 95.7% | 92.3% | 86.4% |

Table III: Average overlap of query results between the naïve and the multi-term approach for various update rates.

For our subsequent experiments we make the following assumptions: users perform 150 actions per minute, which is more than twice the figure we derived from the DELICIOUS data set (cf. Table I). Further, to ensure an overlap above 99%, we tolerate only 0.25% of changes in the inverted lists of single-term keys. With that, given the $\sim$10.9 Mio. inverted list entries of single-term keys, we have to update all available multi-term keys at least every $\Delta^{\text{update}} = 3h$. We vary $\Delta^{\text{decay}}$ and keep the other parameters fixed ($\ell = 24$, $s_{\text{max}} = 3$, $b^{\text{res}} = 4$, $b^{\text{sus}} = 0$). STK only requires the propagation of updates to single-term keys; MTK additionally requires incremental updates. Figure 7 shows the result. Since the incremental update of a multi-term key contacts each corresponding single-term key, the number of contacted keys increases significantly for larger values of $\Delta^{\text{decay}}$. With updates, the number of transferred resources saved by MTK no longer benefits from having many available keys. To sum up, our results indicate a trade-off, with respect to the number of available multi-term keys in the index, between query processing performance and the load for maintaining the index in the presence of updates.
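As a back-of-the-envelope check, the stated $\Delta^{\text{update}} = 3h$ follows directly from the numbers above, assuming each action changes one inverted-list entry:

```python
# Back-of-the-envelope check of Delta_update = 3h: at 150 actions/min,
# how long until 0.25% of the ~10.9 Mio. single-term list entries changed?
entries = 10.9e6
actions_per_min = 150.0
tolerated_fraction = 0.0025          # 0.25% changes, keeping overlap > 99%

minutes = entries * tolerated_fraction / actions_per_min
hours = minutes / 60
assert 3.0 <= hours <= 3.1           # ~3h, matching Delta_update above
```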
A large index speeds up the evaluation of queries but causes high maintenance costs, and vice versa. However, the improvements due to MTK regarding the overall bandwidth consumption significantly outweigh the maintenance costs. Despite our worst-case parameter settings, MTK reduces the number of transferred resources to 50% compared to STK. In real-world systems, we expect even better results.

VI. CONCLUSION We outlined our architecture towards enriching traditional web search by friends-augmented search techniques (FAST), i.e., exploiting the expertise, interests, perceptions, and social ties of users. As front end, a browser add-on [4] allows users to collaborate with each other in various ways, yielding explicit networks based on a web of trust as well as implicit networks based on expertise or interests. For the application to scale, the knowledge base resulting from user contributions (tags, ratings, etc.) requires an efficient and scalable data management. To this end, we presented FAST's back-end infrastructure based on a distributed key-value store [16]. Our evaluation shows that this approach can cope with the peculiarities of FAST-like applications, particularly with high update rates on the knowledge base. In our ongoing work we focus on the user perspective. Relevant questions are, e.g., how users work with the add-on, which features are most popular (or which are still missing), what the expected back-end load per user is, and eventually how users benefit from applications like FAST. Answering these and related questions, and gaining deeper insights into friends-augmented search techniques, requires comprehensive empirical studies. Conducting such studies is challenging and represents a major part of our long-term efforts. ACKNOWLEDGMENT. This work has been funded partly by A*Star SERC Grant No 072 134 0055, partly by NTU AcRF Tier 1 Grant RG 29/09, and partly by Science Foundation Ireland under Grant No.
SFI/08/CE/I1380 (Lion-2). REFERENCES
A Bag of Useful Techniques for Unification-Based Finite-State Transducers

Hans-Ulrich Krieger, Witold Drożdżyński, Jakub Piskorski, Ulrich Schäfer, Feiyu Xu
German Research Center for Artificial Intelligence (DFKI)
Stuhlsatzenhausweg 3, D-66123 Saarbrücken, Germany
{krieger, witold, piskorsk, uschaefer, feiyu}@dfki.de

Abstract

We present several extensions to the shallow text processor SProUT, viz., (1) a fast imperfect unifiability test, (2) a special form of sets together with a polymorphic lazy and destructive unification operation, (3) a cheap form of negation, (4) a weak unidirectional form of coreferences, (5) optional context-free stages in the shallow cascade, (6) a compile time type check, (7) compile time transition sorting under subsumption, (8) several output merging techniques, and (9) a compaction technique for lexical resources. The extensions have been found relevant in several projects and might be of importance to other systems, even to deep processing.

1 Introduction

In the last decade, a strong tendency to deploy lightweight linguistic analysis for converting raw textual information into structured and valuable knowledge can be observed. Recent advances in the areas of information extraction, text mining, and textual question answering demonstrate the benefit of applying shallow text processing techniques. Systems employing such shallow techniques are assumed to be considerably less time-consuming and more robust than deep processing systems, while still sufficient to cover a broad range of linguistic phenomena (Hobbs et al., 1997). This paper centers on several extensions to the shallow core engine of SProUT (Shallow Processing with Unification and Typed feature structures), a platform for the development of multilingual text processing systems (Becker et al., 2002; Drożdżyński et al., 2004).
The extensions have been designed to either retain or even speed up the run time performance of SProUT, and have been found useful in several other projects which employ SProUT to perform information extraction, hyperlinking, opinion mining, and text summarization. The extensions are worth considering not only for other shallow text processors, but even for deep processing engines.

1.1 SProUT

The motivation for developing SProUT came from the need to have a system that (i) allows a flexible integration of different processing modules and (ii) finds a good trade-off between processing efficiency and expressiveness of the formalism. On the one hand, very efficient finite-state devices have been successfully employed in real-world applications. On the other hand, unification-based grammars are designed to capture fine-grained syntactic and semantic constraints, resulting in better descriptions of natural language phenomena. In contrast to finite-state devices, unification-based grammars are also assumed to be more transparent and more easily modifiable. Our idea was to take the best of these two worlds, basically having a finite-state machine that operates on typed feature structures (TFSs). Thus, transduction rules in SProUT do not rely on simple atomic symbols, but instead on TFSs, where the left-hand side (LHS) of a rule is a regular expression over TFSs representing the recognition pattern, and the right-hand side (RHS) is a TFS specifying the output structure. Consequently, equality of atomic symbols is replaced by unifiability of TFSs, and the output is constructed using TFS unification w.r.t. a type hierarchy.

1.2 Structure of Paper

The paper is structured as follows. The next section introduces XTDL, the formalism used in SProUT. Sections 3–11 then describe the extensions. Each of these sections explains the reasons for extending SProUT and estimates potential costs, or even savings, in terms of space and time, resp.
We also try to motivate why these techniques might be of interest to other systems and paradigms.

2 XTDL—The Formalism in SProUT

XTDL combines two well-known frameworks: typed feature structures and regular expressions. We assume a basic familiarity with these concepts here.

2.1 The Basis: TDL

XTDL is defined on top of TDL, a definition language for TFSs (Krieger and Schäfer, 1994) that is used as a descriptive device in several grammar systems, such as LKB (Copestake, 1999), PAGE (Uszkoreit et al., 1994), or PET (Callmeier, 2000). We use the following fragment of TDL, including coreferences:
\[
\begin{align*}
\text{type-def} & \rightarrow \text{type} \ \texttt{":="} \ \text{avm} \ \texttt{"."} \\
\text{type} & \rightarrow \text{identifier}
\end{align*}
\]
Apart from its integration into the rule definitions, we also employ this fragment in SProUT for the establishment of a type hierarchy of linguistic entities. In the example definition below, the morph type inherits from sign and introduces four morphosyntactically motivated attributes, together with their corresponding values:
\[
\text{morph} := \text{sign} \ \& \ [\text{POS atom}, \ \text{STEM atom}, \ \text{INFL infl}, \ \text{SEGMENTATION *list*}].
\]
[Figure: a fragment of the type hierarchy used in the example, rooted in *top*, with subtypes including atom, tense, sign, infl, and morph, language types such as de and en, and token types such as separator and url.]

2.2 The Regular Extension: XTDL

A rule in XTDL is straightforwardly defined as a recognition pattern on the LHS, written as a regular expression, and an output description on the RHS. A label serves as a handle to the rule. Regular expressions over feature structures describe sequential patterns in the input:
\[
\begin{align*}
\text{rule} & \rightarrow \text{rule-name} \ \texttt{":>"} \ \text{regex} \ \texttt{"->"} \ \text{avm} \ [\text{fun-op}] \ \texttt{"."} \\
\text{rule-name} & \rightarrow \text{identifier} \\
\text{regex} & \rightarrow \text{avm} \ | \ \text{regex} \ \text{regex} \ | \ \text{regex} \ \texttt{"|"} \ \text{regex} \ | \ \text{regex} \texttt{"?"} \ | \ \text{regex} \texttt{"*"} \ | \ \text{regex} \texttt{"+"} \\
\text{fun-op} & \rightarrow \texttt{"where"} \ \text{fun-app} \\
\text{fun-app} & \rightarrow \text{identifier} \ \texttt{"("} \ \text{term} \ (\texttt{","} \ \text{term})\ast \ \texttt{")"}
\end{align*}
\]
The choice of TDL as a basis for XTDL has a couple of advantages. TFSs as such provide a rich descriptive language over linguistic structures (as opposed to atomic symbols) and allow for a fine-grained inspection of input items; they represent a generalization over pure atomic symbols. Unifiability as a test criterion (viz., whether a transition is viable) can be seen as a generalization over symbol equality. Coreferences in feature structures describe structural identity. Their properties are exploited in two ways. They provide a stronger expressiveness, since they create dynamic value assignments while following the transitions in the finite-state automaton, thus exceeding the strict locality of constraints in an atomic-symbol approach. Furthermore, coreferences serve as the means for information transport into the output description on the RHS of a rule.
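Type unification against such a hierarchy amounts to a greatest-lower-bound (GLB) computation; the following is a minimal sketch over a toy, tree-shaped fragment (type names are illustrative, and the comparability shortcut is only valid without multiple inheritance):

```python
# Sketch: type unification over a small, tree-shaped hierarchy like the
# fragment above. Each type's ancestors (including itself) are derived
# from illustrative parent links; two types unify to the lower of the two
# if they are comparable, and fail otherwise.
PARENTS = {
    "*top*": [],
    "sign": ["*top*"], "atom": ["*top*"], "infl": ["*top*"],
    "morph": ["sign"],
}

def ancestors(t):
    """Reflexive-transitive closure of the parent relation."""
    out = {t}
    for p in PARENTS[t]:
        out |= ancestors(p)
    return out

def glb(t1, t2):
    """Greatest lower bound of two types, or None on failure (bottom)."""
    if t1 in ancestors(t2):
        return t2            # t2 lies at or below t1
    if t2 in ancestors(t1):
        return t1
    return None              # incomparable types: unification fails

assert glb("sign", "morph") == "morph"   # morph is a subtype of sign
assert glb("*top*", "infl") == "infl"
assert glb("morph", "atom") is None      # incompatible branches
```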
Finally, the choice of feature structures as primary citizens of the information domain makes composition of processing modules simple, since input and output are all of the same abstract data type.

2.3 Example

The XTDL grammar rule below may illustrate the concrete syntax of the formalism. It describes a sequence of morphologically analyzed tokens of type morph. The first TFS matches one or zero items (?) with part-of-speech det. Then, zero or more adj items are accepted (*). Finally, one or two noun items ({1,2}) are consumed. The use of a variable (e.g., #c) in different places establishes a coreference (i.e., a pointer) between features. This example enforces, e.g., agreement in case, number, and gender for the matched items, i.e., all adjectives must have compatible values for these features. If the recognition pattern on the LHS successfully matches the input, the description on the RHS creates a feature structure of type phrase. The category is coreferent with the category noun of the right-most token(s), and the agreement features result from the unification of the agreement features of the morph tokens.
\[
\begin{align*}
\text{np} :> \ & \text{morph} \ \& \ [\text{POS det}, \ \text{INFL} \ [\text{CASE \#c}, \ \text{NUM \#n}, \ \text{GEN \#g}]] \ ? \\
& (\text{morph} \ \& \ [\text{POS adj}, \ \text{INFL} \ [\text{CASE \#c}, \ \text{NUM \#n}, \ \text{GEN \#g}]]) \ast \\
& \text{morph} \ \& \ [\text{POS noun} \ \& \ \#\text{cat}, \ \text{INFL} \ \#\text{agr} \ \& \ [\text{CASE \#c}, \ \text{NUM \#n}, \ \text{GEN \#g}]] \ \{1,2\} \\
\rightarrow \ & \text{phrase} \ \& \ [\text{CAT \#cat}, \ \text{AGR \#agr}].
\end{align*}
\]

3 Imperfect Unifiability Test

The challenge for the SProUT interpreter is to combine regular expression matching with unification of TFSs.
Since regular operators such as Kleene star cannot be expressed as a TFS (no functional uncertainty!), the interpreter is faced with the problem of mapping a regular expression to a corresponding sequence of input tokens, so that the coreference information among the elements in a rule is preserved. The solution is to separate the matching of regular patterns using unifiability (LHS of rules) from the construction of the output structure through unification (RHS). A positive side effect is that the unifiability test filters the potential candidates for the space-consuming final unification. Subsequently, a rule TFS with an instantiated LHS pattern is constructed. The TFS representation of a rule contains the two features IN and OUT. In contrast to the IN value in the matched input TFS representation, the IN value of the rule contains coreference information. The value of OUT is the TFS definition of the RHS of the rule. Given the input TFS and the uninstantiated rule TFS, the unification of the two structures yields the final output result. As is the case for deep parsing, usually more than 90% of all LHS applications fail, and since we use unification $\land$ for testing unifiability, a lot of space and time is wasted. However, things are not as bleak as they seem, since our unifier eliminates redundant copying of TFSs through lazy incremental copying to achieve a great deal of structure sharing (see next section). Modified structures can be reset using invalidate(), which simply increments a global generation counter, such that a modification in the copy slot of a TFS is no longer considered.
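The generation-counter trick behind invalidate() can be sketched as follows (class and slot names are illustrative, not SProUT's actual data structures):

```python
# Sketch of generation-counter invalidation: instead of walking the whole
# structure to clear scratch (copy) slots after a failed unification, a
# global counter is bumped; a slot is trusted only if it was written in
# the current generation.
GENERATION = 0

class Node:
    def __init__(self):
        self.copy = None          # scratch slot used during unification
        self.copy_gen = -1        # generation the slot was written in

    def set_copy(self, value):
        self.copy, self.copy_gen = value, GENERATION

    def get_copy(self):
        # stale entries from earlier generations are simply ignored
        return self.copy if self.copy_gen == GENERATION else None

def invalidate():
    """O(1) reset of every scratch slot in the whole forest."""
    global GENERATION
    GENERATION += 1

n = Node()
n.set_copy("shared-result")
assert n.get_copy() == "shared-result"
invalidate()                      # constant time, no traversal
assert n.get_copy() is None
```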
And so unifiability testing of two conjunctive TFSs in the early SProUT reduced to ($\bot$ denotes incompatible information):
\[
\begin{align*}
& \text{unifiable}(\text{TFS } \phi, \ \text{TFS } \psi): \\
& \quad \text{if } (\phi \land \psi = \bot) \ \text{ success} := \text{false}; \\
& \quad \text{else success} := \text{true}; \\
& \quad \text{invalidate}(); \\
& \quad \text{return success};
\end{align*}
\]
Nevertheless, about 90% of the run time in the interpreter was due to the unification operation, be it used for unifiability testing or to build up RHS structures. One might now argue that unifiability testing should be as fast and cheap as checking subsumption or equivalence, but this is not the case: a correct unifiability test must record the effects of type unification, i.e., must allocate memory. The deeper reason is the use of coreferences in unification-based grammars. However, it is possible to implement an imperfect, but extremely fast unifiability test that does not require the space of standard unification. The test is imperfect in that there are very rare combinations of feature structures which are assumed to be unifiable, but which are not. Such combinations are detected later, during the construction of the RHS of a rule, when performing standard unification. The important point, however, as explained above, is that almost all unifiability tests during grammar interpretation fail, and for these negative cases the fast test delivers a correct answer. During thorough experiments with the new unifiability test, even the positive answers were always right, i.e., subsequent RHS unifications never failed.
The structure of the new algorithm is similar to the subsumption/equivalence test in SProUT, except that type subsumption/equality is substituted by type unification, which reduces to a table lookup (note that the pseudocode is slightly simplified and does not work for cyclic structures):
\[
\begin{align*}
& \text{unifiable}(\text{TFS } \phi, \ \text{TFS } \psi): \\
& \quad \text{if } (\phi = \psi) \ \text{return true}; \\
& \quad \text{if } (\text{unifyTypes}(\phi, \psi) = \bot) \ \text{return false}; \\
& \quad \text{forall } (\text{feat} \,.\, \phi') \in \phi \ \text{and} \ (\text{feat} \,.\, \psi') \in \psi: \\
& \quad \quad \text{if } (\text{not unifiable}(\phi', \psi')) \ \text{return false}; \\
& \quad \text{return true};
\end{align*}
\]
Compared to the old test, we achieved a speedup by a factor of 5.5–8, depending on the shallow grammar. It is worth noting that this imperfect test is related to another imperfect technique used in deep parsing, viz., quick-check filtering (Kiefer et al., 1999). Our method does not require offline training and additional data structures (which quick-check filtering does) and is comparable in performance when using mainly flat TFSs (which is the case for shallow grammars).

4 Polymorphic Lazy Set Unification

The original destructive lazy-copying unifier in SProUT was an optimized and corrected variant of (Emele, 1991), further extended by an efficient type unification operation, viz., a bitwise AND on bit-vector type codes, together with result caching. The average-case complexity of computing the greatest lower bound (= the result of type unification) is thus determined by a constant-time function. Compared to the implementation of the original algorithm in (Emele, 1991), our improved version yields a speedup of 2–4.5 (depending on the shallow grammar) by computing the shared and unique feature-value pairs $\text{SharedArcs}_1$, $\text{SharedArcs}_2$, $\text{UniqueArcs}_1$, and $\text{UniqueArcs}_2$ of the two input structures in parallel (plus further minor improvements).
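A minimal sketch of the imperfect unifiability test, with TFSs as (type, feature-dict) pairs and type unification as a precomputed GLB table lookup (all names and the tiny table are illustrative): nothing is allocated and no coreference bookkeeping is done, which is exactly what makes the test fast but imperfect.

```python
# Sketch of the imperfect unifiability test: type unification is a table
# lookup (GLB), shared features are checked recursively, and -- unlike
# real unification -- no memory is allocated and coreferences are ignored.
GLB = {("det", "det"): "det", ("agr", "agr"): "agr",
       ("morph", "sign"): "morph", ("sign", "morph"): "morph",
       ("morph", "morph"): "morph"}        # incompatible pairs are absent

def unify_types(t1, t2):
    return GLB.get((t1, t2))               # None encodes bottom

def unifiable(f, g):
    if f is g:                             # pointer equality shortcut
        return True
    (t1, feats1), (t2, feats2) = f, g
    if unify_types(t1, t2) is None:
        return False
    for feat, sub1 in feats1.items():      # only shared features matter
        if feat in feats2 and not unifiable(sub1, feats2[feat]):
            return False
    return True

noun = ("morph", {"POS": ("det", {})})
sign = ("sign",  {"POS": ("det", {})})
verb = ("morph", {"POS": ("agr", {})})     # illustrative type clash
assert unifiable(noun, sign)
assert not unifiable(noun, verb)           # ("det", "agr") not in GLB
```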
This new unifier, together with the imperfect unifiability test, drastically speeds up system performance, turning the original ratio of unification/interpreter time from 90/10 into 25/75. Overall, the two modifications lead to a speedup factor of 20 on average. During the development of SProUT, it turned out that the description language XTDL lacks constructs that support unordered collections of information. Clearly, FIRST/REST lists in conjunctive TFSs are the usual means to achieve this. However, by unifying two lists, we do not collect the combined information from both lists. Instead, the corresponding positions are unified, and lists of different length will never unify. Applications such as information extraction must work around this phenomenon and either apply fixed-arity named templates, implement difference lists to achieve a kind of set union, apply recursive type constraints, implement procedural attachment, or employ disjunctions as a way to express collective information. Explicit disjunctions in the TFS description language, however, have been consciously excluded, since they render almost-linear (conjunctive) unification exponential. In order to account for unordered collections, we decided to implement a special kind of non-standard sets, viz., multisets (or bags), which might contain equivalent, even equal objects. Elements of a set are either TFSs or again sets, even the empty set. Unifying two sets $S_1, S_2$ means taking the multiset union of $S_1$ and $S_2$: $$S_1 \,\&\, S_2 := S_1 \cup S_2$$ This is an extremely cheap operation (even cheaper than normal set union) and is exactly what we need.
Sets should not be confused with disjunctions, since the unification of two disjunctions $D_1, D_2$ is defined in terms of the unification of their elements, a very expensive operation: $$D_1 \,\&\, D_2 := \{ d_1 \,\&\, d_2 \mid d_1 \in D_1 \text{ and } d_2 \in D_2 \}$$ Up to now, we only considered the two cases where the unification method takes either two TFSs or two sets. But what is the result of unifying a TFS $\phi$ with a multiset $S$? We decided to open a third avenue here: this time, the TFS argument acts as a filter on the elements of the multiset, using unifiability. I.e., a unification failure leads to the deletion of the set element: $$\phi \,\&\, S := \{ s \mid s \in S \text{ and } \phi \,\&\, s \neq \bot \}$$ This useful operation has, like 'normal' set unification, a counterpart in Lexical Functional Grammar (Bresnan, 1982). And again, it is relatively cheap, due to the use of the fast unifiability test. Given these additions, unification now becomes a truly polymorphic operation (and is realized this way in our JAVA implementation through method dispatching):

| $\&$ | TFS | set |
|------|-----|-----|
| TFS  | $\land$ | $\in_c$ |
| set  | $\in_c$ | $\cup$ |

Note the subtle differences when using singleton sets. Assuming that $$\phi \,\&\, \psi = \bot$$ we have $$\phi \,\&\, \{\psi\} = \{\phi\} \,\&\, \psi = \emptyset$$ but $$\{\phi\} \,\&\, \{\psi\} = \{\phi, \psi\}$$ The important point is that the unifier in SProUT can be straightforwardly extended towards our special treatment of sets, without giving up any of the good properties of the original algorithm, viz., lazy non-redundant copying and almost-linear run time complexity. And the good properties of the imperfect unifiability test can also be retained for multisets.
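The polymorphic dispatch can be sketched as follows, with multisets modeled as collections.Counter and a deliberately simplified stand-in for TFS unification (flat feature sets; real TFS unification is much more involved):

```python
# Sketch of the polymorphic '&': TFS & TFS is ordinary unification,
# set & set is multiset union, and TFS & set filters the set by
# unifiability, as in the dispatch table above.
from collections import Counter

BOTTOM = None

def unify_tfs(a, b):
    """Stand-in for TFS unification on flat feature sets: fail on a clash."""
    fa, fb = dict(a), dict(b)
    for k, v in fa.items():
        if k in fb and fb[k] != v:
            return BOTTOM
    return frozenset({**fa, **fb}.items())

def unify(x, y):
    xs, ys = isinstance(x, Counter), isinstance(y, Counter)
    if not xs and not ys:
        return unify_tfs(x, y)                    # ordinary unification
    if xs and ys:
        return x + y                              # multiset union, cheap
    tfs, s = (y, x) if xs else (x, y)
    return Counter({e: n for e, n in s.items()    # filter by unifiability
                    if unify_tfs(tfs, e) is not BOTTOM})

phi = frozenset([("POS", "noun")])
psi = frozenset([("POS", "det")])                 # phi & psi fails
assert unify(phi, psi) is BOTTOM
assert unify(phi, Counter([psi])) == Counter()    # phi & {psi} = empty set
assert unify(Counter([phi]), Counter([psi])) == Counter([phi, psi])
```

The last two assertions reproduce the singleton-set subtleties noted above: filtering a singleton set by an incompatible TFS yields the empty set, while unifying two singleton sets simply collects both elements.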
We are currently investigating the inclusion of a C(++)-like malloc allocation scheme for TFSs, which should have a drastic effect on the overall run time performance of SProUT. The idea is to avoid the creation of new TFS objects and the invocation of Java's garbage collector, if possible, by having our own TFS memory management. TFSs which are no longer relevant and which should be reused must be freed, so that the allocator can take care of them. If new TFSs are requested by unification, we first reuse the old objects before creating new ones.

5 Testing Negation

Negation in typed feature formalisms has often posed severe problems, either because of a complex or even wrong semantics, or because of a bad run-time performance. Classical negation of conjunctive TFSs leads to the introduction of disjunctions (not that good, as we have seen above), negated coreferences (easy!), and negated atoms/types (cheap!) when assuming negation normal form (Smolka, 1988). In case negated information should be retained and accumulated in the TFS, additional memory must be allocated. Several SProUT users have asked for some form of negation in order to compactly rule out certain input items. Clearly, when considering only types, negation can be laboriously spelled out through the introduction of additional types. For instance, the type not-1st (or 2nd-or-3rd) might be introduced under type person with subtypes 2nd and 3rd, in order to "reformulate" the negated TFS [pers $\sim$ 1st]. We have decided to allow negation only on the LHS of a SProUT rule and only on top of a description. These restrictions have several theoretical and practical advantages. Firstly, restricting ourselves to the LHS means that we only test whether an input item meets the negated information or not. As we will see, this test is extremely cheap and does not require additional memory.
Secondly, since negation inside a TFS is forbidden, no additional memory must be spent to represent that information. As a consequence of the LHS restriction, negated information is clearly no longer accessible after a successful LHS match. In a SProUT rule, positive and negative information can be arbitrarily mixed, and a top-level negation sign can be attached to types, TFSs, and even to coreferences. Now, how does the negation test look? When assuming a classical set-theoretical semantics for TFSs (as we do), the answer is really easy. Assume that the SProUT rule under inspection at the current input position contains the negated feature structure $\sim\phi$ and that the TFS for the input token is $\psi$. Testing for negation means deciding whether $\sim\phi \land \psi = \bot$. A Venn diagram gives us the answer (let $[\cdot]$ denote the denotation of a TFS and $\mathcal{U}$ the universe of all denotations): $\sim\phi \land \psi$ is not satisfiable, i.e., $[\sim\phi \land \psi] = \emptyset$, if and only if $[\psi] \subseteq [\phi]$. This means that
\[ \sim\phi \land \psi = \bot \iff \psi \subseteq \phi \]
I.e., only if $\psi$ is more specific than or equivalent to $\phi$ ($\subseteq$) does rule interpretation have to be canceled. In every other case, there must exist elements in the denotation of $\psi$ which are not in $\phi$, i.e.,
\[ [\psi] \setminus [\phi] \neq \emptyset \]
hence rule interpretation is allowed to continue. Testing for TFS subsumption ($\subseteq$) is again an extremely cheap operation.

6 Weak Unidirectional Coreferences

Several projects using SProUT have revealed a missing descriptive means in the original formalism. This no man's land concerns the accumulation of information under Kleene star/plus or restricted repetition. Consider, for instance, the np rule from section 2.3 and assume that adjectives also have a relation attribute RELN.
Our intention now is to collect all those LHS relations and to have them grouped in a set (section 4) on the RHS of the rule. In order to achieve this, we have implemented the concept of a weak, unidirectional coreference constraint, indicated by the percent sign in the concrete syntax:
\[ \text{np} :> \ldots \ (\text{morph} \ \& \ [\text{POS adj}, \ldots, \text{RELN } \%r \ldots]) \ast \ldots \rightarrow \text{phrase} \ \& \ [\ldots, \text{RELN } \%r \ldots] \]
A usual coreference tag, say \#r (instead of \%r), would enforce that the iterated values under the RELN attribute are all the same. Collecting the LHS information in a set (instead of a list) perfectly matches our treatment of set unification: the result set on the RHS can be further processed and extended by succeeding stages of the shallow processing cascade. Recall that lists do not (easily) allow the accumulation of information (cf. section 4). Implementing such weak coreferences is not difficult and, again, extremely cheap. During Kleene expansion and restricted repetition, the rule interpreter introduces, for each successful TFS instantiation and each occurrence of \%r, a fresh variable which binds the value under the corresponding feature (in our case, RELN). Consider, for instance, that the above np rule has matched two adjectives before recognizing the noun. The interpreter thus generates two bindings through the new variables \#r$_1$ and \#r$_2$ and constructs the set $\{$\#r$_1$, \#r$_2\}$ after a successful LHS match, as if the original RHS had been
\[ \text{phrase} \ \& \ [\ldots, \text{RELN } \{\text{\#r}_1, \text{\#r}_2\}] \]

7 Context-Free Cascade Stages

SProUT permits calling additional rules during the course of a single rule interpretation (like a call to a subprocedure in a programming language) through the use of the seek operator.
There is no objection to a rule calling itself, which clearly extends the expressiveness of the formalism, making it context-free (like the related recursive transition networks (Conway, 1963) are). The use of SDL (Krieger, 2003) together with context-free stages even allows SProUT to recognize context-sensitive languages. The following example presents a simple grammar that matches \(n\) occurrences of "a" followed by \(n\) occurrences of "b" and counts \(n\) by representing it as a list of \(n\) bars |. Considering the recognition part of the rule, \(\{a^n b^n \mid n > 0\}\) is, in fact, a context-free language. \[ S :> a \;\; (@seek(S) \;\&\; [\text{COUNT } \#1])? \;\; b \rightarrow [\text{COUNT } \langle "|" \; . \; \#1 \rangle]. \] Note that we represent "a" and "b" as types a and b, whose surface forms are "a" and "b", resp. \[ a := \text{token} \;\&\; [\text{SURFACE } "a"]. \] \[ b := \text{token} \;\&\; [\text{SURFACE } "b"]. \] In some special cases, a sequence of seek calls can generate a rule cycle. Of course, if no input is consumed within a cycle of seek calls, we end up in an infinite loop at runtime. To avoid such a situation, we implemented a special left-recursion check that is performed during the compilation of the grammar and which generates a compile-time error if the grammar contains cycles of seek calls that consume no input. Concerning runtime performance, there are two aspects to be considered when using seek. Firstly, regular languages which are specified with the help of seek can be rewritten using only regular operators. In such cases, a system which provides seek, although a grammar does not use it, is on par with an implementation that does not have the possibility of calling seek at all. For each automaton state, our system is given a disjoint partition of outgoing edges, viz., a set of seek edges and a set of non-seek edges. These sets are computed at compile time, and testing for an empty seek set is negligible.
Secondly, applying seek at runtime forces the interpreter to produce new binding environments. To improve efficiency here, we introduced an optimizing mechanism for seek calls. The compiler tries to replace a seek call with the body of the called rule in case the RHS of that rule is empty (not possible in case of a cycle). There exist several other configurations which are recognized by the compiler and which obviate the generation of new environments. Such optimizations make the compiled finite-state automaton larger, but can lead to a speedup of up to 30%, depending on the grammar.

8 Compile Time Type Check

The basic building blocks of SProUT rules are typed feature structures. A compile time type check has been added to the system, checking appropriateness of features, well-typedness of feature structures, and strong typing of rule definitions. (Carpenter, 1992) introduces formal definitions for appropriateness and well-typedness. Informally, a feature in a TFS is said to be appropriate if the type bearing it, or one of its supertypes, introduces the feature. A SProUT rule meets the appropriateness condition if every feature, relative to the TFS it occurs in, is appropriate. A SProUT rule is well-typed if every feature that occurs is appropriate and has an appropriate value, i.e., a value that is subsumed by the value of the feature of the type that introduces that feature. Finally, we say that a SProUT rule is strongly typed if every feature structure occurring in it and bearing at least one feature also has a type that is more specific than the most general type of the type hierarchy. To sum up, strong typing, appropriateness, and well-typedness conditions impose additional constraints on typed feature structures occurring in rules. These restrictions are defined in and imposed by the TDL type hierarchy associated with the rule grammar.
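To illustrate, the appropriateness condition can be sketched over a toy type hierarchy. The dict-based representation, the type names, and the helper functions below are our own illustrative assumptions, not SProUT's actual implementation:

```python
# Illustrative sketch of a compile-time appropriateness check.
# PARENT and INTRODUCES are hypothetical stand-ins for the TDL type
# hierarchy and its feature-introduction table.

PARENT = {"morph": "sign", "token": "sign", "sign": None}

INTRODUCES = {"sign": {"SURFACE"}, "morph": {"POS", "RELN"}}

def supertypes(t):
    """Yield t and all of its supertypes."""
    while t is not None:
        yield t
        t = PARENT.get(t)

def appropriate_features(t):
    """Features admissible for type t: those introduced by t or by
    one of its supertypes."""
    feats = set()
    for s in supertypes(t):
        feats |= INTRODUCES.get(s, set())
    return feats

def inappropriate(t, feature_names):
    """Features of a TFS of type t that violate appropriateness;
    an empty result means the TFS passes the check."""
    return set(feature_names) - appropriate_features(t)

# a morph TFS may bear POS (introduced) and SURFACE (inherited):
assert inappropriate("morph", ["POS", "SURFACE"]) == set()
# a token TFS bearing POS would be flagged at compile time:
assert inappropriate("token", ["POS"]) == {"POS"}
```

A well-typedness check would additionally compare each feature's value against the prototypical value of the introducing type, along the same lines.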
Practical advantages of these meta-constraints in grammar formalisms (not only in SProUT) are threefold (there are others as well, such as type inference; cf. (Schäfer, 1995)). Debugging, safety, and maintainability. Conceptual or typographical errors in SProUT rules (e.g., in feature names or types) are likely to be detected at compile time, due to the above restrictions. Portability of grammars. Many implemented TFS formalisms require feature structures to meet some of the above conditions. Grammars written for a less restricted formalism may not work on systems requiring appropriateness without major changes. Efficiency. Generally, strong typing and the restriction to a fixed set of features can be exploited for compact representations. We give a small example where type checking at compile time leads to efficiency gains at runtime by revealing impossible unifications. Given the following SProUT rules \[ S :> \ldots \rightarrow z \;\&\; [\ldots]. \] \[ U :> @seek(S) \;\&\; x \;\&\; [\ldots] \rightarrow \ldots. \] \[ V :> @seek(S) \;\&\; y \;\&\; [\ldots] \rightarrow \ldots. \] and assume that \(z \land x \neq \bot\), but \(z \land y = \bot\). Compile time type checking then uncovers that the LHS of rule \(V\) is inconsistent under all possible interpretations. Related to this technique is rule filtering in deep parsing (Kiefer et al., 1999) and partial evaluation, known from logic programming. For appropriateness checking, the system builds up a table of features that are admissible for a type when reading in the type definitions. Well-typedness is checked on the basis of prototypical feature structures that are associated with each type defined in the type hierarchy. Checking well-typedness, appropriateness, and strong typing is achieved by recursively walking through a SProUT rule (represented in an intermediate XML expression which in turn has been generated on the basis of a JavaCC parser for XTDL), checking types and features at every feature structure node.
Error messages with detailed reasons and links to character positions in XTDL source files are generated. In addition to the type check, a unification of incompatible types (conjunction of types with the \& operator) in type expressions is signaled, and an informational message is issued when two types in a conjunctive type expression have a common subtype in the type hierarchy. Furthermore, the seek operator undergoes a special handling: for appropriateness and well-typedness checking, the output type and feature structure of the called SProUT rule are computed and checked together with the feature structure (if present) of the calling part.

9 Transition Sorting

Since XTDL grammars are compiled into finite-state devices whose edges are labeled with TFSs, the standard finite-state optimization techniques cannot be exploited directly. The application of conventional determinization and minimization reduces neither the size nor the degree of nondeterminism of the finite-state network significantly. Instead of applying these optimizations to non-atomic TFS-labeled edges, we have introduced a technique for ordering the outgoing edges of an automaton state, which resembles topological sorting of acyclic graphs. To be more precise, we sort all outgoing transitions of a given state via the computation of a transition hierarchy under TFS subsumption. Obviously, such an ordering can be computed offline, since edge labels do not change at runtime. In the process of traversing an extended finite-state grammar, these transition hierarchies are utilized for inspecting the outgoing transitions from a given state, starting with the least specific transition(s) first (remember, TFS subsumption induces only a partial order), and moving downwards in the hierarchy, if necessary.
The important point now is that if a less specific TFS does not match, the more specific ones will not match either, hence the corresponding edges need not be inspected (the fast unifiability test is employed here as well). In this manner, the number of time-consuming unifications can potentially be reduced. Our initial experiments reveal that this data structure comes in handy in particular for the initial state and its close neighbors (the initial state in one of our test grammars had about 700 outgoing transitions). However, since most of the states have only two outgoing edges on average, applying transition sorting to all states is not a good idea in terms of efficiency. Therefore, a threshold on the number of outgoing arcs is used in order to select the states for which transition sorting is applied. Due to the fact that transition hierarchies created solely from the arcs in the grammar exhibit a somewhat flat character (average depth: 2–3), we provide a further option for deepening and fine-tuning them through a semi-automatic introduction of artificial edges. Firstly, for each major type which occurs in the grammar, e.g., morph, token, and gazetteer, nodes in the transition hierarchy are introduced. Secondly, for all appropriate features \( f \) of a given major type \( t \) and all feature-value pairs \([f \ v_1], \ldots, [f \ v_k]\) found in the grammar, we introduce additional nodes in the transition hierarchy, representing \( t \;\&\; \sim [f \ v_1, \ldots, v_k] \), i.e., TFSs whose \( f \) value is different from \( v_1, \ldots, v_k \). Finally, for all TFSs \( t \;\&\; [f \ v_i] \), we compute a separate table of links to the corresponding least specific nodes. These rough extensions allow for an efficient traversal of the hierarchy while processing input TFSs like, for instance, \( t' \;\&\; [\ldots, f \ v] \) \( (t' \sqsubseteq t) \).
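The pruning idea behind the transition hierarchy can be sketched as follows. Dicts of feature-value pairs stand in for TFS edge labels; the data layout and names are our illustrative assumptions, not SProUT's data structures:

```python
# Sketch of subsumption-based transition pruning. A label A subsumes
# a label B iff all of A's feature-value pairs also occur in B; a
# label is unifiable with an input token iff they do not disagree on
# any shared feature.

def subsumes(general, specific):
    return all(specific.get(f) == v for f, v in general.items())

def unifiable(label, token):
    return all(token.get(f) in (None, v) for f, v in label.items())

def match_transitions(hierarchy, roots, token):
    """hierarchy maps an edge id to (label, child edge ids); roots
    are the least specific edges. When a label fails the cheap
    unifiability test, the whole subtree of more specific labels
    below it is skipped without any further unifications."""
    matched, stack = [], list(roots)
    while stack:
        edge = stack.pop()
        label, children = hierarchy[edge]
        if not unifiable(label, token):
            continue          # prune: more specific labels cannot match
        matched.append(edge)
        stack.extend(children)
    return matched

hierarchy = {
    "t0": ({"POS": "adj"}, ["t1"]),               # least specific
    "t1": ({"POS": "adj", "DEGREE": "pos"}, []),  # below t0
    "t2": ({"POS": "noun"}, []),
}
assert subsumes(hierarchy["t0"][0], hierarchy["t1"][0])
token = {"POS": "noun", "SURFACE": "Haus"}
# t0 fails, so t1 is never inspected; only t2 matches:
assert match_transitions(hierarchy, ["t0", "t2"], token) == ["t2"]
```

For a state with hundreds of outgoing edges, this turns one unification attempt per edge into one attempt per visited hierarchy node.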
Transition sorting, as briefly introduced here, proves to speed up the grammar interpreter by a factor of 3–4. A companion paper (Krieger and Piskorski, 2004) gives a deeper insight into the application of this and related techniques.

10 Output Merging Techniques

Shallow analysis in SProUT (and other IE systems) often yields multiple results for a single recognized chunk, originating from different ambiguity sources. Local ambiguities. The lexicon contains morphosyntactic (e.g., gender, number, person) and semantic (senses) variations which might blow up the search space of the grammar interpreter, resulting in multiple readings. There exist, however, several techniques which help to lower the ambiguity rate by compacting and unifying lexicon entries (see next section). Typed feature structures are a necessary requirement for applying such techniques. Spurious ambiguities. Depending on the form of the grammar, multiple recursive rule calls might lead to attachment ambiguities which, however, produce equivalent RHS structures. In case we are only interested in the output (which usually is the case), we are allowed to remove such duplicate TFSs. Rule ambiguities. We have often encountered rule sets which, for specific input items, produce output structures that are related according to their degree of informativeness. I.e., we have found structures within these results which are more general or more specific than others. In each of the above cases, we locally reduce the number of output TFSs for a fixed input span without giving up any information. This is achieved by virtue of TFS equivalence, TFS subsumption, and TFS unifiability/unification. In SProUT, a user can specify (i) whether the output TFSs are left untouched, (ii) whether duplicate structures should be removed, (iii) whether only the most general/specific structures prevail, or (iv) whether we maximally integrate the output structures through unification. Very often, a single TFS remains at the end.
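A minimal sketch of options (ii) and (iii), using flat dicts as stand-ins for TFSs; the representation and function names are our own, purely illustrative:

```python
# Sketch of output merging: remove duplicate structures, then keep
# only the most specific ones (a structure that is strictly more
# general than a surviving one carries no extra information here).

def subsumes(general, specific):
    """general subsumes specific iff every feature-value pair of
    general also occurs in specific."""
    return all(specific.get(f) == v for f, v in general.items())

def merge_outputs(tfss):
    """Quadratic in the number of output structures, matching the
    worst case discussed in the text."""
    unique = []
    for t in tfss:                      # (ii) drop exact duplicates
        if t not in unique:
            unique.append(t)
    kept = []
    for i, t in enumerate(unique):      # (iii) drop structures that are
        if not any(i != j and t != u and subsumes(t, u)
                   for j, u in enumerate(unique)):
            kept.append(t)              # strictly more general than another
    return kept

outputs = [
    {"TYPE": "phrase"},               # strictly more general: dropped
    {"TYPE": "phrase", "NUM": "sg"},
    {"TYPE": "phrase", "NUM": "sg"},  # duplicate: dropped
]
assert merge_outputs(outputs) == [{"TYPE": "phrase", "NUM": "sg"}]
```

Option (iv) would additionally unify pairwise-unifiable survivors into a single structure.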
Due to the fact that the SProUT interpreter employs a longest-match strategy, further ambiguities are avoided at this stage. Merging is only applied at the very end of a single stage in the shallow cascade, and is thus not very expensive overall. Worst-case running time is a quadratic function in the number of output structures. The TFS operations involved in this merging are cheap, as explained in the previous sections.

11 Compacting Lexical Resources

Morphological resources in SProUT are usually built on top of the full-form lexical databases of MMorph. However, many lexical entries in MMorph possess spurious ambiguities. When integrating MMorph lexicons as they are, a runtime system might have a serious space problem and, in particular, perform redundant unifications. We have developed a method which compacts MMorph resources by replacing several readings with a single compact reading, by deleting redundant readings, and by substituting specialized readings with more general ones, using type generalization and subsumption checking. These techniques go hand in hand with a moderate enlargement of the original type hierarchy (no additional cost: recall that the average-case complexity of type unification can be estimated by a constant-time function) and increase the efficiency of systems using MMorph, since they shrink the size of the lexicon and come up with fewer readings for a morphological form. Clearly, such an approach is interesting not only for MMorph, but also for other lexicons which build on a feature-value representation of lexical information. Entries in MMorph relate word forms to their base forms and their morphosyntactic descriptions (MSDs), which are sets of flat feature-value pairs. Here are two of the 11 MMorph readings of the German word form "evaluierten" (to evaluate, evaluated):

```
Verb[mode=indicative vform=fin tense=imperfect number=plural person=1|3 ...]
Adjective[gender=masc|fem|neutrum number=singular case=nom|gen|dat|acc degree=pos ...]
```

MMorph represents atomic disjunctions by using the vertical bar, e.g., 1|3 (see example). We represent such disjunctions through exhaustive disjunctive types, e.g., 1st_or_3rd, together with proper type definitions, e.g., \[ \text{1st} := \text{1st\_or\_2nd} \;\&\; \text{1st\_or\_3rd} \] These types are automatically generated by our method when reading in an MMorph database. Given a full-form database, we store information for the same word form (example: evaluierten) in an index structure of the following form (POS abbreviates part of speech): \[ \begin{align*} \text{word form} & \rightarrow \text{POS}_1 \rightarrow \text{stem}_{11} \rightarrow \text{set of MSDs} \\ & \hspace{1.2cm} \vdots \\ & \rightarrow \text{POS}_n \rightarrow \text{stem}_{n1} \rightarrow \text{set of MSDs} \end{align*} \] An MSD is encoded as a table of the following form: \[ \begin{align*} \text{feature}_1 & \rightarrow \text{set of appropriate values} \\ & \hspace{0.6cm} \vdots \\ \text{feature}_l & \rightarrow \text{set of appropriate values} \end{align*} \] Given the set of all MSDs \( M \) for a specific word form, the compaction method applies the following operations to \( m_1, m_2 \in M \), until \( M \) remains constant (i.e., until a fixpoint is reached):

- **Equality test**: If \( m_1 = m_2 \), remove \( m_1 \) from \( M \).
- **Subsumption test**: If the value of every feature in \( m_2 \) is a subset of the corresponding value in \( m_1 \), remove \( m_2 \) from \( M \) (\( m_1 \) is more general than \( m_2 \)).
- **Set union**: If \( m_1 \) differs from \( m_2 \) at only one feature \( f \), then remove \( m_1 \) from \( M \) and replace the value of \( f \) in \( m_2 \) by \( v \), where \( v := m_1(f) \cup m_2(f) \).

Only 195 type definitions are produced by the above method for the German lexicon.
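The fixpoint iteration above can be sketched as follows, with MSDs as dicts mapping a feature to a set of disjunctive values; this representation is our illustrative assumption, not MMorph's own format:

```python
# Sketch of the MSD compaction fixpoint: the equality, subsumption,
# and set-union tests are applied to pairs of MSDs until no rule fires.

def compact(msds):
    msds = [dict(m) for m in msds]
    changed = True
    while changed:
        changed = False
        for m1 in msds:
            for m2 in msds:
                if m1 is m2 or m1.keys() != m2.keys():
                    continue
                diff = [f for f in m1 if m1[f] != m2[f]]
                if not diff:                            # equality test
                    msds.remove(m2)
                elif all(m1[f] <= m2[f] for f in m1):   # subsumption test:
                    msds.remove(m1)                     # m2 is more general
                elif len(diff) == 1:                    # set-union test
                    f = diff[0]
                    m2[f] = m1[f] | m2[f]
                    msds.remove(m1)
                else:
                    continue
                changed = True                          # restart the scan
                break                                   # after mutating
            if changed:
                break
    return msds

msds = [
    {"number": {"singular"}, "person": {"1"}},
    {"number": {"plural"},   "person": {"1"}},
]
# the two readings differ only in "number", so they are merged:
assert compact(msds) == [{"number": {"singular", "plural"}, "person": {"1"}}]
```

In the real method, merged value sets are additionally mapped to the automatically generated disjunctive types described above.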
Overall, the average speedup measured for the German named entity grammars in SProUT was about a factor of 3. A thorough investigation of the approach is presented in (Krieger and Xu, 2003). We are currently investigating the impact of packing morphosyntactic information across several feature-value pairs. Our automated compaction method can easily be extended to handle such superfeatures/-values.

**Acknowledgments**

This paper has benefited from discussions with our colleagues Stephan Busemann, Bernd Kiefer, and Hans Uszkoreit. Thanks much! We would also like to thank our reviewers for their awesome assessments. This work has been partially funded by the German BMBF under grant nos. 01 IN A01 (Collate) \& 01 IW C02 (Quetal), and by the EU under grant nos. IST-12179 (Airforce), IST-2000-25045 (Memphis), and IST-2001-37836 (DeepThought).

**References**
This is “Understanding Software: A Primer for Managers”, chapter 9 from the book Getting the Most Out of Information Systems (index.html) (v. 2.0). This book is licensed under a Creative Commons by-nc-sa 3.0 (http://creativecommons.org/licenses/by-nc-sa/3.0/) license. See the license for more details, but that basically means you can share this book as long as you credit the author (but see below), don't make money from it, and do make it available to everyone else under the same terms. This content was accessible as of December 29, 2012, and it was downloaded then by Andy Schmitz (http://lardbucket.org) in an effort to preserve the availability of this book. Normally, the author and publisher would be credited here. However, the publisher has asked for the customary Creative Commons attribution to the original publisher, authors, title, and book URI to be removed. Additionally, per the publisher's request, their name has been removed in some passages. More information is available on this project's attribution page (http://2012books.lardbucket.org/attribution.html?utm_source=header). For more information on the source of this book, or why it is available for free, please see the project's home page (http://2012books.lardbucket.org/). You can browse or download additional books there.

Chapter 9 Understanding Software: A Primer for Managers

9.1 Introduction

LEARNING OBJECTIVES

1. Recognize the importance of software and its implications for the firm and strategic decision making.
2. Understand that software is everywhere; not just in computers, but also cell phones, cars, cameras, and many other technologies.
3. Know what software is and be able to differentiate it from hardware.
4. List the major classifications of software and give examples of each.

We know computing hardware is getting faster and cheaper, creating all sorts of exciting and disruptive opportunities for the savvy manager. But what's really going on inside the box?
It’s software that makes the magic of computing happen. Without software, your PC would be a heap of silicon wrapped in wires encased in plastic and metal. But it’s the instructions—the software code—that enable a computer to do something wonderful, driving the limitless possibilities of information technology. Software is everywhere. An inexpensive cell phone has about one million lines of code, while the average car contains nearly one hundred million. R. Charette, “Why Software Fails,” IEEE Spectrum, September 2005. In this chapter we’ll take a peek inside the chips to understand what software is. A lot of terms are associated with software: operating systems, applications, enterprise software, distributed systems, and more. We’ll define these terms up front, and put them in a managerial context. A follow-up chapter, Chapter 10 "Software in Flux: Partly Cloudy and Sometimes Free", will focus on changes impacting the software business, including open source software, software as a service (SaaS), and cloud computing. These changes are creating an environment radically different from the software industry that existed in prior decades—confronting managers with a whole new set of opportunities and challenges. Managers who understand software can better understand the possibilities and impact of technology. They can make better decisions regarding the strategic value of IT and the potential for technology-driven savings. They can appreciate the challenges, costs, security vulnerabilities, legal and compliance issues, and limitations involved in developing and deploying technology solutions. In the next two chapters we will closely examine the software industry and discuss trends, developments and economics—all of which influence decisions managers make about products to select, firms to partner with, and firms to invest in. 
**What Is Software?**

When we refer to computer hardware (sometimes just hardware), we’re talking about the physical components of information technology—the equipment that you can physically touch, including computers, storage devices, networking equipment, and other peripherals. Software refers to a computer program or collection of programs—sets of instructions that tell the hardware what to do. Software gets your computer to behave like a Web browser or word processor, makes your iPod play music and video, and enables your bank’s ATM to spit out cash. It’s when we start to talk about the categories of software that most people’s eyes glaze over. To most folks, software is a big, incomprehensible alphabet soup of acronyms and geeky phrases: OS, VB, SAP, SQL, to name just a few. Don’t be intimidated. The basics are actually pretty easy to understand. But it’s not soup; it’s more of a layer cake. Think about computer hardware as being at the bottom of the layer cake. The next layer is the operating system, the collection of programs that control the hardware. Windows, Mac OS X, and Linux are operating systems. On top of that layer are applications—these can range from end-user programs like those in Office, to the complex set of programs that manage a business’s inventory, payroll, and accounting. At the top of the cake are users.

---

3. Operating system: the software that controls the computer hardware and establishes standards for developing and executing applications.
4. Applications: desktop applications, enterprise software, utilities, and other programs that perform specific tasks for users and organizations.

---

The flexibility of these layers gives computers the customization options that managers and businesses demand.
Understanding how the layers relate to each other helps you make better decisions on what options are important to your unique business needs, can influence what you buy, and may have implications for everything from competitiveness to cost overruns to security breaches. What follows is a manager’s guide to the main software categories with an emphasis on why each is important. KEY TAKEAWAYS - Software refers to a computer program or collection of programs. It enables computing devices to perform tasks. - You can think of software as being part of a layer cake, with hardware at the bottom; the operating system controlling the hardware and establishing standards, the applications executing one layer up, and the users at the top. - How these layers relate to one another has managerial implications in many areas, including the flexibility in meeting business demand, costs, legal issues, and security. - Software is everywhere—not just in computers, but also in cell phones, cars, cameras, and many other technologies. QUESTIONS AND EXERCISES 1. Explain the difference between hardware and software. 2. Why should a manager care about software and how software works? What critical organizational and competitive factors can software influence? 3. What role has software played in your decision to select certain products? Has this influenced why you favored one product or service over another? 4. Find the Fortune 500 list online. Which firm is the highest ranked software firm? While the Fortune 500 ranks firms according to revenue, what’s this firm’s profitability rank? What does this discrepancy tell you about the economics of software development? Why is the software business so attractive to entrepreneurs? 5. 
Refer to earlier chapters (and particularly to Chapter 2 "Strategy and Technology: Concepts and Frameworks for Understanding What Separates Winners from Losers"): Which resources for competitive advantage might top software firms be able to leverage to ensure their continued dominance? Give examples of firms that have leveraged these assets, and why they are so strong.

9.2 Operating Systems

LEARNING OBJECTIVES

1. Understand what an operating system is and why computing devices require operating systems.
2. Appreciate how embedded systems extend Moore's Law, allowing firms to create "smarter" products and services.

Computing hardware needs to be controlled, and that's the role of the operating system. The operating system (sometimes called the "OS") provides a common set of controls for managing computer hardware, making it easier for users to interact with computers and for programmers to write application software. Just about every computing device has an operating system—desktops and laptops, enterprise-class server computers, your mobile phone. Even specialty devices like iPods, video game consoles, and television set-top boxes run some form of OS. Some firms, like Apple and Nintendo, develop their own proprietary OS for their own hardware. Microsoft sells operating systems to everyone from Dell to the ATM manufacturer Diebold (listen for the familiar Windows error beep on some cash machines). And there are a host of specialty firms, such as Wind River (purchased by Intel), that help firms develop operating systems for all sorts of devices that don't necessarily look like a PC, including cars, video editing systems, and fighter jet control panels. Anyone who has used both a PC and a Mac and has noticed differences across these platforms can get a sense of the breadth of what an operating system does. Even for programs that are otherwise identical for these two systems (like the Firefox browser), subtle differences are visible.
Screen elements like menus, scroll bars, and window borders look different on the Mac than they do in Windows. So do the dialogue boxes that show up when you print or save. These items look and behave differently because each of these functions touches the hardware, and the team that developed Microsoft Windows created a system distinctly different from their Macintosh counterparts at Apple. Graphical **user interface (UI)** items like scroll bars and menus are displayed on the hardware of the computer display. Files are saved to the hardware of a hard drive or other storage device. Most operating systems also include control panels, desktop file management, and other support programs to work directly with hardware elements like storage devices, displays, printers, and networking equipment. The Macintosh Finder and the Windows Explorer are examples of components of these operating systems. The consistent look, feel, and functionality that operating systems enforce across various programs help make it easier for users to learn new software, which reduces training costs and operator error. See Figure 9.2 for similarities and differences.

**Figure 9.2**

*Differences between the Windows and Mac operating systems are evident throughout the user interface, particularly when a program interacts with hardware.*

Operating systems are also designed to give programmers a common set of commands to consistently interact with the hardware. These commands make a programmer’s job easier by reducing program complexity and making it faster to write software while minimizing the possibility of errors in code. Consider what an OS does for the Wii game developer. Nintendo’s Wii OS provides Wii programmers with a set of common standards to use to access the Wiimote, play sounds, draw graphics, save files, and more. Without this, games would be a lot more difficult to write, they’d likely look different, be less reliable, cost more, and there would be fewer titles available.
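The “common set of commands” idea can be sketched in a few lines. In the hypothetical Python snippet below (file name and score value are made up for illustration), the programmer issues one high-level call; the operating system, reached through the language runtime, translates it into the device-specific disk operations, so identical code runs unchanged on Windows, macOS, or Linux:

```python
import os
import sys
import tempfile

def save_game(path, score):
    # One high-level call; the OS handles the hardware-specific details of
    # actually writing bytes to whatever storage device is present.
    with open(path, "w") as f:
        f.write(f"score={score}\n")
    return os.path.getsize(path)

# The same source code works on any platform the OS abstraction covers.
path = os.path.join(tempfile.gettempdir(), "demo_save.txt")
print(sys.platform, save_game(path, 1200))
```

This is exactly the bargain described above: the programmer never touches the disk controller, and the OS vendor is free to change the hardware underneath without breaking the program.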
Similarly, when Apple provided developers with a common set of robust, easy-to-use standards for the iPhone and (via the App Store) an easy way for users to install these applications on top of the iPhone/iPod touch/iPad’s operating system (iOS), software development boomed, and the iPhone became hands-down the most versatile mobile computing device available. The iPhone and iPod touch OS is derived from Apple’s Mac OS X operating system. In Apple’s case some fifty thousand apps became available through the App Store in less than a year. A good OS and software development platform can catalyze network effects (see Chapter 6 "Understanding Network Effects"). While the OS seems geeky, its effective design has very strategic business implications!

**Figure 9.3** Operating System Market Share for Desktop, Server, and Mobile Phones

*Source: HitsLink (desktop, January 2011), IDC (server, Q1 2011), and Canalys.com (mobile, January 2011).*

Firmware and Embedded Systems

Most personal computers have an operating system installed on their hard drives. This allows the OS to be replaced or upgraded easily. But many smaller, special-purpose computing devices have their operating systems installed on nonvolatile memory, often on read-only memory (ROM) chips. Control programs stored on chips are sometimes referred to as **firmware**. The OS in an iPod, mobile phone, or your TV’s set-top box is most likely stored as firmware. Your PC also has a tiny bit of firmware that allows it to do very basic functions like start-up (boot) and begin loading its operating system from disk. Another term you might hear is **embedded systems**. As computing gets cheaper, special-purpose technology is increasingly becoming embedded into all sorts of devices like cars, picture frames, aircraft engines, photocopiers, and heating and air conditioning systems. The software programs that make up embedded systems are often stored as firmware too.
Moore’s Law (see Chapter 5 “Moore’s Law: Fast, Cheap Computing and What It Means for the Manager”) enables embedded systems, and these systems can create real strategic value. The Otis Elevator Company, a division of United Technologies, uses embedded systems in its products to warn its service centers when the firm’s elevators, escalators, and moving walkways need maintenance or repair. This warning provides Otis with several key benefits:

1. Since products automatically contact Otis when they need attention, these systems generate a lucrative service business for the firm and make it more difficult for third parties to offer a competing business servicing Otis products.
2. Products contact service technicians to perform maintenance based on exact needs (e.g., lubricant is low, or a part has been used enough to be replaced) rather than guessed schedules, which makes service more cost-effective, products less likely to break down, and customers happier.
3. Any product failures are immediately detected, with embedded systems typically dispatching technicians before a client’s phone call.
4. The data is fed back to Otis’s R&D group, providing information on reliability and failure so that engineers can use this info to design better products.

---

6. Software stored on nonvolatile memory chips (as opposed to being stored on devices such as hard drives or removable discs). Despite the seemingly permanent nature of firmware, many products allow for firmware to be upgraded online or by connecting to another device.
7. Special-purpose software designed and included inside physical products (often on firmware). Embedded systems help make devices “smarter,” sharing usage information, helping diagnose problems, indicating maintenance schedules, providing alerts, or enabling devices to take orders from other systems.

Collectively, software embedded on tiny chips yields very big benefits, for years helping Otis remain at the top of its industry.
**KEY TAKEAWAYS**

- The operating system (OS) controls a computer’s hardware and provides a common set of commands for writing programs.
- Most computing devices (enterprise-class server computers, PCs, phones, set-top boxes, video games, cars, the Mars Rover) have an operating system.
- Some products use operating systems provided by commercial firms, while others develop their own operating system. Others may leverage open source alternatives (see Chapter 10 "Software in Flux: Partly Cloudy and Sometimes Free").
- Embedded systems are special-purpose computer systems designed to perform one or a few dedicated functions, and are frequently built into conventional products like cars, air conditioners, and elevators.
- Embedded systems can make products and services more efficient, more reliable, more functional, and can enable entire new businesses and create or reinforce resources for competitive advantage.

QUESTIONS AND EXERCISES

1. What does an operating system do? Why do you need an operating system? How do operating systems make a programmer’s job easier? How do operating systems make life easier for end users?
2. How has the market for desktop, server, and mobile operating systems changed in recent years? Do certain products seem to be gaining traction? Why do you think this is the case?
3. What kinds of operating systems are used in the devices that you own? On your personal computer? Your mobile phone? The set-top box on top of your television? Are there other operating systems that you come into contact with? If you can’t tell which operating system is in each of these devices, see if you can search the Internet to find out.
4. For your list in the prior question (and to the extent that you can), diagram the hardware/software “layer cake” for these devices.
5. For this same list, do you think each device’s manufacturer wrote all of the software that you use on these devices? Can you add or modify software to all of these devices? Why or why not?
What would the implications be for cost, security, complexity, reliability, updates and upgrades, and the appeal of each device?
6. Some ATM machines use Windows. Why would an ATM manufacturer choose to build its systems using Windows? Why might it want to avoid this? Are there other non-PC devices you’ve encountered that were running some form of Windows?
7. What are embedded systems? When might firms want to install software on chips instead of on a hard drive?
8. It’s important to understand how technology impacts a firm’s strategy and competitive environment. Consider the description of Otis elevator’s use of embedded systems. Which parts of the value chain does this impact? How? Consider the “five forces”: How does the system impact the firm’s competitive environment? Are these systems a source of competitive advantage? If not, why not? If they are, what kinds of resources for competitive advantage can these kinds of embedded systems create?
9. Can you think of other firms that can or do leverage embedded systems? Provide examples and list the kinds of benefits these might offer firms and consumers.
10. Research the Americans with Disabilities Act of 1990 (or investigate if your nation has a similar law), and the implications of this legislation for software developers and Web site operators. Have firms been successfully sued when their software or Web sites could not be accessed by users with physical challenges? What sorts of issues should developers consider when making their products more accessible? What practices might they avoid?

9.3 Application Software

LEARNING OBJECTIVES

1. Appreciate the difference between desktop and enterprise software.
2. List the categories of enterprise software.
3. Understand what an ERP (enterprise resource planning) software package is.
4. Recognize the relationship of the DBMS (database system) to the other enterprise software systems.
5.
Recognize both the risks and rewards of installing packaged enterprise systems.

Operating systems are designed to create a platform\(^8\) so that programmers can write additional applications, allowing the computer to do even more useful things. While operating systems control the hardware, application software (sometimes referred to as software applications, applications, or even just apps) performs the work that users and firms are directly interested in accomplishing. Think of applications as the place where the user’s or organization’s real work gets done. As we learned in Chapter 6 "Understanding Network Effects", the more application software that is available for a platform (the more games for a video game console, the more apps for your phone), the more valuable it potentially becomes. Desktop software\(^9\) refers to applications installed on a personal computer—your browser, your Office suite (e.g., word processor, spreadsheet, presentation software), photo editors, and computer games are all desktop software. Enterprise software\(^10\) refers to applications that address the needs of multiple, simultaneous users throughout an organization or work group. Most companies run various forms of enterprise software programs to keep track of their inventory, record sales, manage payments to suppliers, cut employee paychecks, and handle other functions. Some firms write their own enterprise software from scratch, but this can be time consuming and costly. Since many firms have similar procedures for accounting, finance, inventory management, and human resource functions, it often makes sense to buy a software package\(^11\) (a software product offered commercially by a third party) to support some of these functions. So-called enterprise resource planning (ERP)\(^12\) software packages serve precisely this purpose.
In the way that Microsoft can sell you a suite of desktop software programs that work together, many companies sell ERP software that coordinates and integrates many of the functions of a business. The leading ERP vendors include the firms SAP and Oracle, although there are many firms that sell ERP software. A company doesn’t have to install all of the modules of an ERP suite, but it might add functions over time—for example, to plug in an accounting program that is able to read data from the firm’s previously installed inventory management system. And although a bit more of a challenge to integrate, a firm can also mix and match components, linking software the firm has written with modules purchased from different enterprise software vendors.

Figure 9.4 ERP in Action

An ERP system with multiple modules installed can touch many functions of the business:

- **Sales**—A sales rep from Vermont-based SnowboardCo. takes an order for five thousand boards from a French sporting goods chain. The system can verify credit history, apply discounts, calculate price (in euros), and print the order in French.
- **Inventory**—While the sales rep is on the phone with his French customer, the system immediately checks product availability, signaling that one thousand boards are ready to be shipped from the firm’s Burlington warehouse; the other four thousand need to be manufactured and can be delivered in two weeks from the firm’s manufacturing facility in Guangzhou.
- **Manufacturing**—When the customer confirms the order, the system notifies the Guangzhou factory to ramp up production for the model ordered.
- **Human Resources**—High demand across this week’s orders triggers a notice to the Guangzhou hiring manager, notifying her that the firm’s products are a hit and that the flood of orders coming in globally means her factory will have to hire five more workers to keep up.
- **Purchasing**—The system keeps track of raw material inventories, too.
New orders trigger an automatic order with SnowboardCo.’s suppliers, so that raw materials are on hand to meet demand. - **Order Tracking**—The French customer can log in to track her SnowboardCo. order. The system shows her other products that are available, using this as an opportunity to cross-sell additional products. - **Decision Support**—Management sees the firm’s European business is booming and plans a marketing blitz for the continent, targeting board models and styles that seem to sell better for the Alps crowd than in the U.S. market. Other categories of enterprise software that managers are likely to encounter include the following: - **customer relationship management (CRM)**\(^{13}\) systems used to support customer-related sales and marketing activities - **supply chain management (SCM)**\(^{14}\) systems that can help a firm manage aspects of its value chain, from the flow of raw materials into the firm through delivery of finished products and services at the point-of-consumption - **business intelligence (BI) systems**\(^{15}\), which use data created by other systems to provide reporting and analysis for organizational decision making Major ERP vendors are now providing products that extend into these and other categories of enterprise application software, as well. Most enterprise software works in conjunction with a **database management system (DBMS)**\(^{16}\), sometimes referred to as a “database system.” The database system stores and retrieves the data that an application creates and uses. Think of this as another additional layer in our cake analogy. Although the DBMS is itself considered an application, it’s often useful to think of a firm’s database systems as sitting above the operating system, but under the enterprise applications. Many ERP systems and enterprise software programs are configured to share the same database system so that an organization’s different programs can use a common, shared set of data. 
This system can be hugely valuable for a company’s efficiency. For example, this could allow a separate set of programs that manage an inventory and point-of-sale system to update a single set of data that tells how many products a firm has to sell and how many it has already sold—information that would also be used by the firm’s accounting and finance systems to create reports showing the firm’s sales and profits. Firms that don’t have common database systems with consistent formats across their enterprise often struggle to efficiently manage their value chain. Common procedures and data formats created by packaged ERP systems and other categories of enterprise software also make it easier for firms to use software to coordinate programs between organizations. This coordination can lead to even more value chain efficiencies. Sell a product? Deduct it from your inventory. When inventory levels get too low, have your computer systems send a message to your supplier’s systems so that they can automatically build and ship replacement product to your firm. In many cases these messages are sent without any human interaction, reducing time and errors. And common database systems also facilitate the use of BI systems that provide critical operational and competitive knowledge and empower decision making. For more on CRM and BI systems, and the empowering role of data, see Chapter 11 "The Data Asset: Databases, Business Intelligence, and Competitive Advantage".

---

13. Systems used to support customer-related sales and marketing activities.
14. Systems that can help a firm manage aspects of its value chain, from the flow of raw materials into the firm, through delivery of finished products and services at the point-of-consumption.
15. Systems that use data created by other systems to provide reporting and analysis for organizational decision making.
16. Sometimes referred to as database software; software for creating, maintaining, and manipulating data.
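The shared-database idea can be sketched in a few lines. The hypothetical example below (SQLite via Python’s standard library; the table, SKU, and quantities are made up) shows a point-of-sale routine and a reporting routine acting as two separate “applications” that read and write one common set of data:

```python
import sqlite3

# One shared database that several applications use.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE products (sku TEXT PRIMARY KEY, on_hand INT, sold INT)")
db.execute("INSERT INTO products VALUES ('BOARD-1', 5000, 0)")

def record_sale(conn, sku, qty):
    # "Point-of-sale application": deduct inventory and record the sale.
    conn.execute(
        "UPDATE products SET on_hand = on_hand - ?, sold = sold + ? WHERE sku = ?",
        (qty, qty, sku))
    conn.commit()

def sales_report(conn):
    # "Accounting/BI application": reads the very same rows, so its reports
    # automatically reflect what the point-of-sale side recorded.
    return conn.execute("SELECT sku, on_hand, sold FROM products").fetchall()

record_sale(db, "BOARD-1", 1000)
print(sales_report(db))  # → [('BOARD-1', 4000, 1000)]
```

Because both routines touch the same table, there is no reconciliation step between them, which is the efficiency the passage above describes.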
Figure 9.5 An organization's database management system can be set up to work with several applications both within and outside the firm. The Rewards and Risks of Packaged Enterprise Systems When set up properly, enterprise systems can save millions of dollars and turbocharge organizations. For example, the CIO of office equipment maker Steelcase credited the firm’s ERP with an eighty-million-dollar reduction in operating expenses saved from eliminating redundant processes and making data more usable. The CIO of Colgate Palmolive also praised their ERP, saying, “The day we turned the switch on, we dropped two days out of our order-to-delivery cycle.” A. Robinson and D. Dilts, “OR and ERP,” ORMS Today, June 1999. Packaged enterprise systems can streamline processes, make data more usable, and ease the linking of systems with software across the firm and with key business partners. Plus, the software that makes up these systems is often debugged, tested, and documented with an industrial rigor that may be difficult to match with proprietary software developed in-house. But for all the promise of packaged solutions for standard business functions, enterprise software installations have proven difficult. Standardizing business processes in software that others can buy means that those functions are easy for competitors to match, and the vision of a single monolithic system that delivers up wondrous efficiencies has been difficult for many to achieve. The average large company spends roughly $15 million on ERP software, with some installations running into the hundreds of millions of dollars. C. Rettig, “The Trouble with Enterprise Software,” MIT Sloan Management Review 49, no. 1 (2007): 21–27. And many of these efforts have failed disastrously. FoxMeyer was once a six-billion-dollar drug distributor, but a failed ERP installation led to a series of losses that bankrupted the firm. 
The collapse was so rapid and so complete that just a year after launching the system, the carcass of what remained of the firm was sold to a rival for less than $80 million. Hershey Foods blamed a $466 million revenue shortfall on glitches in the firm’s ERP rollout. Among the problems, the botched implementation prevented the candy maker from getting product to stores during the critical period before Halloween. Nike’s first SCM and ERP implementation was labeled a “disaster”; their systems were blamed for over $100 million in lost sales. C. Koch, “Nike Rebounds: How (and Why) Nike Recovered from Its Supply Chain Disaster,” CIO, June 15, 2004. Even tech firms aren’t immune to software implementation blunders. HP once blamed a $160 million loss on problems with its ERP systems. R. Charette, “Why Software Fails,” IEEE Spectrum, September 2005. Manager beware—there are no silver bullets. For insight on the causes of massive software failures, and methods to improve the likelihood of success, see the Charette article cited above.

KEY TAKEAWAYS

- Application software focuses on the work of a user or an organization. Desktop applications are typically designed for a single user. Enterprise software supports multiple users in an organization or work group.
- Popular categories of enterprise software include ERP (enterprise resource planning), SCM (supply chain management), CRM (customer relationship management), and BI (business intelligence) software, among many others. These systems are used in conjunction with database management systems, programs that help firms organize, store, retrieve, and maintain data.
- ERP and other packaged enterprise systems can be challenging and costly to implement, but can help firms create a standard set of procedures and data that can ultimately lower costs and streamline operations.
- The more application software that is available for a platform, the more valuable that platform becomes.
- The DBMS stores and retrieves the data used by the other enterprise applications.
- Different enterprise systems can be configured to share the same database system in order to share common data.
- Firms that don’t have common database systems with consistent formats across their enterprise often struggle to efficiently manage their value chain, and often lack the flexibility to introduce new ways of doing business. Firms with common database systems and standards often benefit from increased organizational insight and decision-making capabilities.
- Enterprise systems can cost millions of dollars in software, hardware, development, and consulting fees, and many firms have failed when attempting large-scale enterprise system integration. Simply buying a system does not guarantee its effective deployment and use.
- When set up properly, enterprise systems can save millions of dollars and turbocharge organizations by streamlining processes, making data more usable, and easing the linking of systems with software across the firm and with key business partners.

QUESTIONS AND EXERCISES

1. What is the difference between desktop and enterprise software?
2. Who are the two leading ERP vendors?
3. List the functions of a business that might be impacted by an ERP.
4. What do the acronyms ERP, CRM, SCM, and BI stand for? Briefly describe what each of these enterprise systems does.
5. Where in the “layer cake” analogy does the DBMS lie?
6. Name two companies that have realized multimillion-dollar benefits as a result of installing enterprise systems.
7. Name two companies that have suffered multimillion-dollar disasters as a result of failed enterprise system installations.
8. How much does the average large company spend on ERP software?

9.4 Distributed Computing

**LEARNING OBJECTIVES**

1. Understand the concept of distributed computing and its benefits.
2. Understand the client-server model of distributed computing.
3. Know what Web services are and the benefits that Web services bring to firms.
4.
Appreciate the importance of messaging standards and understand how sending messages between machines can speed processes, cut costs, reduce errors, and enable new ways of doing business. When computers in different locations can communicate with one another, this is often referred to as **distributed computing**. Distributed computing can yield enormous efficiencies in speed, error reduction, and cost savings and can create entirely new ways of doing business. Designing systems architecture for distributed systems involves many advanced technical topics. Rather than provide an exhaustive decomposition of distributed computing, the examples that follow are meant to help managers understand the bigger ideas behind some of the terms that they are likely to encounter. Let’s start with the term **server**. This is a tricky one because it’s frequently used in two ways: (1) in a hardware context a server is a computer that has been configured to support requests from other computers (e.g., Dell sells servers) and (2) in a software context a server is a program that fulfills requests (e.g., the Apache open source Web server). Most of the time, server **software** resides on server-class **hardware**, but you can also set up a PC, laptop, or other small computer to run server software, albeit less powerfully. And you can use mainframe or super-computer-class machines as servers, too. The World Wide Web, like many other distributed computing services, is what geeks call a **client-server** system. Client-server refers to two pieces of software, a **client** that makes a request, and a server that receives and attempts to fulfill the request. In our WWW scenario, the client is the browser (e.g., Internet Explorer, Firefox, Safari). When you type a Web site’s address into the location field of your browser, you’re telling the client to “go find the Web server software at the address provided, and tell the server to return the Web site requested.” --- 17. 
A form of computing where systems in different locations communicate and collaborate to complete a task. 18. A program that fulfills the requests of a client. 19. A software program that makes requests of a server program. It is possible to link simple scripting languages to a Web server for performing calculations, accessing databases, or customizing Web sites. But more advanced distributed environments may use a category of software called an application server \(^{20}\). The application server (or app server) houses business logic for a distributed system. Individual Web services \(^{21}\) served up by the app server are programmed to perform different tasks: returning a calculation (“sales tax for your order will be $11.58”), accessing a database program (“here are the results you searched for”), or even making a request to another server in another organization (“Visa, please verify this customer’s credit card number for me”). Figure 9.6 In this multitiered distributed system, client browsers on various machines (desktop, laptop, mobile) access the system through the Web server. The cash register doesn’t use a Web browser, so instead the cash register logic is programmed to directly access the services it needs from the app server. Web services accessed from the app server may be asked to do a variety of functions, including perform calculations, access corporate databases, or even make requests from servers at other firms (for example, to verify a customer’s credit card). Those little chunks of code that are accessed via the application server are sometimes referred to as Web services. The World Wide Web consortium defines Web services as software systems designed to support interoperable machine-to-machine interaction over a network. W3C, “Web Services Architecture,” W3C Working Group Note, February 11, 2004. 
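The client-server pattern and a tiny Web service can be sketched together. The example below is a hypothetical, self-contained Python sketch (the `/tax` endpoint and the $11.58 figure echo the sales-tax example above and are made up): a server program answers requests, and any client that speaks the agreed-upon format, whether browser, cash register, or phone app, gets the same answer back.

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

class TaxService(BaseHTTPRequestHandler):
    # The "server" side: receives a request and attempts to fulfill it.
    def do_GET(self):
        body = json.dumps({"sales_tax": 11.58}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence per-request console logging
        pass

# Start the server on a free local port, in the background.
server = HTTPServer(("127.0.0.1", 0), TaxService)
threading.Thread(target=server.serve_forever, daemon=True).start()

# The "client" side: makes a request and reads the structured response.
with urlopen(f"http://127.0.0.1:{server.server_port}/tax") as resp:
    result = json.loads(resp.read())
server.shutdown()
print(result)  # → {'sales_tax': 11.58}
```

The agreed-upon request path and response format play the role of the API: once published, any program on any machine can call the service the same way.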
And when computers can talk together (instead of people), this often results in fewer errors, time savings, cost reductions, and can even create whole new ways of doing business! Each Web service defines the standard method for other programs to request it to perform a task and defines the kind of response the calling client can expect back. These standards are referred to as application programming interfaces (APIs)\(^{22}\). Look at the advantages that Web services bring a firm like Amazon. Using Web services, the firm can allow the same order entry logic to be used by Web browsers, mobile phone applications, or even by third parties who want to access Amazon product information and place orders with the firm (there’s an incentive to funnel sales to Amazon—the firm will give you a cut of any sales that you send Amazon’s way). Organizations that have created a robust set of Web services around their processes and procedures are said to have a service-oriented architecture (SOA)\(^{23}\). Organizing systems like this, with separate applications in charge of client presentation, business logic, and database, makes systems more flexible. Code can be reused, and each layer can be separately maintained, upgraded, or migrated to new hardware—all with little impact on the others.

---

20. Software that houses and serves business logic for use (and reuse) by multiple applications.
21. Small pieces of code that are accessed via the application server which permit interoperable machine-to-machine interaction over a network.
22. Programming hooks, or guidelines, published by firms that tell other programs how to get a service to perform a task such as send or receive data. For example, Amazon.com provides APIs to let developers write their own applications and Websites that can send the firm orders.
23. A robust set of Web services built around an organization’s processes and procedures.

Web services sound geeky, but here’s a concrete example illustrating their power.
Southwest Airlines had a Web site where customers could book flights, but many customers also wanted to rent a car or book a hotel, too. To keep customers on Southwest.com, the firm and its hotel and rental car partners created a set of Web services and shared the APIs. Now customers visiting Southwest.com can book a hotel stay and rental car on the same page where they make their flight reservation. This process transforms Southwest.com into a full service travel destination and allows the site to compete head-to-head with the likes of Expedia, Travelocity, and Orbitz. J. McCarthy, “The Standards Body Politic,” *InfoWorld*, May 17, 2002. Think about why Web services are important from a strategic perspective. By adding hotel and rental car services, Southwest is now able to eliminate the travel agent, along with any fees they might share with the agent. This shortcut allows the firm to capture more profits or pass on savings to customers, securing its position as the first place customers go for low-cost travel. And perhaps most importantly, Southwest can capture key data from visitor travel searches and bookings (something it likely couldn’t do if customers went to a site like Expedia or Travelocity). Data is a hugely valuable asset, and this kind of customer data can be used by Southwest to send out custom e-mail messages and other marketing campaigns to bring customers back to the airline. As geeky as they might at first seem, Web services can be very strategic! *Figure 9.7* Southwest.com uses Web services to allow car rental and hotel firms to book services through Southwest. This process transforms Southwest.com into a full-service online travel agent. ### Formats to Facilitate Sharing Data Two additional terms you might hear within the context of distributed computing are EDI and XML. **EDI (electronic data interchange)** is a set of standards for exchanging information between computer applications. 
EDI is most often used as a way to send the electronic equivalent of structured documents between different organizations. Using EDI, each element in the electronic document, such as a firm name, address, or customer number, is coded so that it can be recognized by the receiving computer program. Eliminating paper documents makes businesses faster and lowers data entry and error costs. One study showed that firms that used EDI decreased their error rates by 82 percent, and their cost of producing each document fell by up to 96 percent. “Petroleum Industry Continues to Explore EDI,” *National Petroleum News* 90, no. 12 (November 1998). --- 24. A set of standards for exchanging messages containing formatted data between computer applications. EDI is a very old standard, with roots stretching back to the 1948 Berlin Air Lift. While still in use, a new generation of more-flexible technologies for specifying data standards are taking its place. Chief among the technologies replacing EDI is extensible markup language (XML)\textsuperscript{25}. XML has lots of uses, but in the context of distributed systems, it allows software developers to create a set of standards for common data elements that, like EDI messages, can be sent between different kinds of computers, different applications, and different organizations. XML is often thought of as easier to code than EDI, and it's more robust because it can be extended—organizations can create formats to represent any kind of data (e.g., a common part number, photos, the complaint field collected by customer support personnel). In fact, most messages sent between Web services are coded in XML (the technology is a key enabler in mash-ups, discussed in Chapter 7 "Social Media, Peer Production, and Web 2.0"). Many computer programs also use XML as a way to export and import data in a common format that can be used regardless of the kind of computer hardware, operating system, or application program used. 
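The point about coded, self-describing fields can be made concrete with a short sketch. Below is a hypothetical XML order message (the tag names, customer, and quantities are invented for illustration, echoing the SnowboardCo. example) parsed with Python’s standard library; because each element is labeled, any receiving program can pick out the fields it needs:

```python
import xml.etree.ElementTree as ET

# A hypothetical order format two organizations might agree on. The tags
# identify each field, so no human has to interpret the document.
message = """
<order>
  <customer id="FR-1042">Alpine Sports</customer>
  <item sku="BOARD-1" quantity="5000"/>
  <currency>EUR</currency>
</order>
"""

order = ET.fromstring(message)
customer = order.find("customer").text
qty = int(order.find("item").get("quantity"))
print(customer, qty)  # → Alpine Sports 5000
```

The same extensibility the text describes is visible here: adding a new element, say a photo reference or a support-complaint field, does not break programs that ignore tags they do not recognize.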
And if you design Web sites, you might encounter XML as part of the coding behind the cascading style sheets (CSS) that help maintain a consistent look and feel to the various Web pages in a given Web site.

25. A tagging language that can be used to identify data fields made available for use by other applications. Most APIs and Web services send messages where the data exchanged is wrapped in identifying XML tags.

Rearden Commerce: A Business Built on Web Services

Web services, APIs, and open standards not only transform businesses, they can create entire new firms that change how we get things done. For a look at the mashed-up, integrated, hyperautomated possibilities that Web services make possible, check out Rearden Commerce, a Foster City, California, firm that is using this technology to become what AMR’s Chief Research Officer referred to as “Travelocity on Steroids.” Using Rearden, firms can offer their busy employees a sort of Web-based concierge/personal assistant. Rearden offers firms a one-stop shop where employees can not only make the flight, car, and hotel bookings they might do from a travel agent, they can also book dinner reservations, sports and theatre tickets, and arrange for business services like conference calls and package shipping. Rearden doesn’t supply the goods and services it sells. Instead it acts as the middleman between transactions. A set of open APIs to its Web services allows Rearden’s one hundred and sixty thousand suppliers to send product and service data to Rearden, and to receive booking and sales data from the site.
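A supplier-facing Web service contract of this kind might look something like the sketch below. The endpoint name, field names, and validation rule are all invented for illustration; Rearden's actual API is not described in this text.

```python
import json

# Hypothetical sketch of a supplier-facing Web service contract. The field
# names and the "/api/listings" endpoint are invented for illustration.
REQUIRED_FIELDS = {"supplier_id", "service_type", "price_usd"}

def submit_listing(payload: dict) -> str:
    """Validate a supplier's listing and serialize it as the JSON body a
    real client would POST to the (hypothetical) /api/listings endpoint."""
    missing = REQUIRED_FIELDS - payload.keys()
    if missing:
        raise ValueError(f"missing fields: {sorted(missing)}")
    return json.dumps(payload, sort_keys=True)

# A car-and-driver supplier describes its offering in the agreed format:
body = submit_listing({
    "supplier_id": "acme-limo-042",
    "service_type": "car_and_driver",
    "price_usd": 180.0,
})
print(body)
```

The point of publishing such a contract is that thousands of independent suppliers can integrate without any coordination beyond agreeing on the message format.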
In this ultimate business mash-up, a mobile Rearden user could use her phone to book a flight into a client city, see restaurants within a certain distance of her client’s office, have these locations pop up on a Google map, have listings accompanied by Zagat ratings and cuisine type, book restaurant reservations through Open Table, arrange for a car and driver to meet her at her client’s office at a specific time, and sync up these reservations with her firm’s corporate calendaring systems. If something unexpected comes up, like a flight delay, Rearden will be sure she gets the message. The system keeps track of any cancelled reservation credits and also records travel reward programs, so Rearden can be used to spend those points in the future. In order to pull off this effort, the Rearden maestros are skilled not only in technical orchestration, but also in coordinating customer and supplier requirements. As TechCrunch’s Erick Schonfeld put it, “The hard part is not only the technology—which is all about integrating an unruly mess of APIs and Web services—[it also involves] signing commercially binding service level agreements with [now over 160,000] merchants across the world.” For its efforts, Rearden gets to keep between 6 percent and 25 percent of every nontravel dollar spent, depending on the service. The firm also makes money from subscriptions and distribution deals. The firm’s first customers were large businesses and included ConAgra, GlaxoSmithKline, and Motorola. Rearden’s customers can configure the system around special parameters unique to each firm: to favor a specific airline, benefit from a corporate discount, or to restrict some offerings for approved employees only. Rearden investors include JPMorgan Chase and American Express—both of whom offer Rearden to their employees and customers.
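As a back-of-the-envelope sketch of that revenue model: the per-service rates below are invented placeholders inside the 6-to-25-percent range the text reports, not actual figures.

```python
# Toy model of an intermediary's cut, using the 6%-25% nontravel commission
# range reported above. Which service earns which rate is invented here.
COMMISSION_RATES = {
    "dinner_reservation": 0.06,   # low end of the reported range
    "conference_call": 0.15,
    "package_shipping": 0.25,     # high end of the reported range
}

def intermediary_take(service: str, amount_usd: float) -> float:
    """Return the middleman's revenue on a single nontravel transaction."""
    return round(amount_usd * COMMISSION_RATES[service], 2)

print(intermediary_take("package_shipping", 200.0))   # 25% of $200 -> 50.0
```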
Even before the consumer version was available, Rearden had over four thousand corporate customers and two million total users, a user base larger than that of better-known firms like Salesforce.com. M. Arrington, “Rearden Commerce: Time for the Adults to Come In and Clean House,” TechCrunch, April 5, 2007; E. Schonfeld, “At Rearden Commerce, Addiction Is Job One,” TechCrunch, May 6, 2008; and M. Arrington, “2008: Rearden Commerce Has a Heck of a Year,” TechCrunch, January 13, 2009. For all the pizzazz, we recognize that, as a start-up, the future of Rearden Commerce remains uncertain; however, the firm’s effective use of Web services illustrates the business possibilities as technologies allow firms to connect with greater ease and efficiency. Connectivity has made our systems more productive and enables entire new strategies and business models. But these wonderful benefits come at the price of increased risk. When systems are more interconnected, opportunities for infiltration and abuse also increase. Think of it this way—each “connection” opportunity is like adding another door to a building. The more doors that have to be defended, the more difficult security becomes. It should be no surprise that the rise of the Internet and distributed computing has led to an explosion in security losses by organizations worldwide.

KEY TAKEAWAYS

• Client-server computing is a method of distributed computing where one program (a client) makes a request to be fulfilled by another program (a server).
• Server is a tricky term and is sometimes used to refer to hardware. While server-class hardware refers to more powerful computers designed to support multiple users, just about any PC or notebook can be configured to run server software.
• Web servers serve up Web sites and can perform some scripting.
• Most firms serve complex business logic from an application server.
• Isolating a system’s logic in three or more layers (presentation or user interface, business logic, and database) can allow a firm flexibility in maintenance, reusability, and in handling upgrades.
• Web services allow different applications to communicate with one another. APIs define the method to call a Web service (e.g., to get it to do something), and the kind of response the calling program can expect back.
• Web services make it easier to link applications as distributed systems, and can make it easier for firms to link their systems across organizations.
• Popular messaging standards include EDI (older) and XML. Sending messages between machines instead of physical documents can speed processes, drastically cut the cost of transactions, and reduce errors.
• Distributed computing can yield enormous efficiencies in speed, error reduction, and cost savings and can create entirely new ways of doing business.
• When computers can communicate with each other (instead of people), this often results in fewer errors, time savings, cost reductions, and can even create whole new ways of doing business.
• Web services, APIs, and open standards not only transform businesses, they can create entire new firms that change how we get things done.

QUESTIONS AND EXERCISES

1. Differentiate the term “server” used in a hardware context, from “server” used in a software context.
2. Describe the “client-server” model of distributed computing. Which products that you use would you classify as leveraging client-server computing?
3. List the advantages that Web services have brought to Amazon.
4. How has Southwest Airlines utilized Web services to its competitive advantage?
5. What is Rearden Commerce and which technologies does it employ? Describe Rearden Technology’s revenue model. Who were Rearden Technology’s first customers? Who were among their first investors?
6. What are the security risks associated with connectivity, the Internet, and distributed processing?

LEARNING OBJECTIVES

1.
Understand, at a managerial level, what programming languages are and how software is developed. 2. Recognize that an operating system and microprocessor constrain the platform upon which most compiled application software will run. 3. Understand what Java is and why it is significant. 4. Know what scripting languages are. So you’ve got a great idea that you want to express in software—how do you go about creating a program? Programmers write software in a **programming language**. While each language has its strengths and weaknesses, most commercial software is written in C++ (pronounced “see plus plus”) or C# (pronounced “see sharp”). Visual Basic (from Microsoft) and Java (from Sun) are also among the more popular of the dozens of programming languages available. Web developers may favor specialty languages like Ruby and Python, while languages like SQL are used in databases. Most professional programmers use an **integrated development environment (IDE)** to write their code. The IDE includes a text editor, a debugger for sleuthing out errors, and other useful programming tools. The most popular IDE for Windows is Visual Studio, while Apple offers the Xcode IDE. Most IDEs can support several different programming languages. The IDE will also **compile** a programmer’s code, turning the higher-level lines of instructions that are readable by humans into lower-level instructions expressed as the patterns of ones and zeros that are readable by a computer’s microprocessor. Look at the side of a box of commercial software and you’re likely to see system requirements that specify the operating system and processor that the software is designed for (e.g., “this software works on computers with Windows 7 and Intel-compatible processors”). Wouldn’t it be great if software could be written once and run everywhere? That’s the idea behind Java—a programming language developed by Sun Microsystems. 
Java programmers don’t write code with specific operating system commands (say for Windows, Mac OS X, or Linux); instead, they use special Java commands to control their user interface or interact with the display and other hardware. Java programs can run on any computer that has a Java Virtual Machine (JVM), a software layer that interprets Java code so that it can be understood by the operating system and processor of a given computer. Java’s platform independence—the ability for developers to “write once, run everywhere”—is its biggest selling point. Many Web sites execute Java applets to run the animation you might see in advertisements or games. Java has also been deployed on over six billion mobile phones worldwide, and is popular among enterprise programmers who want to be sure their programs can scale from smaller hardware up to high-end supercomputers. As long as the machine receiving the Java code has a JVM, the Java application should run. However, Java has not been popular for desktop applications. Since Java isn’t optimized to take advantage of interface elements specific to the Mac or Windows, most Java desktop applications look clunky and unnatural. Java code that runs through the JVM interpreter is also slower than code compiled for the native OS and processor that make up a platform. Some offerings have attempted to overcome the speed issues associated with interpreting Java code. Just-in-time compilation stores code in native processor-executable form after each segment is initially interpreted, further helping to speed execution. Other environments allow for Java to be compiled ahead of time so that it can be directly executed by a microprocessor. However, this process eliminates code portability—Java’s key selling point. And developers preparing their code for the JVM actually precompile code into something called Java bytecode, a format that’s less human friendly but more quickly interpreted by JVM software.
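The compile-then-interpret pipeline described for Java has a close analogue in Python, whose runtime also compiles source text into bytecode for a virtual machine to interpret. The sketch below uses Python to illustrate the general idea (it shows Python's bytecode, not Java's).

```python
# Python, like Java, first compiles source text into an intermediate bytecode,
# which a virtual machine then interprets -- the same two-step pipeline the
# text describes for the JVM.
import dis

source = "total = price * quantity"
code = compile(source, "<example>", "exec")   # human-readable text -> bytecode

dis.dis(code)                  # list the low-level instructions the VM runs
ns = {"price": 3, "quantity": 4}
exec(code, ns)                 # the VM interprets the bytecode
print(ns["total"])             # -> 12
```

The `code` object is the portable intermediate form: any machine with a compatible interpreter can run it, which is exactly the trade Java makes with its bytecode and JVM.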
Scripting languages are the final category of programming tool that we’ll cover. Scripting languages typically execute within an application. Microsoft offers a scripting language called VB Script (a derivative of Visual Basic) to automate functions in Office. And most browsers and Web servers support JavaScript, a language that helps make the Web more interactive (despite its name, JavaScript is unrelated to Java). Scripting languages are interpreted within their applications, rather than compiled to run directly by a microprocessor. This makes them slower than the compiled programs that make up most commercial software. But scripting languages are usually easy to use, and are often used by both professional programmers and power users.

### KEY TAKEAWAYS

- Programs are often written in a tool called an IDE, an application that includes an editor (a sort of programmer’s word processor), debugger, and compiler, among other tools.
- Compiling takes code from the high-level language that humans can understand and converts it into the sets of ones and zeros in patterns representing instructions that microprocessors understand.
- Popular programming languages include C++, C#, Visual Basic, and Java.
- Most software is written for a platform—a combination of an operating system and microprocessor.
- Java is designed to be platform independent. Computers running Java have a separate layer called a Java Virtual Machine that translates (interprets) Java code so that it can be executed on an operating system/processor combination. In theory, Java is “write once, run everywhere,” as opposed to conventional applications that are written for an operating system and compiled for an OS/processor combination.
- Java is popular on mobile phones, enterprise computing, and to make Web sites more interactive.
Java has never been a successful replacement for desktop applications, largely because user interface differences among the various operating systems are too great to be easily standardized.
- Scripting languages are interpreted languages, such as VB Script or JavaScript. Many scripting languages execute within an application (like the Office programs, a Web browser, or to support the functions of a Web server). They are usually easier to program, but are less powerful and execute more slowly than compiled languages.

### QUESTIONS AND EXERCISES

1. List popular programming languages.
2. What’s an IDE? Why do programmers use IDEs? Name IDEs popular for Windows and Mac users.
3. What is the difference between a compiled programming language and an interpreted programming language?
4. Name one advantage and one disadvantage of scripting languages.
5. In addition to computers, on what other technology has Java been deployed? Why do you suppose Java is particularly attractive for these kinds of applications?
6. What’s a JVM? Why do you need it?
7. What if a programmer wrote perfect Java code, but there was a bug on the JVM installed on a given computer? What might happen?
8. Why would developers choose to write applications in Java? Why might they skip Java and choose another programming language?
9. Why isn’t Java popular for desktop applications?
10. Go to http://www.java.com. Click on “Do I have Java?” Is Java running on your computer? Which version?

9.6 Understanding Technology beyond the Price Tag: Total Cost of Ownership (TCO) and the Cost of Tech Failure

**LEARNING OBJECTIVES**

1. List the different cost categories that comprise total cost of ownership.
2.
Understand that once a system is implemented, the costs of maintaining and supporting the system continue.
3. List the reasons that technology development projects fail and the measures that can be taken to increase the probability of success.

Managers should recognize that there are a whole host of costs that are associated with creating and supporting an organization’s information systems. Of course, there are programming costs for custom software as well as purchase, configuration, and licensing costs for packaged software, but there’s much, much more. There are costs associated with design and documentation (both for programmers and for users). There are also testing costs. New programs should be tested thoroughly across the various types of hardware the firm uses, and in conjunction with existing software and systems, before being deployed throughout the organization. Any errors that aren’t caught can slow down a business or lead to costly mistakes that could ripple throughout an organization and its partners. Studies have shown that errors not caught before deployment could be one hundred times more costly to correct than if they were detected and corrected beforehand. R. Charette, “Why Software Fails,” *IEEE Spectrum*, September 2005.

Once a system is “turned on,” the work doesn’t end there. Firms need to constantly engage in a host of activities to support the system that may also include the following:

- providing training and end user support
- collecting and relaying comments for system improvements
- auditing systems to ensure compliance (i.e., that the system operates within the firm’s legal constraints and industry obligations)
- providing regular backup of critical data
- planning for redundancy and disaster recovery in case of an outage
- vigilantly managing the moving target of computer security issues

---

32.
Ensuring that an organization’s systems operate within required legal constraints, and industry and organizational obligations With so much to do, it’s no wonder that firms spend 70 to 80 percent of their information systems (IS) budgets just to keep their systems running. C. Rettig, “The Trouble with Enterprise Software,” *MIT Sloan Management Review* 49, no. 1 (2007): 21–27. The price tag and complexity of these tasks can push some managers to think of technology as being a cost sink rather than a strategic resource. These tasks are often collectively referred to as the **total cost of ownership (TCO)** of an information system. Understanding TCO is critical when making technology investment decisions. TCO is also a major driving force behind the massive tech industry changes discussed in Chapter 10 "Software in Flux: Partly Cloudy and Sometimes Free". ### Why Do Technology Projects Fail? Even though information systems represent the largest portion of capital spending at most firms, an astonishing one in three technology development projects fail to be successfully deployed. L. Dignan, “Survey: One in 3 IT Projects Fail; Management OK with It,” *ZDNet*, December 11, 2007. Imagine if a firm lost its investment in one out of every three land purchases, or when building one in three factories. These statistics are dismal! Writing in *IEEE Spectrum*, risk consultant Robert Charette provides a sobering assessment of the cost of software failures, stating, “The yearly tab for failed and troubled software conservatively runs somewhere from $60 to $70 billion in the United States alone. For that money, you could launch the space shuttle one hundred times, build and deploy the entire 24-satellite Global Positioning System, and develop the Boeing 777 from scratch—and still have a few billion left over.” R. Charette, “Why Software Fails,” *IEEE Spectrum*, September 2005. Why such a bad track record? 
Sometimes technology itself is to blame; other times it’s a failure to test systems adequately, and sometimes it’s a breakdown of process and procedures used to set specifications and manage projects. In one example, a multimillion-dollar loss on the NASA Mars Climate Orbiter was traced back to a laughably simple oversight—Lockheed Martin contractors using English measurements, while the folks at NASA used the metric system. R. Lloyd, “Metric Mishap Caused Loss of NASA Orbiter,” *CNN*, September 20, 1999. Yes, a $125 million taxpayer investment was lost because a bunch of rocket scientists failed to pay attention to third grade math. When it comes to the success or failure of technical projects, the devil really is in the details. Projects rarely fail for just one reason. Project post-mortems often point to a combination of technical, project management, and business decision blunders. The most common factors include the following: List largely based on R. Charette, “Why Software Fails,” *IEEE Spectrum*, September 2005.

---

33. All of the costs associated with the design, development, testing, implementation, documentation, training and maintenance of a software system.

Managers need to understand the complexity involved in their technology investments, and that achieving success rarely lies with the strength of the technology alone. But there is hope. Information systems organizations can work to implement procedures to improve the overall quality of their development practices. Mechanisms for quality improvement include capability maturity model integration (CMMI), which gauges an organization’s process maturity and capability in areas critical to developing and deploying technology projects, and provides a carefully chosen set of best practices and guidelines to assist quality and process improvement. R.
Kay, “QuickStudy: Capability Maturity Model Integration (CMMI),” Computerworld, January 24, 2005; and Carnegie Mellon Software Engineering Institute, Welcome to CMMI, 2009, http://www.sei.cmu.edu/cmmi. Firms are also well served to leverage established project planning and software development methodologies that outline critical businesses processes and stages when executing large-scale software development projects. The idea behind these methodologies is straightforward—why reinvent the wheel when there is an opportunity to learn from and follow blueprints used by those who have executed successful efforts. When methodologies are applied to projects that are framed with clear business goals and business metrics, and that engage committed executive leadership, success rates can improve dramatically. A. Shenhar and D. Dvir, Reinventing Project Management: The Diamond Approach to Successful Growth and Innovation (Boston: Harvard Business School Press, 2007). 34. A process-improvement approach (useful for but not limited to software engineering projects) that can assist in assessing the maturity, quality, and development of certain organizational business processes, and suggest steps for their improvement. While software development methodologies are the topic of more advanced technology courses, the savvy manager knows enough to inquire about the development methodologies and quality programs used to support large scale development projects, and can use these investigations as further input when evaluating whether those overseeing large scale efforts have what it takes to get the job done. **KEY TAKEAWAYS** - The care and feeding of information systems can be complex and expensive. 
The total cost of ownership of systems can include software development and documentation, or the purchase price and ongoing license and support fees, plus configuration, testing, deployment, maintenance, support, training, compliance auditing, security, backup, and provisions for disaster recovery. These costs are collectively referred to as TCO, or a system’s total cost of ownership. - Information systems development projects fail at a startlingly high rate. Failure reasons can stem from any combination of technical, process, and managerial decisions. - IS organizations can leverage software development methodologies to improve their systems development procedures, and firms can strive to improve the overall level of procedures used in the organization through models like CMMI. However, it’s also critical to engage committed executive leadership in projects, and to frame projects using business metrics and outcomes to improve the chance of success. - System errors that aren’t caught before deployment can slow down a business or lead to costly mistakes that could ripple throughout an organization. Studies have shown that errors not caught before deployment could be 100 times more costly to correct than if they were detected and corrected beforehand. - Firms spend 70 to 80 percent of their IS budgets just to keep their systems running. - One in three technology development projects fail to be successfully deployed. - IS organizations can employ project planning and software development methodologies to implement procedures to improve the overall quality of their development practices. # QUESTIONS AND EXERCISES 1. List the types of total ownership costs associated with creating and supporting an organization’s information systems. 2. On average, what percent of firms’ IS budgets is spent to keep their systems running? 3. What are the possible effects of not detecting and fixing major system errors before deployment? 4. 
List some of the reasons for the failure of technology development projects.
5. What is the estimated yearly cost of failed technology development projects?
6. What was the reason attributed to the failure of the NASA Mars Climate Orbiter project?
7. What is capability maturity model integration (CMMI) and how is it used to improve the overall quality of a firm’s development practices?
8. Perform an Internet search for “IBM Rational Portfolio Manager.” How might IBM’s Rational Portfolio Manager software help companies realize more benefit from their IT systems development project expenditures? What competing versions of this product are offered by other organizations?
IBM Research Report

BTRFS: The Linux B-tree Filesystem

Ohad Rodeh, IBM Research Division, Almaden Research Center, 650 Harry Road, San Jose, CA 95120-6099 USA
Josef Bacik, Chris Mason, FusionIO

Abstract

BTRFS is a Linux filesystem, headed towards mainline default status. It is based on copy-on-write, allowing for efficient snapshots and clones. It uses b-trees as its main on-disk data-structure. The design goal is to work well for many use cases and workloads. To this end, much effort has been directed to maintaining even performance as the filesystem ages, rather than trying to support a particular narrow benchmark use case.

A Linux filesystem is installed on smartphones as well as enterprise servers. This entails challenges on many different fronts.

- Scalability: The filesystem must scale in many dimensions: disk space, memory, and CPUs.
- Data integrity: Losing data is not an option, and much effort is expended to safeguard the content. This includes checksums, metadata duplication, and RAID support built into the filesystem.
- Disk diversity: The system should work well with SSDs and hard-disks. It is also expected to be able to use an array of different sized disks; posing challenges to the RAID and striping mechanisms.

This paper describes the core ideas, data-structures, and algorithms of this filesystem. It sheds light on the challenges posed by defragmentation in the presence of snapshots, and the tradeoffs required to maintain even performance in the face of a wide spectrum of workloads.

1 Introduction

BTRFS is an open source filesystem that has seen extensive development since its inception in 2007. It is jointly developed by Fujitsu™, Fusion™, Intel™, Oracle™, Red Hat™, Strato™, SUSE™, and many others.
It is slated to become the next major Linux filesystem. Its main features are:

1. CRCs maintained for all metadata and data
2. Efficient writeable snapshots, clones as first class citizens
3. Multi-device support
4. Online resize and defragmentation
5. Compression
6. Efficient storage for small files
7. SSD optimizations and TRIM support

The design goal is to work well for a wide variety of workloads, and to maintain performance as the filesystem ages. This is in contrast to storage systems aimed at a particular narrow use case. BTRFS is intended to serve as the default Linux filesystem; it is expected to work well on systems as small as a smartphone, and as large as an enterprise production server. As such, it must work well on a wide range of hardware.

The filesystem on-disk layout is a forest of b-trees, with copy-on-write (COW) as the update method. Disk blocks are managed in extents, with checksumming for integrity, and reference counting for space reclamation. BTRFS is unique among filesystems in its use of COW friendly b-trees [14] and reference counting.

Filesystem performance relies on the availability of long contiguous extents. However, as the system ages, space becomes increasingly fragmented, requiring online defragmentation. Due to snapshots, disk extents are potentially pointed to by multiple filesystem volumes. This makes defragmentation challenging because (1) extents can only be moved after all source pointers are updated and (2) file contiguity is desirable for all snapshots.

To make good use of modern CPUs, good concurrency is important. However, with copy-on-write this is difficult, because all updates ripple up to the root of the filesystem. This paper shows how BTRFS addresses these challenges, and achieves good performance. Compared with conventional filesystems that update files in place, the main workload effect is to make writes more sequential, and reads more random.
The approach taken in this paper is to explain the core concepts and intuitions through examples and diagrams. The reader interested in finer grain details can find the filesystem code publicly available from the Linux kernel archives, and low level discussions in the kernel mailing list [1]. This paper is structured as follows: Section 2 describes related filesystems. Section 3 describes basic terminology, presents the b-trees used to hold metadata, and shows the fundamentals of copy-on-write updates. Section 4 is about the use of multiple devices, striping, mirroring, and RAID. Section 5 describes defragmentation, which is important for maintaining even filesystem performance. Section 6 talks about performance, and Section 7 summarizes.

2 Related work

On Linux, there are three popular filesystems: Ext4 [17], XFS [5], and BTRFS [3]. In the class of copy-on-write filesystems, two important contemporary systems are ZFS [18] and WAFL [7, 11]. In what follows, we use the term overwrite based filesystem to refer to systems that update files in place. At the time of writing, this is the prevalent architectural choice.

BTRFS development started in 2007, by C. Mason. He combined ideas from ReiserFS [8] with the COW friendly b-trees suggested by O. Rodeh [14] to create a new Linux filesystem. Today, this project has many contributors, some of them from commercial companies, and it is on its way to becoming the default Linux filesystem. Having started development only in 2007, BTRFS is less mature and stable than the other systems listed here.

The Fourth Extended Filesystem (Ext4) is a mostly backward compatible extension to the previous general purpose Linux filesystem, Ext3. It was created to address filesystem and file size limitations, and to improve performance. Initially, Linux kernel developers improved and modified Ext3 itself; however, in 2006, Ext4 was forked in order to segregate development and changes in an experimental branch. Today, Ext4 is the default Linux filesystem.
As it is an in-place replacement for Ext3, older filesystems can seamlessly be upgraded. Ext4 is an overwrite based filesystem that manages storage in extents. It uses an efficient tree-based index to represent files and directories. A write-ahead journal is used to ensure operation atomicity. Checksumming is performed on the journal, but not on user data, and snapshots are not supported.

XFS is a filesystem originally developed by SGI. Development started in 1993, for the IRIX operating system. In 2000, it was ported to Linux, and made available on GNU/Linux distributions. The design goal of XFS is to achieve high scalability in terms of IO threads, number of disks, and file/filesystem size. It is an overwrite class filesystem that uses b-trees of extents to manage disk space. A journal is used to ensure metadata operation atomicity. Snapshots are not supported; an underlying volume manager is expected to support that operation.

ZFS is a copy-on-write filesystem originally developed by SUN for its Solaris operating system. Development started in 2001, with the goal of replacing UFS, which was reaching its size limitations. ZFS was incorporated into Solaris in 2005. ZFS includes volume-manager functionality, protects data and metadata with checksums, and supports space-efficient snapshots. RAID5/6 is supported with RAID-Z, which has the interesting feature of always writing full stripes. Space is managed with variable sized blocks, which are powers of two; all space for a single file is allocated with one block size. In terms of features, ZFS is generally similar to BTRFS; however, the internal structures are quite different. For example, BTRFS manages space in extents, where ZFS uses blocks. BTRFS uses b-trees, where ZFS uses traditional indirect blocks.

WAFL is the filesystem used in the NetApp™ commercial file server; development started in the early 1990's.
It is a copy-on-write filesystem that is especially suited for NFS [2, 16] and CIFS [9] workloads. It uses NVRAM to store an operation log, and supports recovery up to the last acknowledged operation. This is important for supporting low-latency file write operations; NFS write semantics are that an operation is persistent once the client receives an acknowledgment from the server. WAFL manages space in 4KB blocks, and indexes files using a balanced tree structure. Snapshots are supported, as well as RAID. Free space is managed using a form of bitmaps. WAFL is a mature and feature-rich filesystem.

ReiserFS [8] is a general purpose Linux filesystem which inspired some of the BTRFS architecture and design. It was built by Hans Reiser and a team of engineers at Namesys™. It was the first journaled filesystem to be included in the standard Linux kernel, and it was the default filesystem on many Linux distributions for a number of years. ReiserFS uses a single tree to hold the entire filesystem, instead of separate trees per file and directory. In order to reduce internal fragmentation, tail packing is implemented. The main idea is to pack the tails, the last partial blocks, of multiple files into a single block.

3 Fundamentals

Filesystems support a wide range of operations and functionality. A full description of all the BTRFS options, use cases, and semantics would be prohibitively long. The focus of this work is to explain the core concepts, and we limit ourselves to the more basic filesystem operations: file create/delete/read/write, directory lookup/iteration, snapshots and clones. We also discuss data integrity, crash recovery, and RAID. Our description reflects the filesystem at the time of writing. The following terminology is used throughout:

Page, block: a 4KB contiguous region on disk and in memory. This is the standard Linux page size.

Extent: A contiguous on-disk area. It is page aligned, and its length is a multiple of pages.
Copy-on-write (COW): creating a new version of an extent or a page at a different location. Normally, the data is loaded from disk to memory, modified, and then written elsewhere. The idea is not to update the original location in place, risking a power failure and partial update.

3.1 COW Friendly B-trees

COW friendly b-trees are central to the BTRFS data-structure approach. For completeness, this section provides a recap of how they work. For a full account, the interested reader is referred to [14, 12, 13, 15]. The main idea is to use the standard b+-tree construction [6], but (1) employ a top-down update procedure, (2) remove leaf-chaining, and (3) use lazy reference-counting for space management. For the purposes of this discussion, we use trees with short integer keys, and no actual data items. The b-tree invariant is that a node can maintain 2 to 5 elements before being split or merged. Tree nodes are assumed to take up exactly one page. In the figures, unmodified pages are colored yellow, and COWed pages are colored green.

Figure 1(a) shows an initial tree with two levels. Figure 1(b) shows an insert of a new key, 19, into the right most leaf. A path is traversed down the tree, and all modified pages are written to new locations, without modifying the old pages. Removing a key also uses copy-on-write; remove operations do not modify pages in place. For example, Figure 2 shows how key 6 is removed from a tree. Modifications are written off to the side, creating a new version of the tree.

In order to clone a tree, its root node is copied, and all the child pointers are duplicated. For example, Figure 3 shows a tree $T_p$ that is cloned to tree $T_q$. Tree nodes are denoted by symbols. As modifications are applied to $T_q$, sharing will be lost between the trees, and each tree will have its own view of the data.

Figure 3: Cloning tree \( T_p \). A new root \( Q \) is created, initially pointing to the same blocks as the original root \( P \).
As modifications are applied, the trees will diverge. Since tree nodes are reachable from multiple roots, garbage collection is needed for space reclamation. In practice, filesystems are directed acyclic graphs (DAGs): there are multiple trees with shared nodes, but there are no cycles. Therefore, reference counters (*ref-counts*) can be, and are, used to track how many pointers there are to tree nodes. Once the counter reaches zero, a block can be reused.

In order to keep track of ref-counts, the copy-on-write mechanism is modified. Whenever a node is COWed, the ref-count for the original is decremented, and the ref-counts for the children are incremented. For example, Figure 4 shows the clone example with a ref-count indication. The convention is that pink nodes are unchanged except for their ref-count.

Figure 4: Cloning tree \( T_p \). A new root \( Q \) is created, initially pointing to the same blocks as the original root \( P \). The ref-counts for the immediate children are incremented. The grandchildren remain untouched.

Figure 5 shows an example of inserting a key into leaf \( H \) of tree \( T_q \). The nodes on the path from \( Q \) to \( H \) are \( \{Q, C, H\} \). They are all modified and COWed.

Figure 5: Inserting a key into node $H$ of tree $T_q$. The path from $Q$ to $H$ includes nodes \{Q, C, H\}, these are all COWed. Sharing is broken for nodes $C$ and $H$; the ref-count for $C$ is decremented.

Figure 6 shows an example of a tree delete. The algorithm used is a recursive tree traversal, starting at the root. For each node $N$:

- $\text{ref-count}(N) > 1$: Decrement the ref-count and stop downward traversal. The node is shared with other trees.
- $\text{ref-count}(N) == 1$: It belongs only to $q$. Continue downward traversal and deallocate $N$.

Figure 6: Deleting a tree rooted at node \( Q \). Nodes \{X, Z\}, reachable solely from \( Q \), are deallocated. Nodes \{C, Y\}, reachable also through \( P \), have their ref-count reduced from 2 to 1.
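The recursive delete rule above can be sketched in a few lines of C. The node layout, field names, and the example tree shape are illustrative stand-ins, not the actual BTRFS code; the shape mimics Figure 6, with two shared nodes at ref-count 2.

```c
#include <stddef.h>

#define MAX_KIDS 5

struct node {
    int ref_count;
    int nkids;
    struct node *kids[MAX_KIDS];
    int freed;   /* stand-in for returning the block to the allocator */
};

/* The delete traversal of the text: a shared node (ref-count > 1) just
 * loses one reference and traversal stops; an exclusively owned node
 * (ref-count == 1) is traversed and then deallocated. */
static void tree_delete(struct node *n)
{
    if (n == NULL)
        return;
    if (n->ref_count > 1) {
        n->ref_count--;          /* shared with another tree: stop here */
        return;
    }
    for (int i = 0; i < n->nkids; i++)
        tree_delete(n->kids[i]);
    n->ref_count = 0;
    n->freed = 1;                /* exclusively ours: deallocate */
}

/* A shape like Figure 6: Q owns X and Z outright; C is also referenced
 * by another tree, and Y is referenced by both C and Z (ref-count 2). */
static struct node Y = { .ref_count = 2 };
static struct node C = { .ref_count = 2, .nkids = 1, .kids = { &Y } };
static struct node Z = { .ref_count = 1, .nkids = 1, .kids = { &Y } };
static struct node X = { .ref_count = 1, .nkids = 1, .kids = { &Z } };
static struct node Q = { .ref_count = 1, .nkids = 2, .kids = { &C, &X } };
```

Running `tree_delete(&Q)` on this shape frees Q, X, and Z, and leaves C and Y alive at ref-count 1, matching the outcome described in the Figure 6 caption.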
### 3.2 Filesystem B-tree

The BTRFS b-tree is a generic data-structure that knows only about three types of data structures: keys, items, and block headers. The block header is fixed size and holds fields like checksums, flags, filesystem ids, generation number, etc. A key describes an object address using the structure:

```c
struct key {
    u64 objectid;
    u8  type;
    u64 offset;
};
```

An item is a key with additional offset and size fields:

```c
struct item {
    struct key key;
    u32 offset;
    u32 size;
};
```

Internal tree nodes hold only [key, block-pointer] pairs. Leaf nodes hold arrays of [item, data] pairs. Item data is variable sized. A leaf stores an array of items in the beginning, and a reverse sorted data array at the end. These arrays grow towards each other. For example, Figure 7 shows a leaf with three items \( \{I_0, I_1, I_2\} \) and three corresponding data elements \( \{D_2, D_1, D_0\} \).

<table>
<thead>
<tr>
<th>block header</th>
<th>$I_0$</th>
<th>$I_1$</th>
<th>$I_2$</th>
<th>free space</th>
<th>$D_2$</th>
<th>$D_1$</th>
<th>$D_0$</th>
</tr>
</thead>
</table>

Figure 7: A leaf node with three items. The items are fixed size, but the data elements are variable sized.

Item data is variably sized, and various filesystem data structures are defined as different types of item data. The type field in the key indicates the type of data stored in the item. The filesystem is composed of objects, each of which has an abstract 64-bit object_id. When an object is created, a previously unused object_id is chosen for it. The object_id makes up the most significant bits of the key, allowing all of the items for a given filesystem object to be logically grouped together in the b-tree. The type field describes the kind of data held by an item; an object typically comprises several items. The offset field describes data held in an extent. Figure 8 shows a more detailed schematic of a leaf node.

Figure 8: A detailed look at a generic leaf node holding keys and items.
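The grouping property described above follows from key ordering: keys compare first on objectid, then on type, then on offset, so all items of one filesystem object are adjacent in the b-tree. The comparator below is an illustrative sketch of that ordering, not the kernel's code; the typedefs stand in for the kernel's fixed-width types.

```c
#include <stdint.h>

typedef uint64_t u64;
typedef uint8_t  u8;
typedef uint32_t u32;

struct key {
    u64 objectid;
    u8  type;
    u64 offset;
};

/* Returns <0, 0, or >0, like memcmp. objectid is the most significant
 * part of the key, so an object's items sort next to each other. */
static int key_cmp(const struct key *a, const struct key *b)
{
    if (a->objectid != b->objectid)
        return a->objectid < b->objectid ? -1 : 1;
    if (a->type != b->type)
        return a->type < b->type ? -1 : 1;
    if (a->offset != b->offset)
        return a->offset < b->offset ? -1 : 1;
    return 0;
}
```

With this ordering, the inode item of object 5 (type 1, offset 0) sorts before any of object 5's other items, and everything for object 5 sorts before object 6.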
Inodes are stored in an inode item at offset zero in the key, and have a type value of one. Inode items are always the lowest valued key for a given object, and they store the traditional stat data for files and directories. The inode structure is relatively small, and will not contain embedded file data or extended attribute data. These things are stored in other item types.

Small files that occupy less than one leaf block may be packed into the b-tree inside the extent item. In this case the key offset is the byte offset of the data in the file, and the size field of the item indicates how much data is stored. There may be more than one of these per file.

Larger files are stored in extents. These are contiguous on-disk areas that hold user data without additional headers or formatting. An extent-item records a generation number (explained below) for the extent and a [disk block, disk num blocks] pair to record the area of disk corresponding to the file. The extent record also stores the logical offset into the on-disk extent and the number of blocks it uses. This allows performing a rewrite into the middle of an extent without having to read the old file data first. For example, writing 10MB at offset 10MB into a file backed by extent 0-64MB can cause the file to reference three different extent records: 0-10MB, 10-20MB (the new data), and 20-64MB. Some filesystems use fixed size blocks instead of extents. Using an extent representation is much more space efficient; however, this comes at the cost of more complexity.

A directory holds an array of dir_item elements. A dir_item maps a filename (string) to a 64-bit object_id. The directory also contains two indexes, one used for lookup, the other for iteration. The lookup index is an array of [dir_item key, 64-bit filename hash] pairs; it is used for satisfying path lookups. The iteration index is an array of [dir_item key, inode sequence number] pairs; it is used for bulk directory operations.
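The two directory indexes above can be sketched as follows. The structures and the hash function are illustrative stand-ins (FNV-1a here; the real filesystem uses its own name hash, and stores the precomputed hash in the index rather than rehashing on every lookup):

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

struct dir_entry {
    const char *name;
    uint64_t    objectid;  /* inode object the name maps to */
    uint64_t    seq;       /* insertion order; approximates on-disk inode order */
};

/* Stand-in 64-bit name hash (FNV-1a), playing the role of the
 * filename hash stored in the lookup index. */
static uint64_t name_hash(const char *s)
{
    uint64_t h = 1469598103934665603ULL;
    while (*s) { h ^= (unsigned char)*s++; h *= 1099511628211ULL; }
    return h;
}

/* Lookup index: match on the hash first, then confirm with the full
 * name, since distinct names can collide on the hash. */
static const struct dir_entry *dir_lookup(const struct dir_entry *dir, int n,
                                          const char *name)
{
    uint64_t h = name_hash(name);
    for (int i = 0; i < n; i++)
        if (name_hash(dir[i].name) == h && strcmp(dir[i].name, name) == 0)
            return &dir[i];
    return NULL;
}
```

Iterating the same entries in `seq` order gives the bulk-operation index: entries come back roughly in on-disk inode order, which is what saves seeks during backups and copies.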
The inode sequence number is stored in the directory, and is incremented every time a new file or directory is added. It approximates the on-disk order of the underlying file inodes, and thus saves disk seeks when accessing them. Bulk performance is important for operations like backups, copies, and filesystem validation.

3.3 A Forest

A filesystem is constructed from a forest of trees. A superblock located at a fixed disk location is the anchor. It points to a tree of tree roots, which indexes the b-trees making up the filesystem. The trees are:

**Sub-volumes:** store user visible files and directories. Each sub-volume is implemented by a separate tree. Sub-volumes can be snapshotted and cloned, creating additional b-trees. The roots of all sub-volumes are indexed by the tree of tree roots.

**Extent allocation tree:** tracks allocated extents in extent items, and serves as an on-disk free-space map. All back-references to an extent are recorded in the extent item. This allows moving an extent if needed, or recovering from a damaged disk block. Taken as a whole, back-references multiply the number of filesystem disk pointers by two. For each forward pointer, there is exactly one back-pointer. See more details on this tree below.

**Checksum tree:** holds a checksum item per allocated extent. The item contains a list of checksums per page in the extent.

**Chunk and device trees:** an indirection layer for handling physical devices. Allows mirroring/striping and RAID. Section 4 shows how multiple device support is implemented using these trees.

**Reloc tree:** for special operations involving moving extents. Section 5 describes how the reloc-tree is used for defragmentation.

For example, Figure 9(a) shows a high-level view of the structure of a particular filesystem. The reloc and chunk trees are omitted for simplicity. Figure 9(b) shows the changes that occur after the user wrote to the filesystem.

Figure 9: (a) A filesystem forest.
(b) The changes that occur after modification; modified pages are colored green.

Modifying user-visible files and directories causes page and extent updates. These ripple up the sub-volume tree until its root. Changes also occur to extent allocation, ref-counts, and back-pointers; these ripple through the extent tree. Data and metadata checksums change; these updates modify the checksum tree leaves, causing modifications to ripple up. All these tree modifications are captured at the top most level as a new root in the tree of tree roots.

Modifications are accumulated in memory and, after a timeout or once enough pages have changed, are written in batch to new disk locations, forming a **checkpoint**. The default timeout is 30 seconds. Once the checkpoint has been written, the superblock is modified to point to the new checkpoint; this is the only case where a disk block is modified in place. If a crash occurs, the filesystem recovers by reading the superblock, and following the pointers to the last valid on-disk checkpoint. When a checkpoint is initiated, all dirty memory pages that are part of it are marked immutable. User updates received while the checkpoint is in flight cause immutable pages to be re-COWed. This allows user visible filesystem operations to proceed without damaging checkpoint integrity.

Sub-volume trees can be snapshotted and cloned, and they are therefore ref-counted. All other trees keep metadata per disk range, and they are never snapshotted; reference counting is unnecessary for them.

A filesystem update affects many on-disk structures. For example, a 4KB write into a file changes the file i-node, the file extents, checksums, and back-references. Each of these changes causes an entire path to change in its respective tree. If users performed entirely random updates, this would be very expensive for the filesystem. Fortunately, user behavior normally has a lot of locality.
If a file is updated, it is typically updated with lots of new data; files in the same directory have a high likelihood of co-access. This allows coalescing modified paths in the trees. Nonetheless, worst cases are considered in the filesystem code. The tree structure is organized so that file operations normally modify single paths. Large scale operations are broken into parts, so that checkpoints never grow too large. Finally, special block reservations are used so that a checkpoint will always have a home on disk, guaranteeing forward progress.

Using copy-on-write as the sole update strategy has pros and cons. The upside is that it is simple to guarantee operation atomicity and data-structure integrity. The downside is that performance relies on the ability to maintain large extents of free contiguous disk areas. In addition, random updates to a file tend to fragment it, destroying sequentiality. A good defragmentation algorithm is required; this is described in Section 5.

Checksums are calculated at the point where a block is written to disk. At the end of a checkpoint, all the checksums match, and the checksum at the root block reflects the entire tree. Metadata nodes record the *generation number* when they were created; this is the serial number of their checkpoint. B-tree pointers store the expected target generation number; this allows detection of phantom or misplaced writes on the media. Generation numbers and checksums serve together to verify on-disk block content.

3.4 Extent allocation tree

The extent allocation tree holds extent-items, each describing a particular contiguous on-disk area. There can be many references to an extent, each addressing only part of it. For example, consider file foo that has an on-disk extent 100KB - 128KB. File foo is cloned, creating file bar. Later on, a range of 10KB is overwritten in bar.
This could cause the following situation:

<table>
<thead>
<tr>
<th>File</th>
<th>On disk extents</th>
</tr>
</thead>
<tbody>
<tr>
<td>foo</td>
<td>100-128KB</td>
</tr>
<tr>
<td>bar</td>
<td>100-110KB, 300-310KB, 120-128KB</td>
</tr>
</tbody>
</table>

Figure 10: Foo and its clone bar share parts of the extent 100-128KB. Bar has an extent in the middle that has been overwritten, and is now located much further away on disk.

There are three pointers into extent 100-128KB, covering different parts of it. The extent-item keeps track of all such references, to allow moving the entire extent at a later time. An extent could potentially have a large number of back references, in which case the extent-item does not fit in a single b-tree leaf node. In such cases, the item spills and takes up more than one leaf.

A back reference is logical, not physical. It is constructed from the root_object_id, generation_id, tree level, and lowest object-id in the pointing block. This allows finding the pointer after a lookup traversal starting at the root object-id.

3.5 Fsync

fsync is an operation that flushes all dirty data for a particular file to disk. An important use case is databases that wish to ensure that the database log is on disk prior to committing a transaction. Latency is important; a transaction will not commit until the log is fully on disk. A naive fsync implementation would be to checkpoint the entire filesystem; however, that suffers from high latency. Instead, modified data and metadata related to the particular file are written to a special log-tree. Should the system crash, the log-tree is read as part of the recovery sequence. This ensures that only minimal and relevant modifications are part of the fsync code path.

3.6 Concurrency

Modern systems have multiple CPUs with many cores. Taking advantage of this computing power through parallelism is an important consideration. Old generations are immutable on disk, and their access does not require locking.
In-memory pages that are under modification require protection. Since data is organized in trees, the strategy is to use a read/write locking scheme. Tree traversal is initiated in read mode. When a node that requires update is encountered, the lock is converted to write mode. If a block $B$ requires COW, the traversal is restarted. The new traversal stops at $\text{parent}(B)$, COWs $B$, modifies the parent pointer, and continues down. Tree traversals are top-down: they start at the root and walk down the tree; it is never necessary to walk back up.

4 Multiple Device Support

Linux has device-mapper (DM) subsystems that manage storage devices, for example LVM and mdadm. These are software modules whose primary function is to take raw disks, merge them into a virtually contiguous block-address space, and export that abstraction to higher level kernel layers. They support mirroring, striping, and RAID5/6. However, checksums are not supported, which causes a problem for BTRFS. For example, consider a case where data is stored in RAID-1 form on disk, and each 4KB block has an additional copy. If the filesystem detects a checksum error on one copy, it needs to recover from the other copy. DMs hide that information behind the virtual address space abstraction, and return one of the copies. To circumvent this problem, BTRFS does its own device management. It calculates checksums, stores them in a separate tree, and is then better positioned to recover data when media errors occur.

A machine may be attached to multiple storage devices; BTRFS splits each device into large chunks. The rule of thumb is that a chunk should be about 1% of the device size. At the time of writing, 1GB chunks are used for data, and 256MB chunks are used for metadata. A chunk tree maintains a mapping from logical chunks to physical chunks. A device tree maintains the reverse mapping. The rest of the filesystem sees logical chunks, and all extent references address logical chunks.
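The chunk indirection just described amounts to a simple address translation: split a logical byte address into a chunk index and an offset, then look the chunk up in the mapping. The sketch below is illustrative only; the structure names and the flat-array stand-in for the chunk tree are assumptions, not the actual BTRFS code.

```c
#include <stdint.h>

#define CHUNK_SIZE (1ULL << 30)   /* 1GB data chunks, per the text */

/* One entry per logical chunk: where it currently lives physically.
 * A flat array stands in for the chunk tree here. */
struct chunk_map {
    uint32_t device;      /* which physical disk */
    uint64_t phys_start;  /* byte offset of the chunk on that disk */
};

struct phys_addr {
    uint32_t device;
    uint64_t offset;
};

/* Translate a logical byte address to a physical (device, offset) pair. */
static struct phys_addr resolve(const struct chunk_map *map, uint64_t logical)
{
    uint64_t chunk = logical / CHUNK_SIZE;   /* which logical chunk */
    uint64_t off   = logical % CHUNK_SIZE;   /* offset inside the chunk */
    struct phys_addr p = { map[chunk].device,
                           map[chunk].phys_start + off };
    return p;
}
```

Because every extent reference goes through this table, remapping a chunk to another disk only requires updating one table entry, not every pointer into the chunk.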
This allows moving physical chunks under the covers without the need to backtrace and fix references. The chunk/device trees are small, and can typically be cached in memory. This reduces the performance cost of an added indirection layer. Physical chunks are divided into groups according to the required RAID level of the logical chunk. For mirroring, chunks are divided into pairs. Table 1 presents an example with three disks, and groups of two. For example, logical chunk $L_1$ is made up of physical chunks $C_{11}$ and $C_{21}$. Table 2 shows a case where one disk is larger than the other two. Table 1: To support RAID1 logical chunks, physical chunks are divided into pairs. Here there are three disks, each with two physical chunks, providing three logical chunks. Logical chunk $L_1$ is built out of physical chunks $C_{11}$ and $C_{21}$. <table> <thead> <tr> <th>logical chunks</th> <th>disk 1</th> <th>disk 2</th> <th>disk 3</th> </tr> </thead> <tbody> <tr> <td>$L_1$</td> <td>$C_{11}$</td> <td>$C_{21}$</td> <td></td> </tr> <tr> <td>$L_2$</td> <td></td> <td>$C_{22}$</td> <td>$C_{31}$</td> </tr> <tr> <td>$L_3$</td> <td>$C_{12}$</td> <td></td> <td>$C_{32}$</td> </tr> </tbody> </table> Table 2: One large disk, and two small disks, in a RAID1 configuration. For striping, groups of $n$ chunks are used, where each physical chunk is on a different disk. For example, Table 3 shows stripe width of four ($n = 4$), with four disks, and three logical chunks. <table> <thead> <tr> <th>logical chunks</th> <th>disk 1</th> <th>disk 2</th> <th>disk 3</th> <th>disk 4</th> </tr> </thead> <tbody> <tr> <td>$L_1$</td> <td>$C_{11}$</td> <td>$C_{21}$</td> <td>$C_{31}$</td> <td>$C_{41}$</td> </tr> <tr> <td>$L_2$</td> <td>$C_{12}$</td> <td>$C_{22}$</td> <td>$C_{32}$</td> <td>$C_{42}$</td> </tr> <tr> <td>$L_3$</td> <td>$C_{13}$</td> <td>$C_{23}$</td> <td>$C_{33}$</td> <td>$C_{43}$</td> </tr> </tbody> </table> Table 3: Striping with four disks, stripe width is $n = 4$. 
Three logical chunks are each made up of four physical chunks. At the time of writing, RAID levels 0, 1, and 10 are supported. In addition, there is experimental code by Intel™ that supports RAID5/6. The core idea in higher RAID levels is to use chunk groups with Reed-Solomon [10] parity relationships. For example, Table 4 shows a RAID6 configuration where logical chunks $L_{1,2,3}$ are constructed from doubly protected physical chunks. For example, $L_1$ is constructed from $\{C_{11}, C_{21}, C_{31}, C_{41}\}$. Chunks $\{C_{11}, C_{21}\}$ hold data in the clear, $C_{31} = C_{11} \oplus C_{21}$, and $C_{41} = Q(C_{11}, C_{21})$. The function $Q$ is defined by Reed-Solomon codes such that any double chunk failure combination would be recoverable.

<table>
<thead>
<tr>
<th>logical chunks</th>
<th>$D_1$</th>
<th>$D_2$</th>
<th>$P$</th>
<th>$Q$</th>
</tr>
</thead>
<tbody>
<tr>
<td>$L_1$</td>
<td>$C_{11}$</td>
<td>$C_{21}$</td>
<td>$C_{31}$</td>
<td>$C_{41}$</td>
</tr>
<tr>
<td>$L_2$</td>
<td>$C_{12}$</td>
<td>$C_{22}$</td>
<td>$C_{32}$</td>
<td>$C_{42}$</td>
</tr>
<tr>
<td>$L_3$</td>
<td>$C_{13}$</td>
<td>$C_{23}$</td>
<td>$C_{33}$</td>
<td>$C_{43}$</td>
</tr>
</tbody>
</table>

Table 4: A RAID6 example. There are four disks, $\{D_1, D_2, P, Q\}$. Each logical chunk has a physical chunk on each disk. For example, the raw data for $L_1$ is striped on disks $D_1$ and $D_2$. $C_{31}$ is the parity of $C_{11}$ and $C_{21}$. $C_{41}$ is the calculated $Q$ of chunks $C_{11}$ and $C_{21}$.

Replicating data and storing parity is costly overhead for a storage system. However, it allows recovery from many media error scenarios. The simplest case is RAID1, where each block has a mirror copy. When the filesystem tries to read one copy, and discovers an IO or checksum error, it tries the second copy. If the second copy also has an error, then the data is lost; back references have to be traced up the filesystem tree, and the file has to be marked as damaged. If the second copy is valid, it is returned to the caller. In addition, the first copy can be overwritten with the valid data.
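The simplest of the parity relations above, the XOR parity $P$, can be sketched directly: the same XOR that computes the parity also rebuilds either data chunk from the surviving chunk plus the parity. The second parity, $Q$, needs Reed-Solomon arithmetic over a Galois field and is omitted here; the function below is an illustrative sketch, not the kernel's RAID code.

```c
#include <stddef.h>
#include <stdint.h>

/* Byte-wise XOR of two chunks into an output buffer. Used both to
 * compute parity (out = a ^ b) and to rebuild a lost data chunk
 * (a = b ^ parity), since XOR is its own inverse. */
static void xor_parity(const uint8_t *a, const uint8_t *b,
                       uint8_t *out, size_t n)
{
    for (size_t i = 0; i < n; i++)
        out[i] = a[i] ^ b[i];
}
```

For a group $\{C_{11}, C_{21}, C_{31}\}$, computing `xor_parity(c11, c21, c31, n)` yields the parity chunk; if $C_{11}$ is later lost, `xor_parity(c21, c31, rebuilt, n)` recovers it.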
A proactive approach, where a low intensity scrub operation is continuously run on the data, is also supported.

There is flexibility in the RAID configuration of logical chunks. A single BTRFS storage pool can have various logical chunks at different RAID levels. This decouples the top level logical structure from the low-level reliability and striping mechanisms. This is useful for operations such as:

1. Changing RAID levels on the fly, increasing or decreasing reliability
2. Changing stripe width: more width allows better bandwidth
3. Giving different subvolumes different RAID levels. Perhaps some subvolumes require higher reliability, while others need more performance at the cost of less reliability.

The default behavior is to use RAID1 for filesystem metadata, even if there is only one disk. This gives the filesystem a better chance to recover when there are media failures.

Common operations that occur in the lifetime of a filesystem are device addition and removal. This is supported by a general balancing algorithm that tries to spread allocations evenly on all available disks, even as the device population changes. For example, in Table 5(a) the system has two disks in a RAID1 configuration; each disk holds $\frac{1}{2}$ of the raw data. Then, a new disk is added, see Table 5(b). The goal of the balancing code is to reach the state shown in Table 5(c), where data is spread evenly on all three disks, and each disk holds $\frac{1}{3}$ of the raw data.
<table>
<thead>
<tr>
<th>(a) 2 disks</th>
<th>disk 1</th>
<th>disk 2</th>
</tr>
</thead>
<tbody>
<tr><td>$L_1$</td><td>$C_{11}$</td><td>$C_{21}$</td></tr>
<tr><td>$L_2$</td><td>$C_{12}$</td><td>$C_{22}$</td></tr>
<tr><td>$L_3$</td><td>$C_{13}$</td><td>$C_{23}$</td></tr>
</tbody>
</table>

<table>
<thead>
<tr>
<th>(b) disk added</th>
<th>disk 1</th>
<th>disk 2</th>
<th>disk 3</th>
</tr>
</thead>
<tbody>
<tr><td>$L_1$</td><td>$C_{11}$</td><td>$C_{21}$</td><td></td></tr>
<tr><td>$L_2$</td><td>$C_{12}$</td><td>$C_{22}$</td><td></td></tr>
<tr><td>$L_3$</td><td>$C_{13}$</td><td>$C_{23}$</td><td></td></tr>
</tbody>
</table>

<table>
<thead>
<tr>
<th>(c) rebalance</th>
<th>disk 1</th>
<th>disk 2</th>
<th>disk 3</th>
</tr>
</thead>
<tbody>
<tr><td>$L_1$</td><td>$C_{11}$</td><td>$C_{21}$</td><td></td></tr>
<tr><td>$L_2$</td><td></td><td>$C_{22}$</td><td>$C_{12}$</td></tr>
<tr><td>$L_3$</td><td>$C_{13}$</td><td></td><td>$C_{23}$</td></tr>
</tbody>
</table>

Table 5: Device addition. Initially (a), there are two disks. In state (b), another disk is added; it is initially empty. State (c) shows the goal: data spread evenly on all disks. Here, physical chunks $C_{12}$ and $C_{23}$ were moved to disk 3.

When a device is removed, the situation is reversed. From a $\frac{1}{3}$ ratio (as in Table 5(c)), the system has to move back to a $\frac{1}{2}$ ratio. If there are unused chunks on the remaining disks, then the rebalancer can complete the task autonomously. However, we are not always that fortunate. If data is spread across all chunks, then trying to evict a chunk requires traversal through the filesystem, moving extents, and fixing references. This is similar to defragmentation, which is described in Section 5.

5 Defragmentation

At the time of writing, the defragmentation problem is addressed in two separate ways. In order to defrag a file, it is read, COWed, and written to disk in the next checkpoint. This is likely to make it much more sequential, because the allocator will try to write it out in as few extents as possible. The downside is that sharing with older snapshots is lost. In many cases, this simple algorithm is sufficient.
In some cases, a more sophisticated approach that maintains sharing is needed. When shrinking a filesystem, or evicting data from a disk, a relocator is used. This is an algorithm that scans a chunk and moves the live data off of it, while maintaining sharing. Relocation is a complex procedure; however, a simpler scheme that disregarded sharing could increase space usage at the very point where we are trying to reduce it. The relocator works on a chunk-by-chunk basis. The general scheme is:

1. Move out all live extents (in the chunk)
2. Find all references into the chunk
3. Fix the references while maintaining sharing

The copy-on-write methodology is used throughout; references are never updated in place. Figure 11 shows a simple example. One data extent, colored light blue, needs to be moved.

Figure 11: Do a range lookup in the extent tree, find all the extents located in the chunk. Copy all the live extents to new locations.

In order to speed up some of the reference tracking, we follow back-references to find all upper-level tree blocks that directly or indirectly reference the chunk. The result is stored in a DAG-like data structure called a `backref_cache`, see Figure 12.

Figure 12: Upper level nodes stored in a backref_cache.

A list of sub-volume trees that reference the chunk is calculated from the backref_cache; these trees are subsequently cloned, see Figure 13. This operation has two effects: (1) it freezes in time the state of the sub-volumes; (2) it allows modifications to be made off to the side while the user continues to operate on the original sub-volumes.

Figure 13: Reloc trees. In this example, sub-volume1 is cloned. Changes can be made to the clone.

Next, all the references leading to the chunk are followed, using back-references. COW is used to update them in the reloc trees, see Figure 14. The last step is to merge the reloc trees with the originals. We traverse the trees.
We find sub-trees that are modified in the reloc tree but whose corresponding parts in the fs tree are not modified. These sub-trees in the reloc tree are swapped with their older counterparts from the fs tree. The end result is new fs-trees. Finally, we drop the reloc trees; they are no longer needed. Figure 15 depicts an example.

The new filesystem DAG is now in memory, and has the correct sharing structure. It takes the same amount of space as the original, which means that filesystem space usage stays the same. Writing out the DAG to disk can be done onto contiguous extents, resulting in improved sequentiality. Once that is done, the old chunk can be discarded.

6 Performance

There are no agreed-upon standards for testing filesystem performance. While there are industry benchmarks for the NFS and CIFS protocols, they cover only a small percentage of the workloads seen in the field. At the end of the day, what matters to a user is performance for their particular application. The only realistic way to check which filesystem is the best match for a particular use case is to try several filesystems, and see which one works best. As we cannot cover all use cases, we chose several common benchmarks, to show that BTRFS performs comparably with its contemporaries.

At the time of writing, the major Linux filesystems, aside from BTRFS, are XFS and Ext4. These are significantly more mature systems, and we do not expect to perform orders of magnitude better. Our contribution is a filesystem supporting important new features, such as snapshots and data checksums, while providing reasonable performance under most workloads. Two storage configurations were chosen: a hard disk, and an SSD.

6.1 Hard disk

All of the hard disk tests were run on a single-socket 3.2 GHz quad-core x86 processor with 8 gigabytes of RAM and a single SATA drive with a 6 Gb/s link. The first test was a Linux kernel make, starting from a clean tree of source files.
A block trace was collected, starting with the `make -j8` command. This starts eight parallel threads that perform C compilation and linking with `gcc`. Figure 16 compares throughput, seek count, and IOps between the three filesystems. Ext4 has higher throughput than BTRFS and XFS, averaging a little less than twice as much. All filesystems maintain about the same seeks per second, with BTRFS on average seeking less. The spike at the beginning of the run for BTRFS is likely due to the initial bit of copy-on-writing that bounces between different block groups. Once additional block groups are allocated to deal with the meta-data load, everything smooths out. The IO operations per second are a little closer together, but again Ext4 wins out overall. The compile times are all within a few seconds of each other. The kernel compile test tends to be a bit meta-data intensive, and it is a good benchmark for an application that has a heavy meta-data load. The overall picture is that the filesystems are generally on par.

Figure 16: A kernel compile; all filesystems exhibit similar performance.

The FFSB test attempts to mimic a mail server. It creates 500,000 files in 1,000 directories, all ranging from 1KB to 1MB in size, weighted towards sizes of 32KB or less. Once it creates the files, it spends 300 seconds doing either creates, deletes, or reads of entire files, all with a block size of 4KB. The test weights reading higher than creating, and creating higher than deleting, in order to represent how a mail server would work. The test can be modified to use any number of threads; for simplicity, we used one thread. FFSB measures throughput and transactions per second, shown in Figures 17 and 18.

Figure 17: Throughput during an FFSB mail server run. XFS uses the least bandwidth, Ext4 uses the most, and BTRFS is in the middle.

Figure 18: Transactions per second in the FFSB. BTRFS shows comparable performance.
This workload favors Ext4, with BTRFS trailing slightly behind and XFS performing at half the speed of BTRFS.

The final test deals with write performance. Tiobench writes a given size to a file with a specified number of threads. We used a 2000MB file and ran with 1, 2, 4, 8, and 16 threads. Both tests show BTRFS running the fastest overall, and in the random case dominating the other two filesystems. The random case is probably much better for BTRFS due to its write-anywhere nature, and also because we use range locking for writing instead of a global inode lock, which makes parallel operations much faster. Performance declines somewhat with additional threads due to contention on the shared inode mutex.

Figure 19: TIO benchmark, sequential.

These are just three tests and by no means exercise all the various ways one can use a filesystem. Hopefully, they are representative of the ways most filesystems are used. In all these cases, BTRFS was in the same ballpark as its more mature counterparts.

6.2 Flash disk

Flash disks are becoming ubiquitous, replacing traditional hard disks in many modern laptops. Smartphones and embedded devices running Linux commonly use flash disks as well. This has motivated an ongoing effort to optimize BTRFS for Solid State Drives (SSDs). Here, we describe some of this work; the code is now part of Linux kernel 3.4. The hardware used was a FusionIO™ device.

Figure 21 depicts performance for the Linux 3.3 kernel, with BTRFS creating 32 million empty files. The graph was generated by Seekwatcher [4], a tool that visualizes disk head movement. In the top graph, the X axis is time, the Y axis is disk head location, reads are colored blue, and writes are colored green. The bottom graph tracks throughput. The filesystem starts empty, filling up with empty files as the test progresses. The pattern emerging from the graph is a continuous progression, writing new files at the end.
File metadata is batched and written sequentially during checkpoints. Checkpoints take place every thirty seconds. They incur many random reads, appearing as a scattering of blue dots. The reads are required to access the free space allocation tree, a structure too big to fit in memory. The additional disk seeks slow down the system considerably, as can be seen in the throughput graph. The reason writes do not progress linearly is that checkpoints, in addition to writing new data, also free up blocks; these are subsequently reused.

Figure 21: Performance in the 3.3 kernel. File creation rate is unsteady; throughput is a series of peaks and valleys.

In order to improve performance, the number of reads had to be reduced. The core problem turned out to be the way the Linux virtual memory (VM) system was used. The VM assumes that allocated pages will be used for a while. BTRFS, however, uses many temporary pages due to its copy-on-write nature, where data can move around on the disk. This mismatch was causing the VM to hold many out-of-date pages, reducing cache effectiveness. This, in turn, caused additional paging in the free space allocation tree, which was thrown out of cache due to cache pressure. The fix was for BTRFS to try harder to discard pages that had become invalid from the VM. Figure 22 shows the resulting performance improvement. The number of reads is much reduced, and they are concentrated in the checkpoint intervals. Throughput is steady at about 125MB/sec, and the rate of file creation is 150,000 files per second.

Figure 22: Performance in the 3.4 kernel, with 4KB metadata pages.

Testing showed that using larger page sizes was beneficial on flash. Figure 23 shows the effects of using a 16KB metadata page size. The rate of file creation is 170,000 files per second.
Using a larger metadata page size is also important for RAID5/6 integration, as it allows putting one page on a single RAID stripe.

Figure 23: Performance in the 3.4 kernel, with 16KB metadata pages.

On the same hardware, XFS achieved 115,000 files/second, and Ext4 achieved 110,000 files/second. We believe this makes the filesystems comparable.

7 Summary

BTRFS is a relatively young Linux filesystem, working its way towards achieving default status on many Linux distributions. It is based on copy-on-write, and supports efficient snapshots and strong data integrity. As a general-purpose filesystem, BTRFS has to work well on a wide range of hardware, from enterprise servers to smartphones and embedded systems. This is a challenging task, as the hardware is diverse in terms of CPUs, disks, and memory.

This work describes the main algorithms and data layout of BTRFS. It shows the manner in which copy-on-write b-trees, reference-counting, and extents are used. It is the first to present a defragmentation algorithm for COW-based filesystems in the presence of snapshots.

8 Acknowledgments

We would like to thank Zheng Yan, for explaining the defragmentation algorithm, and Valerie Aurora, for an illuminating LWN paper on BTRFS. Thanks are also due to the many BTRFS developers who have been working hard since 2007 to bring this new filesystem to the point where it can be used by Linux customers and users. Finally, we would like to thank friends who reviewed the drafts, catching errors and significantly improving quality: Gary Valentin and W.W..

References
Error Management in the Pluggable File System

Douglas Thain and Miron Livny
Technical Report #1448
Computer Sciences Department, University of Wisconsin
9 October 2002

Abstract

Distributed computing continues to be an alphabet-soup of services and protocols. No single system for managing CPUs or I/O devices has emerged (or is likely to emerge) as a universal solution. Therefore, distributed applications require adapters in order to plug themselves into existing systems. The difficulty of building such adapters lies not in normal operations, but in the complications of failures and other unusual situations. We demonstrate this with the Pluggable File System, an adapter for connecting POSIX applications to remote I/O services. We offer a detailed discussion of the construction of the system while dealing with failures and other events that are not trivially mapped into the application's expectations. The key insight is that correct I/O management requires coordination with CPU management. We conclude with some practical advice for others constructing similar software.

1. Introduction

The field of distributed computing has produced countless systems for harnessing remote CPUs and accessing remote data. Despite the intentions of their designers, no single system has achieved universal acceptance or deployment. Each carries its own strengths and weaknesses in performance, manageability, and reliability. A renewed interest in world-wide computational systems called grids [17] is increasing the number of systems and interfaces in play. A complex ecology of distributed systems is here to stay.

The result is an hourglass model of distributed computing, shown in Figure 1. Users submit batch jobs to a variety of different interfaces. Each system interfaces with a process through standard POSIX interfaces such as main and exit.
This interface is so simple that it is rarely discussed and has no name, yet it is certainly universal and critical to the wide deployment of applications across batch systems. Equally important is the common interface to I/O services. An operating system transforms standard operations such as read and write into the low-level block and network operations needed for local or distributed file systems. Yet, a common interface to I/O operations is not enough. Applications require common conventions for naming and data access beyond the simple names of the I/O functions. The wide variety of systems available to a user through remote batch systems all have varying degrees of access to local and distributed systems scattered across the grid. Although many file systems aim to provide a universal naming scheme across the entire internet, none are actually deployed to this degree. Without universal naming and data access, a common interface has little value.

To remedy this situation, we advocate the use of interposition agents, or sometimes adapters for short. These devices transform standard interfaces such as POSIX I/O into remote I/O protocols not normally found in an operating system kernel. In effect, an adapter allows an application to bring its filesystem along wherever it goes. This releases the dependence on the kernel details of the execution site while preserving the use of standard interfaces.

In this paper, we present the Pluggable File System (PFS), a user-level interposition agent that transforms an application's standard POSIX I/O operations into a variety of remote I/O services. We describe the overall architecture and naming scheme of the system, and offer some practical discussion on the fine details necessary to make PFS operate with real applications. The multiplexing of a standard interface is a standard technique in any programmer's repertoire. In the realm of I/O, the mapping is quite simple: read becomes a read, write becomes a write, and so on.
The real difficulty lies in the vast new kinds of failure in a distributed system: servers crash, networks fail, and disks fill. Furthermore, applications frequently require operations that have no obvious analogue in a remote system. How can such difficulties be reconciled with the "no-fuss" requirements of large-scale distributed computing?

Our primary contribution is a detailed discussion of the problem of new error modes. We are not exploring techniques of fault tolerance, roughly defined as masking failures through retry or reservation of extra capacity. Rather, we are studying the problem of error management, in which we seek simply to direct an honest message about a failure along the correct channel. The key to error management, as suggested in Figure 1, is to recognize that correct I/O management must carefully interact with process management. The two do not stand alone.

To discuss these problems in detail, we must necessarily present the Pluggable File System in a fair amount of detail. We begin by describing the architecture and capabilities of PFS. We then move to examine how to handle the problems of missing operations and unusual failure modes. We conclude with a discussion of some practical problems that arose in the construction of PFS, in the hope that it will assist others building such systems.

2. User's Experience

To offer some of the flavor of PFS, we begin by outlining how a user might interact with it in an interactive setting. The pfsrc run program starts a new application with PFS attached:

```
% pfsrc run tcsh
```

With PFS attached, the local filesystem is still visible through all of its normal filenames:

```
% cat /etc/passwd
% grep word /usr/dict/words
```

In addition, remote resources appear under names in the root directory reflecting the protocol used to access them. These are quite similar to the Uniform Resource Locators used by the world wide web and other applications.
```
% vi http://www.yahoo.com/index.html
% less ftp://ftp.cs.wisc.edu/RoadMap
```

Finally, remote directory trees may be spliced into the local filesystem through the use of a mount list. Each entry in the list consists of a name in the logical file system to be redirected to a remote directory or file. Here is a simple mount list:

```
/in /chirp/nest.wisc.edu/indir
/out /kangaroo/kang.wisc.edu/outdir
```

And here's how it might be used:

```
% pfsrc run -mount list mount.file tcsh
% grep function /in/*.c > /out/results
```

3. Architecture

PFS is built with Bypass [35], a general-purpose tool for building interposition agents. It takes the form of a shared library which may be forcibly inserted into any dynamically linked process. This capability is present in most Unix-like operating systems, although it is activated in a variety of ways. pfsrc run is a script which hides such details from the user. Bypass preserves all of the existing entry points to POSIX functions at the standard library interface, so PFS is free to use ordinary libraries and standard routines internally. It makes use of large, general-purpose libraries such as the Secure Sockets Layer and the Globus Toolkit [16] without modifications.

Figure 2 shows the internal structures of PFS. They are quite similar to the process I/O structures in a standard operating system kernel, so PFS might be considered a virtual operating system. Strictly speaking, PFS does not modify any files directly. A file is a persistent data structure on stable storage with attributes such as a size, owner, creation time, and so on. PFS has many internal data structures that represent files, but relies on other systems to actually examine and modify the files themselves.

PFS has four levels of data structures that represent files. A file descriptor (fd) is an integer allocated by open or dup and is the only user-visible handle to an open file.
A file descriptor contains no identifying information about a file, but only serves as a reference to a file pointer below. A file pointer records the current seek pointer (csp) used to remember an application's position in a file. The csp is modified by operations such as lseek and is implicitly used by operations such as read and write to determine what portion of a file to examine or modify. A file pointer also refers to a file object below.

Figure 2. Architecture of the Pluggable File System

A file object represents a file currently in use by the application. It records the type of storage system where the file is stored, the name of the file, and any other private data, such as open sockets, necessary to access the file. It refers to a device driver below to actually perform file operations. A device driver represents an entire system or protocol for accessing data, such as HTTP or simply the local file system. It implements all of the I/O operations that must be passed to a remote system. Notice that there is no generic data structure that represents an open network connection or a single remote server. Such details are encapsulated inside the device driver, as the number and type of remote connections necessary is quite dependent on the details of the protocol involved.

As Figure 2 suggests, data structures are shared at every level. For example, file descriptors 1 and 2 share a file pointer. This would occur if file descriptor 2 was created by a call to dup. Also, file descriptors 1, 2, and 7 all share the same file object. This would occur by a call to open with the same file name used to access file descriptor 1.

PFS has a large number of entry points for all variety of I/O operations. However, we may classify the large majority of them into two categories: operations on file descriptors and operations on file names. The former traverse most of the data structures in PFS, while the latter take a more direct route to the device drivers.
Operations such as read, write, and lseek operate on file descriptors. Upon entering PFS, these commands check the validity of the given fd, and then descend the various data structures. read and write take the csp from the corresponding file pointer and use it as an argument when calling the read or write method of the corresponding file object. The file object, through the device driver, performs the necessary remote operation. Other operations such as rename, stat, and delete operate on file names. Upon entering PFS, these commands first pass through the name resolver, which may transform the program-supplied name (or names) according to a variety of rules and systems. The name resolver is discussed in more detail below. Then, the transformed names are passed directly to the device driver, which performs the operation on the remote system.

open is a special case. Figure 2 shows how it interacts with the system in three ways. First, it transforms the given name using the name resolver and the mount list, if any. Second, it contacts the named device driver and attempts to open the file. Third, if successful, it allocates a file descriptor, pointer, and object for the newly opened file and installs them in the tree of data structures. The bold items in Figure 2 highlight a newly opened file at descriptor 9 using the Chirp driver.

Most UNIX applications access files through explicit operations such as read and write. However, files may also be memory mapped. In a standard operating system, a memory mapped file is a separate virtual memory segment whose backing store is kept in the file system rather than in the virtual memory pool. PFS accomplishes the same thing using its own underlying drivers, thus reducing memory mapped files to the same mechanisms as other open files. When the user establishes a memory mapping with mmap, PFS allocates a corresponding piece of virtual memory from its own heap via the standard malloc allocator.
However, the application's permissions to read and write the memory are removed using mprotect. When the user attempts to access the memory, the system raises a SIGSEGV signal indicating a memory access violation. PFS traps this signal and uses the SA_SIGINFO option to extract the address of the memory reference. If the address falls within a memory mapped file managed by PFS, it passes control to the page fault handler shown in Figure 2. The page fault handler currently performs demand paging with writeback at close. As read page faults occur, they are satisfied by issuing read operations on the underlying driver. As write faults occur, data are simply written to the local memory region. When the user calls munmap to delete the segment, dirty pages are written back to the target storage. Naturally, the default page size used by the underlying system is much too small for the latency of I/O operations over the wide area network. The user may select an appropriate page size through an environment variable.

This memory-mapping facility is quite simple. It does not perform any pageouts in order to fit the segment into physical memory. Rather, the whole segment is stored in virtual memory, under the assumption that paging out to local disk is much faster than paging out to remote storage. In addition, there is no facility for enforcing coherence between processes that communicate via a shared memory segment. This is generally impossible, as most remote I/O protocols have no facilities for coherence. However, this has not proven to be a significant obstacle, as our primary targets, sequential scientific applications, by definition do not require distributed shared memory.

4. Drivers

PFS is equipped with a variety of drivers for communicating with external storage systems. The C++ interface to a driver is shown in Figure 3. This interface lets the user perform single operations on named files. Two methods in the interface bear explanation: open and getdir.
```
class pfs_driver {
    pfs_file * open( path, flags, mode );
    pfs_dir * getdir( path );
    int stat( path, buf );
    int lstat( path, buf );
    int unlink( path );
    int access( path, mode );
    int chmod( path, mode );
    int rename( oldname, newname );
    int chdir( path );
    int readlink( path, buf, bufsiz );
    int mkdir( path, mode );
    int rmdir( path );
};
```

Figure 3. Driver Interface

```
class pfs_file {
    int close();
    pfs_size_t read( data, length, pos );
    pfs_size_t write( data, length, pos );
    int stat( buf );
    int truncate( length );
    int sync();
    int fcntl( cmd, arg );
    int ioctl( cmd, arg );
    int fchmod( mode );
    pfs_size_t get_size();
    int is_seekable();
    int is_local();
    int isatty();
};
```

Figure 4. File Interface

```
class pfs_dir {
    struct dirent * read();
    void rewind();
    pfs_size_t tell();
    void seek( pfs_size_t pos );
    int append( char *name );
};
```

Figure 5. Directory Interface

The open method found in the driver is a factory method that binds a file name to a file object, whose interface is shown in Figure 4. Once opened, the file object serves as the focal point for operations on file descriptors or mapped memory. To support access through multiple file pointers at once, the file object appears to be random access: it accepts an offset argument to read and write. However, not all driver types actually support random access to files. For this reason, an additional method, is_seekable, is used by PFS to determine whether an lseek on such a file will succeed. The consequences of attempting to lseek within a sequential-only driver are explored below.

The getdir method found in the driver does not correspond to any single function normally found in the standard library or at the kernel interface. Few remote I/O systems have a stateful set of commands for scanning a directory, such as opendir, readdir, and closedir. Instead, a single atomic operation retrieves a whole directory listing. The upper layers of PFS are responsible for implementing the stateful POSIX directory-scanning commands.
Each of the various drivers in PFS has some special processing and unusual cases. Let's explore each of them in turn.

4.1. Local

The local driver is very simple. It passes all file system operations directly through to the underlying kernel. The local driver is used by default when a file name does not map to any remote system. Thus, a program using PFS still has access to all data in the local filesystem. This comes with very little overhead: the trapping mechanism provided by Bypass only adds a few microseconds to every call. [35]

By default, memory mapped to a local file uses the native memory-mapping mechanism for performance. However, the user-level mechanism provided by PFS may optionally be enabled in order to trace an application's memory access patterns.

4.2. HTTP

The HTTP driver is also quite simple, but provides much less functionality than the local driver. HTTP simply doesn't support many of the operations necessary for a general file system. Arbitrary files may be read sequentially. The stat command may be used to examine a file, but the only available information is the size of the file. Directory listings are not possible. Because HTTP performs whole-file transfers, every concurrently open file requires its own connection. After a file is closed, its connection is cached by the driver for possible future use.

Other interposition systems such as UFO [4] have used whole-file fetching of files available through sequential-access protocols such as HTTP, thus simplifying the problems of random access and permitting a cache of recently-used files. We have not taken this route for two reasons. First, whole-file fetching introduces a large latency when a file is first opened. This is an unnecessary price when an application could take advantage of overlapped CPU and I/O access by reading streamed files sequentially. Second, few remote I/O protocols have a reliable mechanism for ensuring synchronization between shared and cached files.
The user who is willing to deal with both of these problems may explicitly make a local copy (via PFS, of course) and then operate on it directly.

4.3. FTP

In contrast to HTTP, the FTP driver is very full-featured. Depending on the variant of the server, sequential access, directory listings, and querying of meta-data may be possible. If a server requires a password, an interactive user may type it at the console while the application is blocked. These features come at considerable complexity: many file operations require attempting several different command variations in order to support the many flavors of the protocol. In addition, each open file requires its own interaction with the server: one TCP stream for control, and one TCP stream for data.

The FTP driver also supports the GSI-enabled FTP [5] variant. This protocol introduces strong authentication to remote services without using cleartext passwords. In addition, more efficient commands for partial-file access are available, along with high-throughput sequential access via multiple TCP streams.

4.4. Chirp

The Chirp protocol, spoken by the NeST storage appliance, is somewhat easier to integrate with PFS due to its similarity to the POSIX interface. Chirp permits random access to arbitrary files, meta-data requests, directory listings, and more. In addition, access to multiple files may be interleaved on the same connection, so the driver only needs to maintain one connection per server accessed, rather than one per open file.

The Ghost driver is a research variant of the Chirp driver. This driver redirects operations to a nearby ghost, which is a localized buffer cache for a remote NeST storage appliance. A set of ghosts with a parent NeST forms a migratory file service, which is discussed further by Bent, et al. [7]

4.5. Kangaroo

The Kangaroo protocol is even easier to integrate, although it offers less power than Chirp.
The Kangaroo system is designed to offload all of an application’s I/O requests to a single nearby server, where they can be satisfied through buffering, caching, and remote I/O. Thus, the Kangaroo device driver simply performs trivial RPCs on a single server. No connection management is necessary. Kangaroo does not support directory listings.

A unique feature of Kangaroo is its data-consistency protocol. Unlike many remote file services, Kangaroo provides no confirmation of write operations until the client issues an explicit commit or push command. The former forces data to stable storage at the nearest server, while the latter blocks until it is visible at its destination. PFS issues a commit when the application exits and optionally performs a push in response to an fsync.

5. Error Handling

Error handling has not been a pervasive problem in the design of traditional operating systems. As new models of file interaction have developed, their attendant error modes have been added to existing systems by expanding the software interface at every level. For example, the addition of NFS [32] to the Unix kernel created the new possibility of a stale file handle, represented by the ESTALE error. As this error mode was discovered at the very lowest layers of the kernel, the value was added to the device driver interface, the file system interface, and the standard library, and was expected to be handled directly by applications.

We have no such luxury in PFS. Applications use the existing POSIX interface, and we have no desire or facility for changing it. Yet, the underlying device drivers generate errors ranging from the vague “file system error” to the bizarre “server’s credentials have expired.” How should the unlimited space of errors in the lower layers be transformed into the fixed space of errors available to POSIX? Before we answer this question, we must remind the reader of our application domain.
PFS was motivated by the need for scientific applications to access a variety of storage devices from a high-throughput batch execution system. In such a context, there are already many ways for a job to fail without the help of the I/O system: the submitter may lose contact with the execution site; the execution site may crash; the job may be forcibly evicted by the machine owner; and so on. Regardless of what may happen to the job, it is the responsibility of another process to oversee its progress and restart it if it should fail. Therefore, it is no disaster to kill the job when no other course seems reasonable. This is not to say that killing the process is always the best solution. Rather, we must perform triage – some injured processes are not worth the trouble to save.

We may divide errors into three general categories:

1. A transformable error may easily be converted into a form that is both honest and recognizable by the application. Such errors are converted into an appropriate POSIX error code.

2. A permanent error indicates that the process has a fatal flaw and cannot possibly run to completion. With this type of error, PFS must halt the process in a way that makes it clear the CPU system must not reschedule it.

3. A transient error indicates the process cannot run here and now, but has no inherent flaw. When encountering transient errors, the I/O system must interact with the CPU system: it must indicate that the job is to release the CPU, but would like to execute again later and retry the I/O operation.

The handling of errors requires interaction between CPU and I/O managers. In order to handle both permanent and transient errors correctly, the I/O system must inform the CPU system exactly what the next course of action for the process must be. In the case of a permanent error, the I/O system must forcibly halt the process in a manner that cannot be misinterpreted by the CPU manager.
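The three-way triage above can be sketched as a small classification table. The names here (Action, Outcome, and the triage_* helpers) are our own, not PFS's API; they merely illustrate that each category maps to a distinct instruction for the CPU manager.

```cpp
#include <cerrno>

// Hypothetical sketch of the three-way triage. Each outcome tells the
// CPU manager what to do with the process next.
enum class Action {
    ReturnErrno,      // transformable: hand the application a POSIX errno
    HaltPermanent,    // permanent: exit non-zero so the job is not rescheduled
    ReleaseAndRetry   // transient: yield the CPU and retry elsewhere later
};

struct Outcome {
    Action action;
    int posix_errno;  // meaningful only for Action::ReturnErrno
};

// Example classifications drawn from the text:
Outcome triage_unlink_on_readonly() { return {Action::ReturnErrno, EROFS}; }
Outcome triage_fatal_flaw()         { return {Action::HaltPermanent, 0}; }
Outcome triage_lost_connection()    { return {Action::ReleaseAndRetry, 0}; }
```

The point of the structure is that "return an error" is only one of three possible answers; the other two never reach the application at all.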
In most batch systems, this is accomplished by terminating normally with a non-zero exit code. In the case of a transient error, the situation is more complex. We would like to attach complex conditions to the restart of a process. For example, restart could be triggered by the arrival of a file or the completion of another process. However, we may minimally satisfy our need with a simple hook for yielding the CPU and allowing another process to be scheduled in the batch system. In Condor, this is accomplished by terminating abnormally with a signal indicating outside interference (i.e., SIGKILL). The batch system will then reschedule the process at some future time on another CPU.

We must emphasize the difference between local CPU management and batch CPU management. In response to a transient error, PFS could simply block or sleep until the necessary data are available. This would indeed cause the running process to release the CPU and move to a wait state in the local operating system scheduler. However, what the process is actually doing with the CPU is irrelevant to the distributed batch system. Unless the program issues some explicit instruction to the batch system, it is still in possession of the CPU. It will continue to be charged (either in money or priority) for consuming the resource, whether it is actually consuming physical cycles or not.

Each of the three types of errors can come from two distinct sources. A mismatch of requests occurs when the target system does not have the needed capability. A mismatch of results occurs when the target system is capable, but the result has no obvious meaning to the application. Let's consider each in turn.

5.1. Mismatched Requests

Our first difficulty comes when a device driver provides no support whatsoever for an operation requested by the application. We have three different solutions to this problem, based on our expectation of the application's ability to handle an error.
Representative examples are **unlink**, **lseek**, and **stat**.

Read-only services such as HTTP do not allow files to be deleted. A call to **unlink** a file cannot possibly succeed. Such a failure may be trivially represented to the calling application as "permission denied" or "read-only filesystem" without undue confusion by the user. Applications understand that **unlink** may fail for any number of other reasons on a normal filesystem, and are thus prepared to understand and deal with such errors.

In contrast, almost no applications are prepared for **lseek** to fail. It is generally understood that any file accessed through **open** may be accessed randomly, so few (if any) applications even bother to consider the return value of **lseek**. If we use **lseek** on an FTP server that does not implement random access through the **REST** command, we risk any number of dangers by allowing a never-checked command to fail. Therefore, an attempt to seek on a non-seekable file results in a permanent error, returning the job to the user with an error message on the standard error stream.

The **stat** command offers the most puzzling difficulty of all. **stat** simply provides a set of meta-data about a file, such as the owner, access permissions, size, and last modification time. The problem is that few remote storage systems provide all, or even most, of this data. For example, FTP provides a file's size, but no other meta-data in a standard way. We initially caused **stat** to report "permission denied" on such systems, indicating the data were not available. But to our surprise, this caused a large majority of programs to fail. **stat** is a very frequent operation used by command-line tools, large applications, and even the standard I/O library. We were quite dismayed at this discovery, because it seemed the necessary information simply could not be extracted from most remote I/O systems.
However, a brief investigation into the actual uses of **stat** gave some cause for hope. Here are some of its major applications:

- **Cataloging.** Commands such as **ls** and program elements such as file dialogs use **stat** to annotate lists of files with all possible detail for the interactive user's edification. A de-facto standard in FTP is the **LIST -i** command, which usually provides a detailed UNIX file list, actually performing a **stat** on every file in a directory. However, this cannot be relied upon, as each server provides a slightly different selection of attributes in a slightly different format. Further, not all servers are UNIX-like, and even those that are have no obligation to produce such output.

- **Optimization.** The standard C library, along with many other tools, uses `stat` to retrieve the optimal block size to be used with an I/O device. This is used to choose the buffer size for the ANSI buffered I/O interface.

- **Short-circuiting.** Many programs and libraries, including the command-line shell and the Fortran standard library, use `stat` or `access` as a quick way to check the presence or validity of a file before actually performing an expensive `open` or `exec`.

- **Unique identity.** Tools such as `cp`, which copy one file to another, use the unique device and file numbers returned by `stat` to determine if two file names refer to the same physical file.

    Device Number    = 0
    Index Number     = Incremented at every call
    Permissions      = RWX by anyone
    Number of Links  = 1
    User             = The calling user
    Group            = The group of the calling user
    Size             = 0
    Block Size       = User-configurable
    Blocks           = 0
    Last Access Time = Current time
    Last Mod. Time   = Current time
    Last Change Time = Current time

    Figure 6. Default Results of Stat

In each of these cases, there is very little harm in presenting default, or even guessed, information. No program can rely on the values returned by `stat` because it cannot be done atomically with any other operation.
If a program uses `stat` to measure the existence or size of a file, it must still be prepared for `open` or `read` to return conflicting information. Therefore, we may fill the response to `stat` with benevolent lies that encourage the program to continue, whether for reading or writing. Each device driver fills in whatever values in the structure it is able to determine, and then fills the rest with the defaults shown in Figure 6. Of course, if the device driver can inexpensively determine that the file actually does not exist (i.e., via the FTP SIZE command or the Chirp `Status` command), then it may truthfully cause `stat` to fail.

The block size field shown in Figure 6 deserves special mention. In practice, the actual block size of the underlying device is irrelevant to the file abstraction. As we mentioned, it is instead used as an optimization parameter. The optimal block size for a remote protocol may be more a property of the network and local environs than of the remote storage device itself. So, PFS allows a file's block size to be chosen by the user through an environment variable, allowing the buffered I/O interface to seamlessly adapt to protocols requiring a large I/O granularity.

5.2. Mismatched Results

Several device drivers have the necessary machinery to carry out all of a user's possible requests, but provide vague errors when a supported operation fails. For example, the FTP driver allows an application to read a file via the `GET` command. However, if the `GET` command fails, the only available information is the error code 550, which encompasses almost any sort of file system error, including "no such file," "access denied," and "is a directory." The POSIX interface does not permit a catch-all error value – it requires a specific reason. Which error code should be returned to the application?

One technique for dealing with this problem is to interview the service in order to narrow down the cause of the error.
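The "benevolent defaults" of Figure 6 can be sketched as a routine a driver calls after filling in whatever it actually knows. The function name and the PFS_BLOCK_SIZE variable spelling are our assumptions, not PFS's actual identifiers.

```cpp
#include <cstdlib>
#include <ctime>
#include <sys/stat.h>
#include <unistd.h>

// Hypothetical sketch: supply the defaults of Figure 6 for fields the
// remote protocol cannot provide. A real driver would overwrite any
// field it can determine truthfully (e.g. size via FTP SIZE).
static void fill_stat_defaults(struct stat *st) {
    static ino_t next_inode = 1;          // incremented at every call
    time_t now = time(nullptr);

    st->st_dev   = 0;                     // device number = 0
    st->st_ino   = next_inode++;
    st->st_mode  = S_IFREG | 0777;        // RWX by anyone
    st->st_nlink = 1;
    st->st_uid   = getuid();              // the calling user
    st->st_gid   = getgid();              // the caller's group
    st->st_size  = 0;
    const char *bs = getenv("PFS_BLOCK_SIZE");  // user-configurable
    st->st_blksize = bs ? atol(bs) : 65536;     // 64 KB is our guess
    st->st_blocks  = 0;
    st->st_atime = st->st_mtime = st->st_ctime = now;  // current time
}
```

The incrementing inode number deserves a note: it guarantees that two `stat` calls never claim to name the same physical file, which defeats the "unique identity" check in tools like `cp` in the safe direction.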
This is similar to a standard expert system or the functional error-interview system described in [13]. Figure 7 shows the interview tree for a GET operation. If the GET should fail, we assume the named file is actually a directory and attempt to change to it. If that succeeds, the error is "not a file." Otherwise, we attempt to LIST the named file. If that succeeds, the file is present but inaccessible, so the error is "access denied." If it fails, the error is finally "no such file."

The error-interview technique also has some drawbacks. It significantly increases the latency of failed operations, although it is generally not necessary to optimize error cases. In addition, the technique is not atomic, so it may determine an incorrect value if the remote filesystem is simultaneously modified by another process.

There is also a very large space of infrequent errors that simply have no expression at all in the application's interface. A NeST might inform PFS via Chirp that its disk allocation has expired and been deleted. PFS might discover that the connection to a Kangaroo server has been broken by a network failure. An FTP server may inform PFS that the backing storage is offline. User credentials, such as Kerberos or GSI certificates, may expire and no longer be valid.

Of course, there are many well-known techniques for hiding such errors. Lots may be re-allocated, lost connections may be rebuilt, storage may come online again, and certificates might be renewed by the user. However, all of these techniques take time and computing resources and have no guarantee of eventual success. At some point, we must accept that an error has occurred. There is no honest way to report such errors to the application. Reporting "no such file" or "access denied" does not give the application the information necessary to recover when the true problem lies elsewhere. This represents a transient error that must be handled or re-tried by a higher layer of software.
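The interview tree for a failed GET can be sketched as a short decision function. The callbacks stand in for issuing the real FTP probes (CWD, then LIST); the function name is ours, and a real implementation would issue protocol commands where this sketch invokes the callbacks.

```cpp
#include <cerrno>
#include <functional>

// Hypothetical sketch of the Figure 7 interview: refine a vague 550
// reply from a failed GET into a specific POSIX errno.
int interview_failed_get(const std::function<bool()> &try_cwd,
                         const std::function<bool()> &try_list) {
    if (try_cwd())            // could we change into it?
        return EISDIR;        // then it exists but is "not a file"
    if (try_list())           // could we at least list it?
        return EACCES;        // present but unreadable: "access denied"
    return ENOENT;            // otherwise: "no such file"
}
```

Note that the probes are ordered from most to least specific, and that the whole interview is only run on the (rare) failure path, so its extra round trips do not tax successful operations.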
In these cases, PFS forces the process to exit abnormally.

6. Other Complications

A number of other situations arose in the development of PFS which deserve elaboration for builders of similar systems. These include the sharing of state between processes, the initialization of complex programs, and signal propagation in interposition agents.

6.1. Process Creation

So far, we have concentrated on the problem of serving a single process. PFS also works across the creation of new processes. This is most useful in the context of an interactive shell, which may create connections to remote devices and pass them implicitly as the standard input and output streams of a new process. In a standard operating system, this is quite simple. Because all I/O structures are in the kernel, they are trivially shared between all processes. Things are more complicated in PFS, where each process must have its own I/O structures, yet still be able to share remote files.

We cannot rely on the simple file descriptor inheritance provided by the operating system, because it preserves the wrong level of detail. For example, a device driver may hold a socket open to an FTP server. A child process cannot simply share this socket without harming the parent's interaction with the server. Instead, it must create a new connection by reopening at the highest layers of PFS.

    Figure 8. Sharing a File Between Processes

To accomplish this, PFS serializes all of its state when creating a new process. The child process is given an environment variable describing the state of all data structures, ranging from file descriptors down to device drivers. As the instance of PFS in the child process initializes, it re-creates the state of the parent by building all of the necessary data structures and re-opening files via the device drivers. This technique allows the child process to begin with the same file state as the parent had when the child was created.
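The serialization step can be sketched as a simple round-trippable encoding of per-file state. The record format, the FileState struct, and the function names are our own assumptions; PFS's real encoding covers more structures (drivers, flags) than this.

```cpp
#include <sstream>
#include <string>
#include <vector>

// Hypothetical sketch: pack each open file's (fd, offset, url) into a
// string suitable for an environment variable before fork/exec; the
// child parses it back and re-opens each file via its driver.
struct FileState {
    int fd;
    long offset;
    std::string url;  // e.g. "ftp://server/path" — which driver to re-open
};

std::string serialize(const std::vector<FileState> &files) {
    std::ostringstream out;
    for (const auto &f : files)
        out << f.fd << ' ' << f.offset << ' ' << f.url << '\n';
    return out.str();
}

std::vector<FileState> deserialize(const std::string &s) {
    std::vector<FileState> files;
    std::istringstream in(s);
    FileState f;
    while (in >> f.fd >> f.offset >> f.url)
        files.push_back(f);
    return files;
}
```

The key property is that the child rebuilds logical state (which files are open, at what offsets, via which drivers) rather than inheriting physical state (the parent's sockets), which is exactly the distinction the text draws.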
However, as both processes run, they must continue to share some state in order that they may interact. The simplest and most vital sharing is that of file pointers. A parent frequently creates a child with a shared output stream. If the child produces some output, thus advancing the current seek pointer, the parent must see the changes in order that its output may append to the child's, rather than overwriting it.

We may rely on the host OS for a solution to this problem. Before creating a new process, PFS allocates a shared csp (current seek pointer) from the OS by creating a dummy temporary file and then deleting it while still open. This file descriptor may be used for recording the csp of a file externally in a way visible to all processes that share it. It is also automatically de-allocated when the last process exits.

This shared csp is distinct from any underlying file descriptors necessary to perform data access. For example, a file accessed by HTTP and shared between two processes has one file descriptor in use as a shared csp and another file descriptor in use as the network connection to the HTTP server. The former is shared while the latter is private. The actual sharing of data occurs at the remote service itself: two related processes writing to a file will synchronize their position with a shared csp, but the actual combination of their write operations occurs at the remote service.

6.2. Program Initialization

Program initialization is a very complicated matter. Many programs rely on a portion of their code to be executed before the program's formal entry point (main) or after the program exits. These hooks include C++ constructors and destructors, the ANSI atexit system, and shared library initializers and finalizers. Originally, PFS relied on C++ constructors and destructors to create and clean up all of the data structures necessary to support an application's I/O. This created a very puzzling intermittent problem.
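The create-then-unlink trick for the shared csp can be sketched directly. The helper names are ours; a real implementation would also lock around the read-modify-write of the offset, which this sketch omits.

```cpp
#include <cstdio>
#include <cstdlib>
#include <unistd.h>

// Hypothetical sketch of the shared current-seek-pointer (csp): an
// anonymous (created-then-unlinked) file holds the offset. Any process
// inheriting the descriptor sees updates, and the storage vanishes
// automatically when the last holder closes it.
int make_shared_csp() {
    char tmpl[] = "/tmp/pfs_csp_XXXXXX";
    int fd = mkstemp(tmpl);      // the dummy temporary file...
    if (fd >= 0) unlink(tmpl);   // ...deleted while still open
    return fd;
}

long csp_read(int fd) {
    long off = 0;
    pread(fd, &off, sizeof off, 0);   // offset lives at position 0
    return off;
}

void csp_write(int fd, long off) {
    pwrite(fd, &off, sizeof off, 0);
}
```

Because ordinary descriptor inheritance shares the unlinked file across fork, a child advancing the csp is immediately visible to the parent, which is exactly the append-not-overwrite behavior the shared output stream needs.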
Depending on the operating system, compiler, and the time of day, a process coupled with PFS would mysteriously crash before it reached its entry point or after it exited. The problem was in the ordering of global constructors. Specifically, there wasn't one. A conforming C++ system calls all of the global constructors in different translation units in any order it likes. Thus, there was no guarantee that PFS would initialize its state before the application. If an application's global constructor performed I/O and happened to be executed before PFS's constructor, a crash would occur.

The solution was to simply make PFS self-initializing. A static variable records whether the necessary data structures have been created. At the first call to any sort of I/O operation, the state is checked and the system is initialized. Thus, constructors could be called in any order with respect to PFS.

We submit that the general technique of application code executed implicitly by the system linker or loader is a bad idea. In fact, the otherwise dense ANSI C++ standard devotes several pages to this troublesome problem. (See Section 3.4 in [14].) Nearly any sort of complex initialization code has a dependency on another subsystem that also needs initialization. For example, the standard I/O library certainly depends on the standard memory allocator. To make computer systems work reliably, an ordering on initialization must be imposed.

This could be accomplished by the system linker (or loader). At build time, the programmer could give a list of what systems are to be initialized in what order. However, engineering a large piece of software is complex enough already. We are very loath to suggest adding a dependency between the program code and the options given to the system linker. A better solution is to make all systems self-initializing. For example, all entry points to the standard I/O library may check to see if the standard streams have been initialized.
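The self-initializing pattern adopted above is small enough to show whole. The names are ours (pfs_init_once, pfs_open_entry); PFS guards every real I/O entry point this way so that global constructor ordering across translation units no longer matters.

```cpp
// Hypothetical sketch of the self-initialization pattern: every entry
// point checks a static flag before doing any work.
static bool g_ready = false;
static int  g_calls = 0;

static void pfs_init_once() {
    if (g_ready) return;
    g_ready = true;   // set first, so re-entrant calls during setup
                      // do not recurse into initialization
    // ... build file tables, register drivers, etc. ...
}

// Stand-in for any PFS I/O entry point (open, read, stat, ...).
int pfs_open_entry() {
    pfs_init_once();
    return ++g_calls;
}
```

This single-threaded sketch matches the era of the text; the thread-safe variants mentioned next (e.g. guarding the flag with a once-primitive) change the guard, not the pattern.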
There are well-known solutions to make such code thread-safe. If self-initialization causes an unnecessary overhead, then the subsystem may be initialized by an explicit call at the beginning of the program. One might argue that such explicit initialization is an unnecessary burden on the programmer, and that implicit initialization simplifies the program. We disagree. We have already shown that an explicit initialization list is necessary for program correctness. It is better to place it in the program itself as a portable, system-independent list of initializers than to place it in the linker or loader, where it is sure to be non-portable.

6.3. Signal Handling

In the concluding remarks of our paper describing Bypass [35], we noted that signal handling is not yet integrated with the interpositioning technique. Bypass uses a current layer pointer to track where the program is in the stack composed of the application, interposition agents, and standard library. The arrival of a signal (incorrectly) does not affect the current layer pointer. Therefore, a signal-handling function may call code in an incorrect layer.

If this is not clear, consider the same problem in another context. An operating system relies on the underlying hardware to switch to supervisor mode when a device interrupt is raised. Bypass does not switch to supervisor mode (i.e., the PFS layer) when a software signal is raised.

PFS relies on the correct operation of signals to implement memory-mapped files. To work around this limitation, PFS uses two hooks exposed by Bypass to manipulate the current layer pointer directly. When establishing a memory-mapped file, PFS saves the layer pointer corresponding to itself. Upon entering the page fault handler, PFS saves the current layer pointer and temporarily makes itself the current layer. It may then service the page fault using all of its machinery and the standard library below. Finally, the current layer pointer is restored before exiting the handler.

7. Related Work

Distributed batch systems are perhaps one of the oldest general-purpose applications of distributed computing. Many, such as the Cambridge Ring [27], evolved as a response to the expense of centralized computing systems. Today, a large number of such systems are deployed at commercial and academic sites, including Condor [25], LoadLeveler [1] (a descendant of Condor), LSF [38], Maui [22], NQE [3], and PBS [20]. Several application-specific batch systems have been built to solve specific problems, such as SETI@Home [2] and the RC5 [23] challenges.

Distributed I/O has traditionally been the realm of file systems. However, we must emphasize that the formula of a distributed file system as a kernel-provided resource does not match the environment of mistrust and minimalism present in most distributed batch systems. When borrowing CPU time from a remote (and possibly anonymous) machine owner, it is simply not possible to request changes in the kernel. I/O systems accessible to user-space processes have generally fallen into two categories. Protocols and systems such as HTTP [15], FTP [28], GridFTP [5], and GASS [9] have cast themselves as protocols and interfaces for high-throughput whole-file movement. Other systems and protocols, such as Condor remote I/O [25], RIO [18], Chirp [8], and Kangaroo [33], perform fine-grained access to remote files without extensive caching or transfer overhead. Both models work well with PFS.

Of course, systems other than POSIX may sit at the center of the hourglass. Java, MPI, and PVM all have significant user communities and have found support in various batch systems such as Condor [36, 37, 29]. Although this paper relies on POSIX for concrete examples, much of it applies to other execution systems. We have already discussed some I/O problems unique to Java in [36]. The term interposition agent was coined by Michael Jones.
A number of techniques for building interposition agents have been devised, and an excellent review is given by Alexandrov, et al. in conjunction with the UFO [4] system. Other general-purpose toolkits include Detours [21], mediating connectors [6], and SLIC [19]. Component systems such as Knit [31] solve the problem of inter-component initialization by making component dependencies explicit to the linker.

Multiplexing of I/O devices is a common technique and is seen in devices as diverse as the in-kernel Virtual Filesystem Switch (VFS) [24], the user-level Uniform I/O Interface (UIO) [10], and the server-side NeST [8] multi-protocol layer. Multiplexing is found in many other contexts, such as the GRAM [12] interface to batch execution, the Proteus [11] multiprotocol message library, and the Ace [30] system language for customizable shared-memory algorithms. Despite the ubiquity of this technique, we are not aware of any detailed treatment regarding failures and interface mismatches when forced to use existing interfaces. The closest such discussion is a report by Craig Metz [26] on the correct use of the multiplexed Berkeley sockets interface.

8. Conclusion

The Pluggable File System has been used in a variety of research settings in the Condor project, including research into I/O communities [34], migratory file systems [7], and distributed buffering [33]. It continues to be a key component of our toolkit for research in distributed systems. We plan to gradually expand PFS to production settings and add support for more storage drivers.

Our contribution is an illumination of a unique aspect of software engineering: error management. Although interpositioning and multiplexing are standard techniques, the emphasis is usually placed upon the transformation of requests and not the interpretation of the results. The results of file system operations contain important information and cannot be casually discarded.
We have emphasized the importance of the narrow communication channel between I/O systems and CPU systems. Although both are designed in isolation, they require a certain level of integration in order to operate correctly. PFS has a relatively simple interaction with CPU managers through such operations as exit and kill. If designed carefully, a richer interface would allow for powerful interactions while preserving the design independence of I/O and CPU systems. Manuals, software, and more details about the Pluggable File System may be found at http://www.cs.wisc.edu/~condor/pfs. References
Web Server Design Lecture 1 – Introduction to HTTP Old Dominion University Department of Computer Science CS 431/531 Fall 2022 Sawood Alam <salam@cs.odu.edu> 2022-08-31 Original slides by Michael L. Nelson Want to do this? https://www.youtube.com/watch?v=RJl__WfU5rE It will be better/safer if you know this... Want to do this? Twitter Developer Documentation Docs / REST APIs / Reference Documentation / GET search/tweets **GET search/tweets** Returns a collection of relevant Tweets matching a specified query. Please note that Twitter’s search service and, by extension, the Search API is not meant to be an exhaustive source of Tweets. Not all Tweets will be indexed or made available via the search interface. In API v1.1, the response format of the Search API has been improved to return Tweet objects more similar to the objects you’ll find across the REST API and platform. However, perspectival attributes (fields that pertain to the perspective of the authenticating user) are not currently supported on this endpoint. To learn how to use Twitter Search effectively, consult our guide to Using the Twitter Search API. See Working with Timelines to learn best practices for navigating results by `since_id` and `max_id`. **Resource URL** https://api.twitter.com/1.1/search/tweets.json It will be better/safer if you know this... $ telnet www.cs.odu.edu 80 | tee 6-1.out Trying 128.82.4.2... Connected to xenon.cs.odu.edu. Escape character is '^]'. 
HEAD /~mln/teaching/cs595-s06/a1-test/ HTTP/1.1 Host: www.cs.odu.edu HTTP/1.1 200 OK Date: Sun, 12 Feb 2006 20:58:49 GMT Server: Apache/1.3.26 (Unix) ApacheJServ/1.1.2 PHP/4.3.4 Content-Type: text/html HEAD /~mln/teaching/cs595-s06/a1-test/1/ HTTP/1.1 Host: www.cs.odu.edu HTTP/1.1 200 OK Date: Sun, 12 Feb 2006 20:58:55 GMT Server: Apache/1.3.26 (Unix) ApacheJServ/1.1.2 PHP/4.3.4 Content-Type: text/html HEAD /~mln/teaching/cs595-s06/a1-test/2/ HTTP/1.1 Host: www.cs.odu.edu HTTP/1.1 200 OK Date: Sun, 12 Feb 2006 20:59:01 GMT Server: Apache/1.3.26 (Unix) ApacheJServ/1.1.2 PHP/4.3.4 Last-Modified: Sun, 29 Jan 2006 18:43:15 GMT ETag: "1f4de2-790-43dd0cc3" Accept-Ranges: bytes Content-Length: 1936 Content-Type: text/html X-Pad: avoid browser bug Connection closed by foreign host. Goals • We will write a web (HTTP) server from scratch – we will not use Apache, IIS, Nginx, or other existing web servers – the point is to learn basic HTTP and have a working server at the end of the class • your server won’t be as “good” as Apache -- and that’s ok… • We will use industry standard tools/environments/systems/etc. – GitHub/Git – Docker I’m not teaching Web Application Development • If you want to learn LAMP, you need to take Dr. Jian Wu’s 418/518 (Web Programming) class Instead of LAMP, you’ll be learning the basis of: REST: Representational State Transfer & HATEOAS: Hypermedia as the Engine of Application State To Reiterate: CS 418/518 – Make it Pretty https://www.hotrod.com/articles/fairlane-finale-finish-2016-road-tour-ford/ CS 431/531 – Under the Hood https://www.hotrod.com/articles/ccrp-0808-ford-390-fe/ REST vs. 
RPC RPC: foo.com/bigApp.jsp?verb=showThing&id=123 REST: foo.com/things/123 (w/ GET method) RPC: foo.com/bigApp.jsp?verb=editThing&id=123 REST: foo.com/things/123 (w/ PUT method) RPC: foo.com/bigApp.jsp?verb=newThing REST: foo.com/things/ (w/ POST method) Quick-n-dirty summary: in REST, URIs are *nouns* and HTTP provides the *verbs* this will make more sense as we go through the semester, and there’s actually a lot more to REST: https://research.google.com/pubs/archive/46310.pdf Defining the Web / HTTP • HTTP was originally defined by Request for Comments (RFCs) 1945, 2068, 2616 – and several others for defining URLs, URIs, etc. • Venerable RFC 2616 was replaced in 2014 with: – RFC7230 - HTTP/1.1: Message Syntax and Routing - low-level message parsing and connection management – RFC7231 - HTTP/1.1: Semantics and Content - methods, status codes and headers – RFC7232 - HTTP/1.1: Conditional Requests - e.g., If-Modified-Since – RFC7233 - HTTP/1.1: Range Requests - getting partial content – RFC7234 - HTTP/1.1: Caching - browser and intermediary caches – RFC7235 - HTTP/1.1: Authentication - a framework for HTTP authentication – see: https://www.mnot.net/blog/2014/06/07/rfc2616_is_dead • Further refactored and replaced in 2022 – RFC9110 - HTTP Semantics - those core, versionless semantics – RFC9111 - HTTP Caching - split into a separate document for convenience, but also versionless – RFC9112 - HTTP/1.1 - everything that’s specific to that textual wire protocol – see: https://www.mnot.net/blog/2022/06/06/http-core • We also have a slightly revisionist but ultimately useful unifying document, ca. 2004: Uniform Resource Identifiers URI & URL: http://www.cs.odu.edu/ URL: ftp://ftp.isi.edu/pub/ URI: info:pmid/12376099 URN: urn:uuid:6e8bc430-9c3a-11d9-9669-0800200c9a66 “A URI can be further classified as a locator, a name, or both. 
The term "Uniform Resource Locator" (URL) refers to the subset of URIs that, in addition to identifying a resource, provide a means of locating the resource by describing its primary access mechanism (e.g., its network "location"). The term "Uniform Resource Name" (URN) has been used historically to refer to both URIs under the "urn" scheme [RFC2141], which are required to remain globally unique and persistent even when the resource ceases to exist or becomes unavailable, and to any other URI with the properties of a name.” URIs & URNs • Registered URI schemes – http://www.iana.org/assignments/uri-schemes • Registered URN namespaces – http://www.iana.org/assignments/urn-namespaces URI Schemes foo://username:password@example.com:8042/over/there/index.dtb;type=animal?name=ferret#nose note: “scheme”, not “protocol” The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in RFC 2119. 1. MUST This word, or the terms "REQUIRED" or "SHALL", mean that the definition is an absolute requirement of the specification. 2. MUST NOT This phrase, or the phrase "SHALL NOT", mean that the definition is an absolute prohibition of the specification. 3. SHOULD This word, or the adjective "RECOMMENDED", mean that there may exist valid reasons in particular circumstances to ignore a particular item, but the full implications must be understood and carefully weighed before choosing a different course. 4. SHOULD NOT This phrase, or the phrase "NOT RECOMMENDED" mean that there may exist valid reasons in particular circumstances when the particular behavior is acceptable or even useful, but the full implications should be understood and the case carefully weighed before implementing any behavior described with this label. 5. MAY This word, or the adjective "OPTIONAL", mean that an item is truly optional. 
One vendor may choose to include the item because a particular marketplace requires it or because the vendor feels that it enhances the product while another vendor may omit the same item. An implementation which does not include a particular option MUST be prepared to interoperate with another implementation which does include the option, though perhaps with reduced functionality. In the same vein an implementation which does include a particular option MUST be prepared to interoperate with another implementation which does not include the option (except, of course, for the feature the option provides.) Important Web Architecture Concepts URIs http://www.cs.odu.edu/~mln/ Resources Representations* As defined by the Web Architecture http://www.w3.org/TR/webarch/ *= “message” or “message body” in RFC 7231, “entity”/“entity-body” in RFC-2616 Resources can have multiple, simultaneous representations HTTP Operation Client -> Request-line, Header Fields, Whitespace, Message Body -> Origin Server Origin Server -> Status-line, Header Fields, Whitespace, Message Body -> Client Client: Method URI HTTP/1.1 Some-Request-Header-1: value1 Some-Request-Header-2: value2 ... (1st magic blank line) Server: HTTP/1.1 Code String Some-Response-Header-1: value1 Some-Response-Header-2: value2 ... (2nd magic blank line) message-body Client’s “request-line” and Server’s “status-line” are the format exceptions; otherwise headers are in a flat, key-value syntax, followed by a blank line, followed by an optional message-body. Modern Browsers (aka “user-agents”) are nice… But they hide important details from us. As programmers, we care about those details.
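The flat framing described above (start-line, key-value header fields, blank line, optional body) can be sketched in a few lines of Python. This is a minimal illustration only, not an RFC 7230-grade parser; build_request and parse_response are names of my own choosing:

```python
def build_request(method, uri, headers):
    """Serialize a request-line plus header fields into HTTP/1.1 wire format."""
    lines = [f"{method} {uri} HTTP/1.1"]
    lines += [f"{name}: {value}" for name, value in headers.items()]
    # The "1st magic blank line" separates the headers from the optional body.
    return "\r\n".join(lines) + "\r\n\r\n"


def parse_response(raw):
    """Split a raw response into (status-line, header dict, message-body)."""
    head, _, body = raw.partition("\r\n\r\n")   # the "2nd magic blank line"
    status_line, *header_lines = head.split("\r\n")
    headers = {}
    for line in header_lines:                   # flat key-value syntax
        name, _, value = line.partition(":")
        headers[name.strip()] = value.strip()
    return status_line, headers, body
```

Everything except the two start-lines really is just that flat key: value syntax, which is why a from-scratch server is a feasible semester project.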
Talking to HTTP servers with “curl” $ curl --head http://www.cs.odu.edu/~mln/ HTTP/1.1 200 OK Date: Mon, 12 Jan 2009 15:44:19 GMT Server: Apache/2.2.0 Last-Modified: Fri, 09 Jan 2009 17:18:37 GMT ETag: "88849-1c71-f28dd540" Accept-Ranges: bytes Content-Length: 7281 Content-Type: text/html $ curl -I http://www.google.com/ HTTP/1.1 200 OK Cache-Control: private, max-age=0 Date: Mon, 12 Jan 2009 15:45:57 GMT Expires: -1 Content-Type: text/html; charset=ISO-8859-1 Set-Cookie: PREF=ID=9a80d3f602b685f3:TM=1231775157:LM=1231775157:S=imGxRyNsTD0Zczm5; expires=Wed, 12-Jan-2011 15:45:57 GMT; path=/; domain=.google.com Server: gws Content-Length: 0 default curl returns message body, no headers... $ curl https://www.cs.odu.edu/~mln/ <html> <head> <title> Home:: Michael L. Nelson, Old Dominion University </title> <!-- CSS stuff largely stolen from Carl Lagoze's Page --> <link rel="stylesheet" type="text/css" href="mln.css"/> <meta property="fb:admins" content="michael.lloyd.nelson"/> <meta property="og:title" content="Michael L. Nelson"/> [lots of html removed] curl -i shows response headers + message body: $ curl -i https://www.cs.odu.edu/~mln/ HTTP/1.1 200 OK Server: nginx Date: Wed, 29 Aug 2018 02:34:15 GMT Content-Type: text/html Transfer-Encoding: chunked Connection: keep-alive Vary: Accept-Encoding Front-End-Https: on <html> <head> <title> Home:: Michael L. Nelson, Old Dominion University </title> [deletia] * Adding handle: conn: 0x7fa59b004000 * Adding handle: send: 0 * Adding handle: recv: 0 * Curl_addHandleToPipeline: length: 1 * - Conn 0 (0x7fa59b004000) send_pipe: 1, recv_pipe: 0 * About to connect() to ws-dl.blogspot.com port 80 (#0) * Trying 172.217.5.65... 
* Connected to ws-dl.blogspot.com (172.217.5.65) port 80 (#0) > GET /2018/08/2018-08-25-four-ws-dl-classes-offered.html HTTP/1.1 > User-Agent: curl/7.30.0 > Host: ws-dl.blogspot.com > Accept: */* > < HTTP/1.1 200 OK < Content-Type: text/html; charset=UTF-8 < Expires: Wed, 29 Aug 2018 01:28:50 GMT < Date: Wed, 29 Aug 2018 01:28:50 GMT < Cache-Control: private, max-age=0 < Last-Modified: Tue, 28 Aug 2018 23:33:07 GMT < X-Content-Type-Options: nosniff < X-XSS-Protection: 1; mode=block * Server GSE is not blacklisted < Server: GSE < Accept-Ranges: none < Vary: Accept-Encoding < Transfer-Encoding: chunked < <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd"> <html dir='ltr' xmlns='http://www.w3.org/1999/xhtml' xmlns:b='http://www.google.com/2005/gml/b' <head> [much deletia] curl has many, many flags... wget crawls and saves sites ACC football injury reports no more; national standard likely Justin Fuente on injury reports Hokies coach Justin Fuente advocates a uniform injury reporting policy for all of college football. (AP photo via Roanoke Times.) Josh Jackson, Ricky Walker and Yoshua Nijman are presumably healthy for Virginia Tech’s football season opener at Florida State on Monday night. They’ve practiced throughout training camp and answered questions from reporters Sunday. But what if they, or any player from either team, sustained an injury this week and was doubtful or out for Monday? We might not know until near kickoff. And that’s too bad. It’s not outrageous or shameful, mind you, but it is another strike against transparency. Not that coaches should reveal game plans or players should forfeit the legal privacy protections. But from 2010 through last season, ACC football programs released injury reports two days before conference games. Civilization survived. 
Rights weren’t compromised. Championships weren’t altered. Indeed, ACC football is stronger than ever. The reports informed fans and media, not to mention - wink, wink - oddsmakers and legions of gamblers. But ACC coaches voted this offseason to discontinue their gentlemen’s agreement – the injury reports were not mandated by conference policy – and I don’t blame them. First, the ACC was the only league issuing injury reports. Second, there was no uniform format, giving schools discretion on whether to reveal an injury’s nature (knee, hip, ankle, etc.). Third, some coaches fudged. Moreover, of the 12 ACC coaches who adopted the injury reports in 2010, only Georgia Tech’s Paul Johnson, Duke’s David Cutcliffe and Clemson’s Dabo Swinney are still working in the conference. Somewhat in jest, I asked Virginia Tech coach Justin Fuente if he and his colleagues just don’t trust one another. curl/wget/lynx are awesome but they are still user-agents, and the nature of user-agents is to hide details. we’ll frequently use “telnet” or “openssl” to expose details $ telnet www.cs.odu.edu 80 Trying 128.82.4.2... Connected to xenon.cs.odu.edu. Escape character is '^]'. GET /~mln/index.html HTTP/1.1 Connection: close Host: www.cs.odu.edu HTTP/1.1 200 OK Date: Mon, 09 Jan 2006 17:07:04 GMT Server: Apache/1.3.26 (Unix) ApacheJServ/1.1.2 PHP/4.3.4 Last-Modified: Sun, 29 May 2005 02:46:53 GMT ETag: "1c52-14ed-42992d1d" Accept-Ranges: bytes Content-Length: 5357 Connection: close Content-Type: text/html <html> <head> <title>Home Page for Michael L. Nelson</title> <style type="text/css"> <!-- [ lots of html deleted] </style> </head> Connection closed by foreign host. $ telnet www.cs.odu.edu 80 Trying 128.82.4.2... Connected to xenon.cs.odu.edu. Escape character is '^]'. 
HEAD /~mln/index.html HTTP/1.1 Connection: close Host: www.cs.odu.edu HTTP/1.1 200 OK Date: Mon, 09 Jan 2006 17:14:39 GMT Server: Apache/1.3.26 (Unix) ApacheJServ/1.1.2 PHP/4.3.4 Last-Modified: Sun, 29 May 2005 02:46:53 GMT ETag: "1c52-14ed-42992d1d" Accept-Ranges: bytes Content-Length: 5357 Connection: close Content-Type: text/html Connection closed by foreign host. OPTIONS (many methods) $ telnet www.cs.odu.edu 80 Trying 128.82.4.2... Connected to xenon.cs.odu.edu. Escape character is '^]'. OPTIONS /~mln/index.html HTTP/1.1 Connection: close Host: www.cs.odu.edu HTTP/1.1 200 OK Date: Mon, 09 Jan 2006 17:16:46 GMT Server: Apache/1.3.26 (Unix) ApacheJServ/1.1.2 PHP/4.3.4 Content-Length: 0 Allow: GET, HEAD, POST, PUT, DELETE, CONNECT, OPTIONS, PATCH, PROPFIND, PROPPATCH, MKCOL, COPY, MOVE, LOCK, UNLOCK, TRACE Connection: close Connection closed by foreign host. OPTIONS (fewer methods) $ telnet www.cs.odu.edu 80 Trying 128.82.4.2... Connected to xenon.cs.odu.edu. Escape character is '^]'. OPTIONS /~mln/index.html HTTP/1.1 Connection: close Host: www.cs.odu.edu HTTP/1.1 200 OK Date: Tue, 10 Jan 2012 17:26:44 GMT Server: Apache/2.2.17 (Unix) PHP/5.3.5 mod_ssl/2.2.17 OpenSSL/0.9.8q Allow: GET,HEAD,POST,OPTIONS Content-Length: 0 Connection: close Content-Type: text/html Connection closed by foreign host. HTTPS is supplanting HTTP this is mostly a good thing* but it does mean we can’t use telnet for “https” sites $ telnet www.cs.odu.edu 80 Trying 128.82.4.2... Connected to xenon.cs.odu.edu. Escape character is '^]'. HEAD /~mln/ HTTP/1.1 Host: www.cs.odu.edu Connection: close HTTP/1.1 301 Moved Permanently Server: nginx Date: Wed, 29 Aug 2018 03:45:36 GMT Content-Type: text/html Connection: close Location: https://www.cs.odu.edu/~mln/ Connection closed by foreign host. $ telnet www.cs.odu.edu 443 Trying 128.82.4.2... Connected to xenon.cs.odu.edu. Escape character is '^]'. 
HEAD /~mln/ HTTP/1.1 Host: www.cs.odu.edu Connection: close HTTP/1.1 400 Bad Request Server: nginx Date: Wed, 29 Aug 2018 03:45:57 GMT Content-Type: text/html Connection: close Connection closed by foreign host. hello “openssl to port 443” $ openssl s_client -connect www.cs.odu.edu:443 CONNECTED(00000003) [much, much SSL deletia] SSL handshake has read 6270 bytes and written 328 bytes --- New, TLSv1/SSLv3, Cipher is DHE-RSA-AES128-SHA Server public key is 2048 bit Secure Renegotiation IS supported Compression: NONE Expansion: NONE SSL-Session: Protocol : TLSv1 Cipher : DHE-RSA-AES128-SHA Session-ID: E19FD48AA69A2969B958877C48C28391ED217761F1E2023C7471ACB89B2694 Session-ID-ctx: Master-Key: 0A9A3DC0C66F99FF85A480ADEC42A7EB74ECC1D391D9AF4A026CF27C16A19480C42A75B6CD283BFE68ADAB32D07D7242 Key-Arg : None Start Time: 1535514923 Timeout : 300 (sec) Verify return code: 0 (ok) --- HEAD /~mln/ HTTP/1.1 Host: www.cs.odu.edu Connection: close HTTP/1.1 200 OK Server: nginx Date: Wed, 29 Aug 2018 03:55:35 GMT Content-Type: text/html Connection: close Vary: Accept-Encoding Front-End-Https: on closed HTTP semantics don’t change $ openssl s_client -connect www.cs.odu.edu:443 [all SSL portions deleted] OPTIONS /~mln/ HTTP/1.1 Host: www.cs.odu.edu Connection: close HTTP/1.1 200 OK Server: nginx Date: Wed, 29 Aug 2018 04:02:05 GMT Content-Type: text/html Content-Length: 0 Connection: close Allow: POST,OPTIONS,GET,HEAD Front-End-Https: on closed Response Codes - **1xx: Informational** - The request was received, continuing process - **2xx: Success** - The action was successfully received, understood, and accepted - **3xx: Redirection** - Further action must be taken in order to complete the request - **4xx: Client Error** - The request contains bad syntax or cannot be fulfilled - **5xx: Server Error** - The server failed to fulfill an apparently valid request *not “error” codes!!!* $ telnet www.cs.odu.edu 80 Trying 128.82.4.2... Connected to xenon.cs.odu.edu. Escape character is '^]'. 
NOTAREALMETHOD /index.html HTTP/1.1 Host: www.cs.odu.edu Connection: close HTTP/1.1 501 Method Not Implemented Date: Mon, 09 Jan 2006 17:22:40 GMT Server: Apache/1.3.26 (Unix) ApacheJServ/1.1.2 PHP/4.3.4 Allow: GET, HEAD, POST, PUT, DELETE, CONNECT, OPTIONS, PATCH, PROPFIND, PROPPATCH, MKCOL, COPY, MOVE, LOCK, UNLOCK, TRACE Connection: close Transfer-Encoding: chunked Content-Type: text/html; charset=iso-8859-1 <!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN"> <html><head> <title>501 Method Not Implemented</title> </head><body> <h1>Method Not Implemented</h1> NOTAREALMETHOD to /index.html not supported.<p> Invalid method in request NOTAREALMETHOD /index.html HTTP/1.1</p> </body></html> Connection closed by foreign host. $ telnet www.cs.odu.edu 80 Trying 128.82.4.2... Connected to xenon.cs.odu.edu. Escape character is '^]'. GET /~mln HTTP/1.1 Connection: close Host: www.cs.odu.edu HTTP/1.1 301 Moved Permanently Date: Mon, 09 Jan 2006 19:32:24 GMT Server: Apache/1.3.26 (Unix) ApacheJServ/1.1.2 PHP/4.3.4 Location: http://www.cs.odu.edu/~mln/ Connection: close Transfer-Encoding: chunked Content-Type: text/html; charset=iso-8859-1 12e <!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN"> <html><head> <title>301 Moved Permanently</title> </head> <body> <h1>Moved Permanently</h1> The document has moved <a href="http://www.cs.odu.edu/~mln/">here</a>.<p> </p><hr/> <address>Apache/1.3.26 Server at www.cs.odu.edu Port 80</address> </body></html> Connection closed by foreign host. 301 - Moved Permanently $ telnet bit.ly 80 Trying 69.58.188.39... Connected to bit.ly. Escape character is '^]'. 
HEAD http://bit.ly/s2FPFa HTTP/1.1 Host: bit.ly Connection: close HTTP/1.1 301 Moved Server: nginx Date: Tue, 10 Jan 2012 17:34:29 GMT Content-Type: text/html; charset=utf-8 Connection: close Set-Cookie: _bit=4f0c76a5-002b9-048b1-331cf10a;domain=.bit.ly; expires=Sun Jul 8 17:34:29 2012;path=/; HttpOnly Cache-control: private; max-age=90 MIME-Version: 1.0 Content-Length: 125 the response code is REQUIRED; phrase is RECOMMENDED $ telnet doi.acm.org 80 Trying 64.238.147.57... Connected to doi.acm.org. Escape character is '^]'. HEAD http://doi.acm.org/10.1145/1998076.1998100 HTTP/1.1 Host: doi.acm.org Connection: close HTTP/1.1 302 Found Date: Tue, 10 Jan 2012 17:53:36 GMT Server: Apache/2.2.3 (Red Hat) Location: http://dl.acm.org/citation.cfm?doid=1998076.1998100 Connection: close Content-Type: text/html; charset=iso-8859-1 $ telnet dx.doi.org 80 Trying 38.100.138.149... Connected to dx.doi.org. Escape character is '^]'. HEAD http://dx.doi.org/10.1007/978-3-642-24469-8_16 HTTP/1.1 Host: dx.doi.org Connection: close HTTP/1.1 303 See Other Server: Apache-Coyote/1.1 Location: http://www.springerlink.com/index/10.1007/978-3-642-24469-8_16 Expires: Wed, 11 Jan 2012 12:04:29 GMT Content-Type: text/html; charset=utf-8 Content-Length: 210 Date: Tue, 10 Jan 2012 17:56:41 GMT Connection: close 404 - Not Found $ telnet www.cs.odu.edu 80 Trying 128.82.4.2... Connected to xenon.cs.odu.edu. Escape character is '^]'. HEAD /lasdkfjalsdkfjldaskfj HTTP/1.1 Host: www.cs.odu.edu Connection: close HTTP/1.1 404 Not Found Date: Tue, 10 Jan 2012 17:39:15 GMT Server: Apache/2.2.17 (Unix) PHP/5.3.5 mod_ssl/2.2.17 OpenSSL/0.9.8q Connection: close Content-Type: text/html; charset=iso-8859-1 Connection closed by foreign host. $ telnet www4.cs.odu.edu 80 Trying 128.82.5.93... Connected to www4.cs.odu.edu. Escape character is '^]'. 
HEAD http://www4.cs.odu.edu/Conference/index.aspx HTTP/1.1 Host: www4.cs.odu.edu Connection: close HTTP/1.1 401 Unauthorized Content-Length: 1656 Content-Type: text/html Server: Microsoft-IIS/6.0 WWW-Authenticate: Basic realm="www4.cs.odu.edu" MicrosoftOfficeWebServer: 5.0_Pub X-Powered-By: ASP.NET Date: Tue, 10 Jan 2012 17:43:57 GMT Connection: close 400 - Bad Request $ telnet www.cs.odu.edu 80 Trying 128.82.4.2... Connected to xenon.cs.odu.edu. Escape character is '^]'. HEAD http://www.cs.odu.edu/~mln/ HTTP/1.1 Connection: close HTTP/1.1 400 Bad Request Date: Tue, 10 Jan 2012 18:24:17 GMT Server: Apache/2.2.17 (Unix) PHP/5.3.5 mod_ssl/2.2.17 OpenSSL/0.9.8q Connection: close Content-Type: text/html; charset=iso-8859-1 $ telnet www.cs.odu.edu 80 Trying 128.82.4.2... Connected to xenon.cs.odu.edu. Escape character is '^]'. HEAD / HTTP/9.9 Host: www.cs.odu.edu Connection: close HTTP/1.1 200 OK Date: Tue, 10 Jan 2012 17:40:05 GMT Server: Apache/2.2.17 (Unix) PHP/5.3.5 mod_ssl/2.2.17 OpenSSL/0.9.8q Accept-Ranges: bytes Connection: close Content-Type: text/html Connection closed by foreign host. our servers will be more picky! 505 - HTTP Version Not Supported % telnet www.w3c.org 80 Trying 128.30.52.45... Connected to dolph.w3.org. Escape character is '^]'. HEAD / HTTP/9.9 Host: www.w3c.org Connection: close HTTP/1.0 403 Forbidden Cache-Control: no-cache Connection: close Content-Type: text/html <html><body><h1>403 Forbidden</h1>Request forbidden by administrative rules.</body></html> a curious response… 505 not defined in HTTP 1.0! 
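Since the class of a response is carried entirely by the first digit of the code, a server or client can dispatch on it mechanically. A possible Python helper (status_class is my own name, not a standard function):

```python
def status_class(code):
    """Map a 3-digit status code to its response class by the first digit."""
    classes = {
        1: "Informational",   # request received, continuing process
        2: "Success",         # received, understood, and accepted
        3: "Redirection",     # further action must be taken
        4: "Client Error",    # bad syntax or cannot be fulfilled
        5: "Server Error",    # server failed on an apparently valid request
    }
    if not 100 <= code <= 599:
        raise ValueError(f"not a valid HTTP status code: {code}")
    return classes[code // 100]
```

As the slides stress, 4xx and 5xx are response classes, not "error codes": a 404 is still a complete, well-formed HTTP exchange.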
<table> <thead> <tr> <th>Code</th> <th>Reason-Phrase</th> <th>Defined in...</th> </tr> </thead> <tbody> <tr> <td>100</td> <td>Continue</td> <td>Section 6.2.1</td> </tr> <tr> <td>101</td> <td>Switching Protocols</td> <td>Section 6.2.2</td> </tr> <tr> <td>200</td> <td>OK</td> <td>Section 6.3.1</td> </tr> <tr> <td>201</td> <td>Created</td> <td>Section 6.3.2</td> </tr> <tr> <td>202</td> <td>Accepted</td> <td>Section 6.3.3</td> </tr> <tr> <td>203</td> <td>Non-Authoritative Information</td> <td>Section 6.3.4</td> </tr> <tr> <td>204</td> <td>No Content</td> <td>Section 6.3.5</td> </tr> <tr> <td>205</td> <td>Reset Content</td> <td>Section 6.3.6</td> </tr> <tr> <td>206</td> <td>Partial Content</td> <td>Section 4.1 of [RFC7233]</td> </tr> <tr> <td>300</td> <td>Multiple Choices</td> <td>Section 6.4.1</td> </tr> <tr> <td>301</td> <td>Moved Permanently</td> <td>Section 6.4.2</td> </tr> <tr> <td>302</td> <td>Found</td> <td>Section 6.4.3</td> </tr> <tr> <td>303</td> <td>See Other</td> <td>Section 6.4.4</td> </tr> <tr> <td>304</td> <td>Not Modified</td> <td>Section 4.1 of [RFC7232]</td> </tr> <tr> <td>305</td> <td>Use Proxy</td> <td>Section 6.4.5</td> </tr> <tr> <td>307</td> <td>Temporary Redirect</td> <td>Section 6.4.7</td> </tr> <tr> <td>400</td> <td>Bad Request</td> <td>Section 6.5.1</td> </tr> <tr> <td>401</td> <td>Unauthorized</td> <td>Section 3.1 of [RFC7235]</td> </tr> <tr> <td>402</td> <td>Payment Required</td> <td>Section 6.5.2</td> </tr> <tr> <td>403</td> <td>Forbidden</td> <td>Section 6.5.3</td> </tr> <tr> <td>404</td> <td>Not Found</td> <td>Section 6.5.4</td> </tr> <tr> <td>405</td> <td>Method Not Allowed</td> <td>Section 6.5.5</td> </tr> <tr> <td>406</td> <td>Not Acceptable</td> <td>Section 6.5.6</td> </tr> <tr> <td>407</td> <td>Proxy Authentication Required</td> <td>Section 3.2 of [RFC7235]</td> </tr> <tr> <td>408</td> <td>Request Timeout</td> <td>Section 6.5.7</td> </tr> <tr> <td>409</td> <td>Conflict</td> <td>Section 6.5.8</td> </tr> <tr> <td>410</td> <td>Gone</td> 
<td>Section 6.5.9</td> </tr> <tr> <td>411</td> <td>Length Required</td> <td>Section 6.5.10</td> </tr> <tr> <td>412</td> <td>Precondition Failed</td> <td>Section 4.2 of [RFC7232]</td> </tr> <tr> <td>413</td> <td>Payload Too Large</td> <td>Section 6.5.11</td> </tr> <tr> <td>414</td> <td>URI Too Long</td> <td>Section 6.5.12</td> </tr> <tr> <td>415</td> <td>Unsupported Media Type</td> <td>Section 6.5.13</td> </tr> <tr> <td>416</td> <td>Range Not Satisfiable</td> <td>Section 4.4 of [RFC7233]</td> </tr> <tr> <td>417</td> <td>Expectation Failed</td> <td>Section 6.5.14</td> </tr> <tr> <td>426</td> <td>Upgrade Required</td> <td>Section 6.5.15</td> </tr> <tr> <td>500</td> <td>Internal Server Error</td> <td>Section 6.6.1</td> </tr> <tr> <td>501</td> <td>Not Implemented</td> <td>Section 6.6.2</td> </tr> <tr> <td>502</td> <td>Bad Gateway</td> <td>Section 6.6.3</td> </tr> <tr> <td>503</td> <td>Service Unavailable</td> <td>Section 6.6.4</td> </tr> <tr> <td>504</td> <td>Gateway Timeout</td> <td>Section 6.6.5</td> </tr> <tr> <td>505</td> <td>HTTP Version Not Supported</td> <td>Section 6.6.6</td> </tr> </tbody> </table> 7.1.1.1. Date/Time Formats An example of the preferred format is Sun, 06 Nov 1994 08:49:37 GMT ; IMF-fixdate Examples of the two obsolete formats are Sunday, 06-Nov-94 08:49:37 GMT ; obsolete RFC 850 format Sun Nov 6 08:49:37 1994 ; ANSI C's asctime() format A recipient that parses a timestamp value in an HTTP header field MUST accept all three HTTP-date formats. When a sender generates a header field that contains one or more timestamps defined as HTTP-date, the sender MUST generate those timestamps in the IMF-fixdate format. An HTTP-date value represents time as an instance of Coordinated Universal Time (UTC). The first two formats indicate UTC by the three-letter abbreviation for Greenwich Mean Time, "GMT", a predecessor of the UTC name; values in the asctime format are assumed to be in UTC. 
A sender that generates HTTP-date values from a local clock ought to use NTP ([RFC5905]) or some similar protocol to synchronize its clock to UTC. Things to Think About for Your Server • Claim HTTP/1.1 – even though we’ll not fully satisfy all requirements • Configuration files – should not have to recompile or edit source code for trivial changes • MIME types – most servers use a separate file (specified in your config file!) to map file extensions to MIME types • Logging – real http servers log their events • we’ll use “common log format” – you’ll need logging for debugging • consider concurrent logs with varying verbosity More Things To Think About… • A resource is more than just a file in the file system – content negotiation is in your future – sometimes we’ll respond with only a “slice” of a file – What does it mean to GET a directory? – eventually we’ll execute scripts In the future, some methods will allow a client to send an entity body to the server... Client: - Method URI HTTP/1.1 - Some-Request-Header-1: value1 - Some-Request-Header-2: value2 - ... (1st magic blank line) message-body Server: - HTTP/1.1 Code String - Some-Response-Header-1: value1 - Some-Response-Header-2: value2 - ... (2nd magic blank line) message-body Revisiting What You Will Learn • Fundamental knowledge about how http works – your future career is likely to involve web programming • Working with others, explaining your results to colleagues – in real life, tasks are rarely performed in isolation • How to read & interpret technical specifications and translate them into code – in real life, interesting problems are ambiguous & messy • Using GitHub/Git, Docker, AWS, and other modern tools • The importance of good, extensible design early in a software project – in real life, writing code from scratch is an uncommon luxury
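The three HTTP-date formats a recipient MUST accept can be handled with one whitespace-normalization pass and a fallback loop over strptime patterns; a sketch (parse_http_date is my own name, not a stdlib function):

```python
from datetime import datetime, timezone

# The three accepted HTTP-date formats; a sender MUST generate only the first.
HTTP_DATE_FORMATS = [
    "%a, %d %b %Y %H:%M:%S GMT",   # IMF-fixdate, e.g. Sun, 06 Nov 1994 08:49:37 GMT
    "%A, %d-%b-%y %H:%M:%S GMT",   # obsolete RFC 850, e.g. Sunday, 06-Nov-94 08:49:37 GMT
    "%a %b %d %H:%M:%S %Y",        # ANSI C asctime(), e.g. Sun Nov  6 08:49:37 1994
]

def parse_http_date(value):
    """Parse an HTTP-date in any of the three formats; the result is in UTC."""
    value = " ".join(value.split())  # asctime() pads a one-digit day with a space
    for fmt in HTTP_DATE_FORMATS:
        try:
            return datetime.strptime(value, fmt).replace(tzinfo=timezone.utc)
        except ValueError:
            continue
    raise ValueError(f"not a recognized HTTP-date: {value!r}")
```

All three example timestamps shown above denote the same UTC instant, so they should parse to equal datetime values.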
olmocr_science_pdfs
2024-12-07
2024-12-07
95713c837a12eea170bd5758bcd09ba899724305
Open Code Governance

Recommended Citation: Citron, Danielle Keats () "Open Code Governance," University of Chicago Legal Forum: Vol. 2008: Iss. 1, Article 9. Available at: http://chicagounbound.uchicago.edu/uclf/vol2008/iss1/9

Open Code Governance Danielle Keats Citron† The legitimacy of the administrative state has troubled courts and scholars for many decades.1 Reformers have pursued several approaches. Public participation allays concerns that agency policymaking excludes divergent perspectives and may partially substitute for direct democratic control.2 Strong oversight by politically accountable actors enhances the democratic † Associate Professor of Law, University of Maryland School of Law. The comments of Richard Boldt, Maxwell Chibundu, Samir Chopra, Karen Czapanskiy, Martha Ertman, Lisa Fairfax, Susan Freiwald, Jon Garfunkel, James Grimmelmann, Paul Ohm, Frank Pasquale, Ari Schwartz, Rena Steinzor, David Super, Greg Young, and the participants in the University of Chicago Legal Forum’s “Law in a Networked World” symposium greatly improved this Article. Adam Coleman, Alice B. Johnson, and Susan McCarty provided excellent research assistance. Dean Karen Rothenberg and the University of Maryland School of Law generously supported this research. I thank the editors of the University of Chicago Legal Forum for their superb assistance. 1 See, for example, Richard H. Pildes and Cass R. Sunstein, Reinventing the Regulatory State, 62 U Chi L Rev 1, 8 (1995) (while detailing the different approaches taken by the Reagan, H.W.
Bush and Clinton presidencies, finding that “[t]he key task for those interested in regulatory performance is to find ways of simultaneously promoting economic and democratic goals”). 2 Roger W. Cobb and Charles D. Elder, Participation in American Politics: The Dynamics of Agenda-Building 164 (Johns Hopkins 2d ed 1983) (explaining that “mass participation may be one of the major innovative forces in developing new issues and refining old issues that have remained on the formal agenda for some time”); Stuart Langton, Citizen Participation in America: Current Reflections on the State of the Art, in Stuart Langton, ed, Citizen Participation in America: Essays on the State of the Art 7 (Lexington 1978) (explaining that “citizen participation has developed as an alternative means of monitoring government agencies”); Jerry L. Mashaw, Due Process in the Administrative State 169 (Yale 1985) (arguing that the dignitary model is both necessary and sufficient to structure a conversation about public values); Roger C. Cramton, The Why, Where, and How of Broadened Public Participation in the Administrative Process, 60 Georgetown L J 525 (1972) (arguing that broadened public participation improves the administrative decisionmaking process, giving decisions greater legitimacy and acceptance); Steven Kelman, Adversary and Cooperationist Institutions for Conflict Resolution in Public Policy-making, 11 J Pol Analysis & Mgmt 178, 180 (1992) (arguing that public participation allows for cooperationist institutions to solve problems among themselves). nature of agency decisions. Agencies' expertise is said to produce rational policies insulated from politics. Little attention has been paid to how information technologies might advance these efforts. To date, the main contribution of digital technologies is e-Rulemaking. Yet e-Rulemaking does little more than re-package the twentieth-century approach to policymaking, which itself has proven problematic. 
This barely touches information technology's potential for improving the legitimacy of the administrative state. Information systems offer that opportunity. Agencies increasingly transfer crucial responsibilities to computer systems. Computers gather and interpret important data. For example, electronic machines record and calculate votes. Information systems incorporate and apply policy, making decisions about important individual rights, such as a person's ability to receive public benefits. And computers store sensitive information, --- 3 See Lawrence Lessig and Cass R. Sunstein, The President and the Administration, 94 Colum L Rev 1 (1994) (arguing that the President should be the primary overseer of agencies within particular limits). 5 The term e-Rulemaking refers to the use of digital technologies to enhance the public's understanding of, and participation in, agency notice-and-comment rulemaking. To that end, the federal government's Regulations.gov website allows the public to search, view, and comment on certain proposed rules. E-Gov Website, E-Rulemaking, available at <http://www.whitehouse.gov/omb/egov/c-3-1-er.html> (last visited Apr 24, 2008) (describing public launch of Regulations.gov website, a "cross agency front-end web application that posts and allows comments on proposed federal agency rules"). Some scholars have embraced e-Rulemaking efforts as a means to democratize agency policymaking. See Beth S. Noveck, The Electronic Revolution in Rulemaking, 53 Emory L J 433, 435–36 (2004) (discussing e-Rulemaking as a way to reform the administrative process). 6 Stuart M. Benjamin, Evaluating E-Rulemaking: Public Participation and Political Institutions, 55 Duke L J 899, 897, 923–29 (2006) (arguing that proponents and skeptics of e-Rulemaking have not considered the role of the courts and Congress in the larger administrative law context and contending that e-Rulemaking efforts will exact high costs with little net benefit).
including federal employees' personal data. Because these systems profoundly affect the public, the ability to monitor them is essential to the administrative state's transparency, participatory nature, rationality, and hence its democratic legitimacy. These systems, however, are opaque. Because these systems' software is proprietary, the source code—the programmers' instructions to the computer—is secret. Closed source code leaves users unable to discern how a system operates and protects itself. Thus, users have difficulty detecting programming errors that disenfranchise voters and undercount communities for the census. Programming mistakes that distort established policy routinely remain hidden from view. These systems' opacity interferes with important administrative law values. Closed code prevents public participation in agency decisions incorporated in these systems. Unlike interested members of the public who have opportunities to collaborate in policymaking through comments on proposed rules, stakeholders cannot provide feedback on agency decisions that they cannot see. At the same time, opaque systems impair the administrative state's political accountability. The public cannot hold elected officials responsible for broken systems without opportunities to learn about these systems' problems. Closed --- 10 Wikipedia, Proprietary Software, available at <http://en.wikipedia.org/wiki/Proprietary_software> (last visited Apr 24, 2008). Throughout this Article, I will refer to systems whose source code is closed to the public as "closed systems." I also will refer to closed source code as "closed code." The instructions that run computers actually constitute several layers of code. Aviel D. Rubin, Brave New Ballot: The Battle to Safeguard Democracy in the Age of Electronic Voting 3 (Morgan Road 2006). Source code provides the basic instructions to the computer.
A program known as a compiler converts the source code into object code, a stream of ones and zeros comprehensible only to machines that run inside the computer. 11 Earl Barr, Matt Bishop, and Mark Gondree, Fixing Federal E-Voting Standards, Commun of the Assoc for Computing Machinery 19, 21 (Mar 2007) (arguing that open-code systems allow users to locate and repair flaws that would not be repaired under a closed system). systems also undermine an agency's expertise by applying distorted policy and by closing off opportunities for the broader technical community to provide valuable feedback on systems' security and accuracy. This Article proposes opening up these black boxes to improve the quality and democratic legitimacy of agencies' decision-making. My proposal would require vendors to release certain systems' source codes for public review. High profile systems, such as e-voting machines, would command the attention of a wide array of technical experts, while other automated systems would likely be studied by affected interest groups. Thus, an open code model could invigorate the participatory model of the administrative state. In recent years, the cost and delay of involving the public has tempered enthusiasm for participatory approaches to administrative law. This proposal would secure valuable public input while reducing the cost of obtaining it. This proposal should appeal to advocates of strong central executive leadership. Open code will allow politically accountable actors, such as presidents and governors, to oversee agencies more directly. By contrast, closed code leaves those officials dependent on junior subordinates for accounts of what agencies' automated systems are doing and why. At the same time, the input of programmers advances administrative law's goal of marshalling expertise to improve governance.
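The contrast drawn in note 10 between human-readable source code and machine-oriented object code can be sketched in miniature. The snippet below is purely illustrative: `count_votes` is a toy function of my own, not code from any actual voting system, and Python's compiled bytecode stands in here for the object code the footnote describes.

```python
# A minimal sketch of the source code / object code distinction. The readable
# function below is the kind of logic a source code review can audit; its
# compiled bytecode, by contrast, is an opaque byte string. count_votes is a
# hypothetical toy example, not drawn from any reviewed voting system.
import dis


def count_votes(ballots, candidate):
    """Readable source: the tallying logic a reviewer can inspect."""
    return sum(1 for b in ballots if b == candidate)


# The readable source yields an auditable result...
print(count_votes([1, 2, 1, 1, 2], 1))     # prints 3

# ...while the compiled form is raw bytes no layperson can audit.
print(type(count_votes.__code__.co_code))  # prints <class 'bytes'>

# dis.dis() can decompile it only because Python ships its own source tools;
# a closed, proprietary binary offers no comparable recourse.
dis.dis(count_votes)
```

The point of the sketch is the asymmetry: with the source withheld, only the bytes remain, and the logic they encode is effectively unreviewable by the public.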
Going back to Judge Landis and Justice Frankfurter, judges and scholars have argued that rational policy is best --- 14 See notes 174–83 and accompanying text discussing technical community's interest in reviewing source code of e-voting systems. 15 See note 184 and accompanying text discussing stakeholders interested in automated systems. 16 This Article uses the term "open code" to refer to software whose source code is available for public review. In using this term, I distinguish open code software from "open source software" or "free software," whose source code is similarly revealed to the public but also enjoys relaxed licensing terms. Lessig, Open Code and Open Societies at 358 (cited in note 12); Jesus M. Gonzalez-Barahona and Gregorio Robles, Libre Software in Europe, in Chris DiBona, Danese Cooper, and Mark Stone, eds, Open Sources 2.0: The Continuing Evolution 161 n 1 (O'Reilly 2006); L. Jean Camp, Varieties of Software and their Implications for Effective Democratic Government, 135 Proceedings of the British Academy 183 (2006), available at <http://papers.ssrn.com/sol3/papers.cfm?abstract_id=905277> (last visited Apr 24, 2008). This Article leaves aside the question of the licensing regime that should govern such software, such as whether the software would be free to use, modify, or sell. achieved through expert scrutiny of difficult problems. This model, however, depends on expert agencies having sufficient data to make optimal decisions. Open code makes new programming and system design expertise relevant and available to the administrative state. This Article proceeds in three parts. Part I provides a typology of closed systems used by administrative agencies. It identifies two serious problems that closed systems conceal: programming errors that cause inaccurate results and security vulnerabilities that can lead to serious problems, such as identity theft and election fraud. Part II articulates the contours of an open code model. 
It then lays the normative foundations for such a regime, exploring how open code advances critical administrative law values of participation, political accountability, and expertise. Part II argues that this proposal favoring open code would render agency decision-making mechanisms embedded in these systems more transparent, participatory, and expert. Part III discusses three potential objections to an open code model. First, will switching from closed systems to an open code model be unduly costly? This Article argues that short-term costs should be balanced against the long-term gains that transparency brings. Second, will only high profile systems, such as e-voting, generate feedback, leaving the rest of these systems unexamined? This Article answers this question in the negative and explains that openness will provide important benefits even if these systems are not actually reviewed. Third, does an open code regime compromise privacy and security? The computer security literature rejects a "security through obscurity" regime and underscores the importance of openness to identify security --- 20 Vladi Finotto and Angela Forte, Re-Use of Solutions and Open Source Software in Public Administrations, in Eleonora Di Maria and Stefano Micelli, eds, On Line Citizenship: Emerging Technologies for European Cities 140 (Springer 2005) ("By liaising with open source software developer communities, local public administrations can adopt a specific application and contribute to its evolution while enjoying the benefits of full access to a global pool of experts and developers ready to fix problems and suggest solutions.") 21 Naturally, each of these models of administrative law has been subject to criticism. This Article does not address those debates but instead endeavors to show how the varying models of administrative law would support this proposal. vulnerabilities. This Article concludes by offering some refinements to the proposal described in Part II. I. 
CLOSED CODE IN THE ADMINISTRATIVE STATE Information systems used by agencies bring important benefits to the administrative state. For instance, automated systems cut costs, allowing agencies to manage data efficiently, and they apply policy in a uniform manner. This Part provides a typology of systems whose source code is closed and then explores the problems they raise. A. Typology of Closed Systems Agencies employ three types of closed systems. The first type collects and processes information. A prominent type of data processing system is electronic voting machines. After the passage of the Help America Vote Act in 2002, municipalities, counties, and states rushed to buy electronic voting systems that record and tally votes. Private vendors build e-voting systems, incorporating both commercial off-the-shelf software and their own software. Election Systems & Software ("ES&S"), --- 23 This Article does not endeavor to present an exhaustive taxonomy of information systems used by agencies. Instead, it categorizes information systems that have a profound effect on public policy and important individual rights and whose opacity impacts important administrative law values. 24 This Article refers to such systems as “data processing systems.” 27 Rubin, Brave New Ballot at 13 (cited in note 10). 28 Hearing Before the Subcommittee on Elections of the House Committee on House Administration, 110th Cong, 1st Sess 2 (Mar 15, 2007) (testimony of Professor David Wagner), available at <http://www.cs.berkeley.edu/~daw/papers/testimony-house07.pdf> (last visited Mar 6, 2008) (hereinafter Wagner Testimony) (stating that “a voting system vendor like Diebold might license software from Microsoft for use in their touchscreen voting machine”). Those vendors typically do not have permission to provide the source code to others. Diebold, Sequoia, and Avante manufacture most of this country’s e-voting systems. E-voting systems use proprietary software.
As a result, election officials, candidates, technical experts, and interested citizens typically cannot inspect the source code to ensure the software works correctly. Courts provide trade secret protection to the source code, refusing access to it even in cases where programming errors allegedly caused election irregularities. Another data processing system is the Census Bureau’s Current Population Survey (“CPS”). CPS uses Windows-based software that processes interviews and aggregates census data to determine the amount of federal aid distributed to state and local governments, including housing assistance, public benefits, and unemployment.33 Census 2000 affected the allocation of over two trillion dollars.34 The second type of automated system executes policy and renders decisions about individuals.35 Programmers building these systems translate policy into code.36 For example, automated public benefits systems suggest eligibility determinations and benefit calculations to case workers.37 Similarly, the Internal Revenue Service uses a decision-making system that identifies individuals who should be subject to tax audits.38 The third type of closed system stores and disseminates sensitive information.39 For instance, data storage systems collect contract data for the Department of Homeland Security.40 State election boards maintain databases of eligible voters.41 State and federal agencies store the sensitive personal information of --- 33 Email from Fran Horvath, Office of Employment & Unemployment Statistics, Bureau of Labor Statistics, to Alice B. Johnson, Research Fellow, University of Maryland School of Law (Sept 20, 2007) (on file with the University of Chicago Legal Forum) ("Horvath Email"). The Current Population Survey ("CPS") is collected by the Census Bureau on behalf of the Bureau of Labor Statistics. Id. Once interviews are collected, closed code software known as Blaise processes the information. Id.
The Bureau uses these products to aggregate micro data, which is seasonally adjusted. Id. Seasonal adjustments are made by a software program that is open code and available to the public for downloading. Id; US Census Bureau, The X-12-ARIMA Seasonal Adjustment Program, available at <http://www.census.gov/srd/www/x12a/> (last visited Feb 24, 2008); Kenneth Prewitt, The US Decennial Census: Political Questions, Scientific Answers, 26 Population & Dev Rev 1, 5–6 (2000). 34 Prewitt, 26 Population and Dev Rev at 6 (cited in note 33). Given these stakes, it is “no surprise that there is a partisan edge to the focus on census numbers.” US Government Accountability Office, Rep No GAO-06-567, Federal Assistance: Illustrative Simulations of Using Statistical Population Estimates for Reallocating Certain Federal Funding 3–4 (2006) (explaining that Census data determines federal grant programs such as Medicaid, Temporary Assistance for Needy Families, National School Lunch Program, Head Start, transit grants, child support enforcement, state administrative matching for food stamp program, public housing funds, and unemployment insurance). 35 This Article refers to the second type of automated system as “decision-making systems.” 39 This Article refers to the third type as “data storage systems.” employees and citizens. The Environmental Protection Agency’s data registry collects information about firms’ environmentally-related activities that is then released in an annual report. B. Problems of Closed Systems 1. Inaccuracy. Programming errors in closed systems frequently cause inaccurate findings. Such errors are particularly common in data processing and decision-making systems. As this section explains, software errors can disenfranchise voters, undercount communities for the census, and distort policies in automated public benefits systems. In hundreds of instances, e-voting machines have lost or added votes.
In November 2006, e-voting systems in Florida failed to record eighteen thousand ballots in a hotly contested congressional race. During the 2006 primaries, e-voting machines in Cuyahoga County, Ohio made serious errors: “in 72.5 --- 42 See Daniel J. Solove, The Digital Person: Technology and Privacy in the Information Age 182 (NYU 2004); Citron, 80 S Cal L Rev at 295 (cited in note 9). 44 To be sure, the programming errors and security problems discussed in this section occur in both open and closed systems. But these problems are particularly troubling in closed systems as they cannot be easily identified and fixed. percent of the audited machines, the paper trail did not match the digital tally on the memory cards.” In 2004, e-voting machines in an Ohio precinct recorded 3,893 votes for President Bush even though only 800 individuals were registered to vote there. In Indiana, e-voting machines counted 144,000 votes in a county that only had 5,352 registered voters. In 2002, Florida's e-voting machines lost as much as 21.5 percent of the votes in certain counties. In 2000, e-voting machines in Iowa recorded four million votes when roughly three hundred ballots were inputted. Local officials caught these errors due to the obvious disparities between the number of votes cast and the number of registered voters. In some cases, official inquiry into these errors led to the discovery of other problems, including a vendor's failure to certify its e-voting machines. But less obvious errors, such as switching votes from one candidate to another, are much more likely to go unnoticed. A July 2007 investigative report 47 Thompson, Voting Machines, NY Times Magazine (cited in note 45). 48 John Schwartz, Glitch Found in Ohio Counting, NY Times A12 (Nov 6, 2004). In Franklin County, Ohio, an e-voting system reported that Bush received 4,258 votes against 260 for Kerry in a precinct where only 638 voters had cast ballots. Rubin, Brave New Ballot at 259 (cited in note 10).
52 Moynihan, 64 Pub Admin Rev at 519 (cited in note 45). 53 Certification provides independent verification that voting systems comply with the “functional capabilities, accessibility, and security requirements necessary to ensure the integrity and reliability of voting systems.” Wikipedia, Certification of Voting Machines, available at <http://en.wikipedia.org/wi/Certification_of_voting_machines> (last visited Feb 24, 2008). Under HAVA, the U.S. Election Assistance Commission (“EAC”) bears responsibility for accrediting voting system test laboratories and certifying voting equipment through the Voting System Certification & Laboratory Accreditation Program. Id. Although federal certification for voting machines is voluntary, most states require such certification for their voting systems. See also note 31 discussing the EAC's role in accrediting e-voting systems. revealed that 30 to 40 percent of ES&S's e-voting machines under review changed voters' selections.56 Colorado's Secretary of State decertified e-voting systems manufactured by ES&S because tests demonstrated that the machines could not accurately count votes.57 Software flaws in e-voting machines raise concerns about the accuracy of other data processing systems. For instance, programming errors in CPS could result in inequitable funding for communities.58 Software flaws that cause miscounts would deny local jurisdictions funds from federal programs.59 If CPS undercounts the population in a jurisdiction with concentrations of groups requiring federal assistance, members of those groups will be deprived of entitlements that the benefits systems were designed to provide them.60 Decision-making systems are also riddled with programming flaws. When computer programmers translate policy into automated public benefits systems, they often distort it.61 This is so for several reasons.
Although all translations shade meaning, a translation of policy from human language into code poses a more significant risk of radically altering that policy than would a translation from English to another human language. This is in part because artificial languages intelligible to computers have a limited range of key words as compared to human languages. Computer languages thus may be unable to capture a policy's nuances. Code writers interpret policy when they translate it from human language to computer language. Distortions in policy have been attributed to the fact that programmers building code lack "policy knowledge." This is neither surprising nor easily remedied. Private information technology consultants cannot be expected to have specialized expertise in regulatory or public benefits programs. And programmers working for government agencies tend to work on a wide variety of programs, preventing them from developing expertise in any given area. Policy changes may stem from a programmer's values. Programmers can unconsciously phrase a question in a biased manner. In a complex software system composed of smaller subsystems, --- 66 Aus Admin Rev, Automated Assistance at 29 (cited in note 63). 68 Code embeds the values and choices of the code writer. Lawrence Lessig, Code Version 2.0 102 (Basic 2006). 69 See Helen Nissenbaum, How Computer Systems Embody Values, Computer 119 (Mar 2001) (explaining that systems can unfairly discriminate against specific sectors of users); Batya Friedman and Helen Nissenbaum, Bias in Computer Systems, 14 Assoc Computing Machinery Transactions on Info Systems 330, 333 (1996) (describing automated loan program whose system assigns negative value to applicants from certain locations, such as high-crime or low-income neighborhoods).
the actual bias of the system "may well be a composite of rules specified by different programmers."70 Inaccuracy can spring from a code writer's preference for binary questions that are easily translated into code.71 Policy, however, often involves the weighing of multiple variables.72 There is a significant risk that code writers may fail to accurately capture these nuances given their bias for binary choices.73 Programmers also may inappropriately narrow the discretion available to a system's users.74 Distorted policy might also stem from an agency's decision to automate policy changes that require, but have not received, rulemaking procedures. Professor Evelyn Brodkin has studied frontline bureaucratic routines that create new policy at the point of delivery.75 For instance, lower-level bureaucrats often make policy when established policy is internally contradictory.76 Such practices produce "street-level" welfare policies that have not been published and vetted through notice-and-comment rulemaking procedures.77 Decision-making systems could automate such policy.78 Whether distorted policy stems from programming errors or deliberate agency action, the resulting inaccuracy is the same. Automated public benefits systems in California, Colorado, Florida, and Texas incorporated distorted policies that changed --- 70 Grimmelmann, 114 Yale L J at 1737 (cited in note 64). 71 Denise Kersten, Bytes vs. Brains, 37 Government Exec 30 (Sept 1, 2005) (explaining the difficulties that may arise from translating more complex inquiries into code).
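The preference for binary questions described above can be sketched with a hypothetical eligibility rule. The three-month time limit loosely echoes the food stamp rule for childless adults discussed in the notes, but the function names and the single `is_exempt` flag are my own simplifications, not the statute's actual six cross-referenced exceptions.

```python
# Hypothetical sketch of how a nuanced rule can be flattened into a binary
# one in code. The three-month limit loosely echoes the food stamp rule for
# childless adults; eligible_binary, eligible_with_exceptions, and the
# is_exempt flag are illustrative names, not real benefits-system code.

def eligible_binary(months_received: int) -> bool:
    """The 'easy' translation: a bare time limit with no exceptions."""
    return months_received < 3

def eligible_with_exceptions(months_received: int, is_exempt: bool) -> bool:
    """A translation that preserves an exception the policy requires."""
    return months_received < 3 or is_exempt

# An exempt applicant in month four: the binary rule silently denies
# benefits the policy would grant -- the distortion the text describes.
print(eligible_binary(4))                 # prints False
print(eligible_with_exceptions(4, True))  # prints True
```

Both functions agree on ordinary cases; the divergence appears only for applicants covered by an exception, which is precisely the population the "easy" translation disserves, and which closed code keeps anyone from noticing.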
72 For example, the Food Stamp Act and federal regulations limit food stamps of childless adults to three months with six exceptions, which cross reference other exceptions that, in turn, refer to still other exceptions. 7 USC § 2015(o) (2000); 7 CFR § 273.25 (2008). Those writing code may be tempted to impose a three-month rule without the complicated and arguably confusing exceptions. See David A. Super, Are Rights Efficient? Challenging the Managerial Critique of Individual Rights, 93 Cal L Rev 1051, 1096 n 205 (2005) (discussing potential for eligible workers and those designing notices to read three-month rule with regard to childless adults seeking food stamps without regard to the exceptions). 73 Aus Admin Rev, Automated Assistance at 21 (cited in note 63) (stating specific instances in which allowing an agency officer to override an expert system would be preferable). 74 Id. 76 Id at 149. 77 Id. 78 Automated street-level welfare policy would require notice-and-comment rulemaking to the same extent that non-automated street-level policy would. established rules, often in violation of federal and state law. For instance, code writers embedded over nine hundred incorrect rules into Colorado's Benefits Management System ("CBMS") from September 2004 to April 2007. With one such incorrect rule, CBMS denied Medicaid to breast and cervical cancer patients based on income and asset limits that were not authorized by federal or state law. Another distorted rule caused CBMS to discontinue food stamps to individuals with past drug problems in violation of Colorado law. In all, CBMS rendered hundreds of thousands of erroneous eligibility decisions and benefits calculations. Data processing systems can have serious security problems. In 2007, California's Secretary of State launched an investigation of the state's e-voting systems.
Teams of computer scientists found "deep architectural flaws" in the source code of the state's e-voting machines. These flaws rendered the e-voting systems --- 79 See Colorado Benefits Management System, Decision Table Release Notes Covering 2004–2007; Deloitte, CBMS Post-Implementation Review at 10 (cited in note 67) (explaining that there were 175 distinct defects in the Medicaid rules table in 2005). For other incorrect rules encoded in the system, see Colorado Benefits Management System, Decision Table Release Notes for February 24–25, 2007 19 (Feb 26, 2007) (issuing correction of code that exempted a child's earnings in calculating food stamps where the child was the head of the household in contravention of federal regulations); Colorado Benefits Management System, Decision Table Release Notes for March 10–11, 2007 10 (Mar 7, 2007) (fixing rule that improperly imposed income limits on women with breast or cervical cancer in violation of 42 USC § 1396r-1b and Colo Rev Stat Ann § 25.5-5-308). 81 Colorado Benefits Management System, Decision Table Release Notes for February 3–4, 2007 24 (Feb 1, 2007) (correcting rule embedded in system that contravened Colo Rev Stat § 26-2-305, which mandates that individuals "shall not be ineligible [for food stamps] due to a drug conviction unless misuse of food stamp benefits is part of the court findings"). 82 David Migoya, Feds Give Colorado a Big Bill, Denver Post B1 (Apr 12, 2007) (explaining that CBMS made up to 11,000 errors per month). 84 Calandrino et al, Source Code Review at 10–24 (cited in note 83). The voting machines subject to review were manufactured by Diebold Election Systems, Hart InterCivic, Sequoia Voting Systems, and Elections Systems and Software, Inc. vulnerable to attacks and bugs. For instance, the source codes allowed the insertion of malicious code and viruses that would alter votes. Reviewers also found the source codes to be too complex to resist bugs.
One vendor incorporated Microsoft's Windows, which is notorious for security problems, in its system. All of the state's e-voting systems used vulnerable encryption schemes, often with critical security codes stored in files as plain text. Based on these findings, California's Secretary of State ordered vendors to fix the systems and has conditionally recertified them pending further review. In December 2007, Colorado's Secretary of State decertified the state's Sequoia e-voting machines due to a variety of security risk factors. Data storage systems also lack adequate security, facilitating the release of sensitive personal information kept by agencies. Consider these data leaks from 2006 and 2007. Attackers broke into the Department of Energy's computer system and stole Social Security numbers of federal employees. Hackers breached the Nebraska Treasurer's system, stealing Social Security numbers and tax identification numbers from nine thousand businesses. The Chicago Voter Database was breached, compromising the Social Security numbers of 1.35 million residents. Attackers invaded the online database of Iowa's Department of Education, exposing sensitive personal data of six hundred individuals. The release of sensitive personal data raises the risk of identity theft and stalking. Current legal mechanisms have not sufficiently addressed the security problems that afflict data storage systems. --- Calandrino et al, Source Code Review at 10–24 (cited in note 83). One of the reports explained that creating a voting machine virus would require moderate programming skills and access to voting equipment, both of which are available. Id. Indeed, a Diebold system was recently listed on eBay. Id. Id at 24. Diebold's systems also used C and C++ programming languages, which are known to be prone to security problems. Id at 28–29. Id.
The E-Government Act of 2002 ("E-Government Act") requires federal administrative agencies to conduct privacy impact assessments ("PIAs") when developing or purchasing systems that collect, store, or disseminate personally identifiable information. Pursuant to Office of Management and Budget ("OMB") guidance, PIAs must identify and evaluate potential threats to privacy, discuss alternatives, identify appropriate risk mitigation measures, and articulate the rationale for the final design choice. The E-Government Act, however, has achieved mixed results to date. The incidence of agency noncompliance is significant: 12 percent of agencies do not have written processes or policies for all listed aspects of PIAs and 16 percent of systems covered by the PIA requirement did not have a complete or current PIA. As Kenneth Bamberger and Deirdre Mulligan have forcefully argued, the E-Government Act may have little chance of future success in part due to the public's inability to comment on the design of systems whose specifications and source codes remain obscured. An open code solution would tackle this problem. --- 93 Id. 94 Id. 95 Id. For instance, independent contractor Unisys Corporation built and managed the information technology networks for the Transportation Security Administration and the DHS headquarters. Nakashima and Krebs, Contractor Faulted, Wash Post (cited in note 40). The closed nature of the system prevented the agency and the public from overseeing the system, which was subject to three months of cyber-intrusions by hackers. It allowed Unisys to falsely certify that the network had been protected to cover up its lax oversight. Id. 96 Citron, 80 S Cal L Rev at 251–52 (cited in note 9). 100 Kenneth A. Bamberger and Deirdre K. Mulligan, Privacy Decisionmaking in Administrative Agencies, 75 U Chi L Rev 75, 76, 81–82, 83 (2008) (arguing that PIA process requirement is insufficient to address privacy concerns).
The next Part suggests opening up these systems and explores why administrative law values support this proposal. II. ENHANCING THE DEMOCRATIC AND EXPERT NATURE OF ADMINISTRATIVE GOVERNANCE WITH OPEN CODE Closed code inhibits public participation in the development of critical information systems. Because the technical community has no opportunity to identify a system's problems, an uninformed public cannot press politically accountable actors to remedy them. With closed code, the expertise of a broader technical community is unavailable to agencies. An open code model has the potential to redress these problems. This Part begins by developing that model. Then, it demonstrates how open code governance can advance the transparency, democratic legitimacy, and expertise of the administrative state. A. Open Code Proposal The source code of critical information systems should be open to the public. Open code would reveal how a system works, shedding light on the policies encoded in it. It would allow interested parties to discuss the assumptions that underlie the digital processes. And open code would permit inspection of a system's security features. This proposal does not insist that agencies eliminate private vendors and generate the code themselves, either by relying on volunteer programmers or on government information technology departments. Instead, vendors constructing these systems would be required to release the source code to the public before their purchase or implementation. --- 101 Id at 81. 102 Id at 88–89. 103 See, for example, Lawrence Lessig, The Limits in Open Code: Regulatory Standards and the Future of the Net, 14 Berkeley Tech L J 759, 764 (1999) (arguing that open code decreases the opportunity for government regulation of code).
Just as procurement contracts insist that government contractors refrain from discriminatory practices, agencies could require that vendors make transparent the source code for critical systems to facilitate public feedback and executive oversight. Computer security expert Bruce Schneier explains that systems built by private vendors whose source codes are opened to the public offer both safety and reliability. An open code model could be pursued in various ways. Agencies could insist on open code systems. Vendors would be required to release to the public a system's specifications and source code during the bidding process and before a purchased system goes live. To that end, the OMB could issue a circular conditioning the provision of federal funding for technology purchases on the use of open code. A state budget office could do the same for local purchases receiving state aid. For example, the San Francisco Elections Commission ("Commission") has issued a non-binding appeal to California's Department of Elections to "make reasonable efforts to select and use voting systems technology, including hardware and software that at a minimum is publicly disclosed." The Commission defined "public disclosure" as the right to inspect, test, and comment on technology during the procurement process. Thus, if adopted, this policy would require prospective vendors to release their source codes during the bidding process. Alternatively, legislators could mandate open code systems. For instance, eighteen countries require the use of open source software in government offices. In 2006, the California legislature held hearings on whether its electoral system should use open source software. The next sections provide normative support for the use of open code software, relying on different models of the administrative state. B. Participation Enhanced Open code systems secure meaningful opportunities for public input, advancing the participatory model of administrative law. This model promotes collaboration between the public and agencies in setting and achieving policy goals. --- 105 Camp, 135 Proceedings of the British Academy (cited in note 16). Programmers should provide comments that explain why they wrote the code the way that they did and exactly how they did it. See Posting of Rebecca Buckman, Men Write Code From Mars, Women Write More Helpful Code From Venus, Wall St J Blog (June 6, 2008), available at <http://blogs.wsj.com/biztech/2008/06/06/men-write-code-from-mars-women-write-more-helpful-code-from-venus/> (last visited June 30, 2008). The code would then become a roadmap for others who want to understand the policies embedded in it. Id. Emma McGrattan, one of Silicon Valley's highest-ranking programmers, has instituted new coding standards at Ingres, where she is a senior vice-president of engineering, which require programmers to include a detailed set of comments before each block of code explaining what the piece of code does and why and a detailed history of any changes programmers make to the code. Id. I thank James Grimmelmann for this helpful point. 106 Wagner Testimony at 3 (cited in note 28). 108 But see Yochai Benkler, Freedom in the Commons: Towards a Political Economy of Information, 52 Duke L J 1245, 1275 (2003) (advocating that software written for government should be released as "free software" under relaxed licensing regime to enhance the commons approach of software development). 109 I thank my colleague David Super for this insight.
Although the value of public participation varies depending on the context, it is viewed as generating better information for agency deliberation. This model envisions participation as enhancing an agency’s legitimacy by cultivating the public’s sense that it is involved in, and bears responsibility for, government. In addition, participation is understood as offsetting the influence of well-organized interest groups through the inclusion of traditionally unrepresented interests. An open code model creates new opportunities for diverse groups to participate in the automated administrative state. Networked technologies certainly make public participation easier and cheaper. Digital networks facilitate peer production, a process by which individuals, whose actions are not coordinated either by managers or by market price signals, jointly produce information. --- 111 Id. 113 Id at 60 (explaining that national legislatures of Belgium, Brazil, Bulgaria, Chile, Colombia, Costa Rica, France, Italy, Peru, Spain, and Ukraine require use of open-source software in government offices). 115 This Article uses the term "participatory model" to refer to a constellation of theories of regulatory governance that envision regulation as the product of collective deliberation about regulatory goals and priorities. See, for example, Orly Lobel, The Renew Deal: The Fall of Regulation and the Rise of Governance in Contemporary Legal Thought, 89 Minn L Rev 342, 377 (2004); Steven P. Croley, Theories of Regulation: Incorporating the Administrative Process, 98 Colum L Rev 1, 76 (1998). See also Cass R. Sunstein, After the Rights Revolution: Reconceiving the Regulatory State (Harvard 1990) (viewing governmental process as deliberation oriented to public good rather than series of interest-group tradeoffs); Gerald E. Frug, Administrative Democracy, in David H. Rosenbloom and Richard D. Schwartz, eds, Handbook of Regulation and Administrative Law 519, 520 (Marcel Dekker 1994).
Peer production facilitates collaboration among “radically diverse” groups. According to Yochai Benkler’s social production theory, our networked information environment has produced a popular culture that encourages active participation in matters of public policy. Such “commons-based” participation arguably deepens the legitimacy of government action. Consider the online communities that exposed an e-voting system’s flaws in 2003. Early that year, activist Bev Harris found Diebold’s source code on the company’s website. Harris posted the source code on her website, urging viewers to examine and distribute it to file-sharing networks. Internet discussion forums avidly discussed the source code’s technical imperfections. Computer scientists from Johns Hopkins and Rice University reviewed the source code, posting their criticism on the internet. A few months later, a hacker sent Harris a cache of internal Diebold emails that demonstrated the company knew that certain of its e-voting systems had problems. After Harris posted the emails on her website, college students widely distributed them to peer-to-peer networks to keep the issue before the public. --- 118 Rossi, 92 Nw U L Rev at 211 (cited in note 17). 119 See Russell J. Dalton, *The Good Citizen: How a Younger Generation is Reshaping Politics* 170 (CQ 2008) (explaining that younger Americans, such as members of Generation X and the Millennials, seem likely to seize upon new, networked opportunities for public participation). Political science research reveals that newer generations tend to connect with government through online public interest groups and internet discussion forums. Id at 75. This proposal would tap into these peer-to-peer networks and enhance the legitimacy of the administrative state. 120 See Jonathan Zittrain, *The Future of the Internet—And How to Stop It* 92 (Yale 2008) (explaining that the generative Internet and PC make political and artistic expression easier).
In late 2003, California’s Voting Systems Panel (“Panel”) launched an investigation into Diebold’s e-voting machines. The Panel subsequently removed certain of the company’s e-voting machines from the state’s voting precincts. As the Diebold example suggests, revealing the source codes to the public would allow individuals and groups to study the accuracy and security of these systems. For example, online communities could evaluate a system's design for hidden biases.\textsuperscript{134} Programmers recruited by public interest groups could check the policies embedded in automated decision-making systems like CBMS. They could provide feedback on the privacy and security risks posed by proposed systems.\textsuperscript{135} This feedback would exert pressure on agencies to fix problems at the margins that they might be inclined to ignore. --- 124 Id. 125 Rubin, Brave New Ballot at 32 (cited in note 10). 127 Harris, Black Box Voting at 104, 140–47 (cited in note 30). 128 Benkler, The Wealth of Networks at 227 (cited in note 122). See Rubin, Brave New Ballot (cited in note 10) (describing his role in exposing weaknesses in Diebold source that Bev Harris discovered). Computer scientists found that a hacker could program a voter card to let it cast as many votes as the hacker liked. Thompson, Voting Machines, NY Times Magazine (cited in note 45). 130 Id at 230. 131 Id. 132 Id at 231.
Such participation could enhance the public's perception of these systems.\textsuperscript{136} Indeed, the Netherlands has focused its e-Government initiative on the adoption of open source software for the accuracy and legitimacy it brings.\textsuperscript{137} The public's participation could potentially combat interest-group capture of agencies and cronyism.\textsuperscript{138} Open code could illuminate agency decisions that advance the interests of powerful groups.\textsuperscript{139} For instance, if California's Department of Elections insists that vendors disclose their source codes during the bidding process, the technical community would have the opportunity to expose flaws in e-voting systems before election boards sign procurement contracts.\textsuperscript{140} Such feedback might inhibit an agency's inclination to pick vendors based on political connections.\textsuperscript{141} Open code thus has the potential to address concerns that special interests might dominate the procurement process.\textsuperscript{142} The drafters of the Administrative Procedure Act aimed to establish a system in which "citizens and representatives, operating through responsive but expert organs, would make deliberative decisions."\textsuperscript{143} Scholars lament that these democratic aspirations have not been realized.\textsuperscript{144} Public participation has withered in part due to the complexity of regulatory issues, the power of interest groups, and the expense of participation.\textsuperscript{145} Closed systems make this problem worse. Open code, however, could reverse this trend. It could also facilitate the participation of individuals who previously had little connection with the administrative state. --- \textsuperscript{134} Lessig, Code Version 2.0 at 102 (cited in note 68) (arguing that members of the technical community now have power to restructure norms); Nissenbaum, How Computer Systems Embody Values, Computer at 119 (cited in note 69) (noting that engineers now face the challenge of building systems with certain moral properties). \textsuperscript{135} See Bamberger and Mulligan, 75 U Chi L Rev at 89 (cited in note 100) (explaining that because the PIA and other public documentation of e-Passport program did not provide the exact specifications of the system under consideration, the public could not review and test the proposed system). Professors Bamberger and Mulligan explain that the E-Government Act lacks explicit mechanisms for public participation in the PIA process, thus limiting opportunities for outside experts to assist the agency in identifying the privacy implications of complex data storage systems. Id. at 87. Although no formal process for public participation is provided under the E-Government Act, this proposal would enable outside groups and technicians to provide agencies with informal feedback on the security features and privacy problems posed by proposed systems. \textsuperscript{138} The central concern here is that well-organized groups exercise disproportionate influence over agency policymaking. Stewart, 88 Harv L Rev at 1684–1687 (cited in note 117). Scholars have argued that administrative law ought to promote deliberative rationality and to constrain the influence of special interest groups. Sunstein, 72 Va L Rev at 271–96 (cited in note 116). \textsuperscript{139} Public choice theory contends that administrative regulation is little more than private contracts that benefit interest groups at the public expense. Jerry L. Mashaw, Greed, Chaos, and Governance: Using Public Choice to Improve Public Law 23–29 (Yale 1997). \textsuperscript{140} See text accompanying notes 104–05 discussing San Francisco Elections Commission.
As the next section discusses, informed citizens could pressure elected officials to ensure the accuracy and security of critical automated systems, amplifying their officials’ political accountability. C. Political Accountability Facilitated This proposal should also appeal to supporters of a strong executive model of administrative law. This model views presidential and gubernatorial influence over agency action as enhancing the administrative state’s accountability by creating an “electoral link between the public and the bureaucracy.”¹⁴⁶ Presidents and governors concern themselves with an agency’s effectiveness because the public holds chief executive officers responsible for governmental performance.¹⁴⁷ Thus, executive officers and their senior staff work to ensure that agencies achieve their “objectives, without undue cost, in an expeditious and coherent manner” to ensure reelection.¹⁴⁸ The model contends that presidential administrations would be more likely to consider the preferences of the general public, rather than just parochial interests.¹⁴⁹ This Article’s proposal closes the information gap between a system’s designers and the public, allowing the public to formulate more focused, informed complaints about a troubled system and to present those complaints to chief executive officers.¹⁵⁰ Senior executive staff could then respond to the public’s specific concerns. The specificity of the public’s complaints would make it harder for agencies to ignore them. At the same time, an open code approach would make it easier to hold an agency accountable for its response to such complaints. --- ¹⁴⁶ Sunstein, Free Markets at 322 (cited in note 143). ¹⁴⁸ Id. Advocates of this view argue it is equally applicable to executives whose desire for reelection is strong and to those who cannot serve again given their interest in their historical legacy. This view notes that the accountability point should not be overstated. 
The resolution of any particular regulatory issue plays a small role in the public’s perception of presidential performance. See id. ¹⁵⁰ This proposal would facilitate the transparency necessary for the operation of this model. Unlike software whose accuracy is unmistakably clear from its operation, such as life-critical systems like aircraft software, problems in closed code often remain hidden. In many instances, it may not be clear to the public that a problem even exists that needs correction. Colorado's experience with CBMS demonstrates the point. In response to both a lawsuit filed by public interest groups about the failure of CBMS, and media coverage of the issue, Colorado's Governor created a new agency position charged with fixing CBMS.\(^\text{151}\) Similarly, in 2007, California's Secretary of State launched an investigation of the state's e-voting systems after public interest groups expressed concerns about voter disenfranchisement.\(^\text{152}\) The next section explores how the open code model would protect and amplify the expertise of agency decision-making. D. Expertise Advanced The technical community's input would advance the expertise model of administrative law, which emphasizes an agency's role in bringing specialized knowledge into the political domain.\(^\text{153}\) An agency's expertise allows it to communicate with substantive experts, identify better experts, and assess which insights can be turned into workable administrative practices.\(^\text{154}\) Agencies have the capacity to bring together specialized personnel and data, facilitating comprehensive analysis that generalist legislatures cannot match.\(^\text{155}\) This model depends upon agencies having the necessary expertise and information available to them.\(^\text{156}\) The input of interested programmers could advance agency expertise in two critical ways. First, programmers could ensure that programming mistakes do not defeat an agency's own expertise.
For instance, technicians working with public interest firms could catch programming errors that alter established policy in systems such as CBMS.\textsuperscript{157} That feedback would allow an agency to insist that its vendor fix the code to reflect the agency’s own policy choices. Second, the technical community would provide agencies with crucial data to make optimal decisions. The expertise model extols agencies for their “capacity to bring together information on the beneficial and detrimental aspects of regulatory alternatives.”\textsuperscript{158} Closed systems prevent agencies from fulfilling that role. Open code would allow agencies to leverage the expertise of a broad technical community in making procurement decisions and in reviewing systems.\textsuperscript{159} This proposal would provide an inexpensive means to enhance the expertise of agency decision-making. --- \(^\text{151}\) Bill Scanlon, Benefits System Director Named, Denver Rocky Mtn News 28A (May 28, 2005). \(^\text{152}\) See text accompanying notes 83–91. \(^\text{155}\) See Mashaw, Due Process at 19 (cited in note 2) (explaining that the creation of prominent administrative agencies emerged as a result of the need for more specialized expertise); Breyer, Breaking the Vicious Circle at 73–74 (cited in note 154). \(^\text{156}\) Mashaw, Bureaucratic Justice at 50 (cited in note 116) (explaining that the ideal of instrumental rationality in the context of particular administrative programs depends on a variety of conditions including whether administrators have all of the facts that are relevant to decisionmaking).
Such expert input is particularly important for agencies that do not have access to such expertise either in-house or through outside advisors.\textsuperscript{160} For instance, election officials currently lack sufficient information to conduct rigorous reviews of e-voting systems.\textsuperscript{161} Election officials do not know enough about how the machines operate to assess them.\textsuperscript{162} As the elections supervisor of Florida’s Leon County explained: vendors control all of the information about their e-voting machines and will not “tell me that [ ] buggy software is why I can’t get the right time on [the machines’] audit logs.”\textsuperscript{163} If the systems’ vendors made the source codes public, computer scientists and academics could help local and state election officials in checking these systems.\textsuperscript{164} In other cases, the technical community could assess data storage systems for security vulnerabilities. Programmers could inspect systems to ensure that they comply with privacy laws. In short, the technical community's feedback would promote an agency's expertness. III. OBJECTIONS TO AN OPEN CODE MODEL This proposal, of course, is not free from serious objections. --- \textsuperscript{157} See Citron, 85 Wash U L Rev (cited in note 8). \textsuperscript{158} McGarity, Reinventing Rationality at 114 (cited in note 4). \textsuperscript{160} But see Bamberger and Mulligan, 75 U Chi L Rev at 100 (cited in note 100) (attributing success of Chief Privacy Officer of Department of Homeland Security Nuala O’Connor Kelly to, in part, her ability to build a staff with varied privacy training and expertise who actively participated in privacy associations and conferences). \textsuperscript{161} Rubin, Brave New Ballot at 24 (cited in note 10). \textsuperscript{162} Thompson, Voting Machines, NY Times Magazine (cited in note 45). \textsuperscript{163} Id. \textsuperscript{164} Wagner Testimony at 4 (cited in note 28).
This Part evaluates three central concerns about an open code model and concludes that this proposal deserves adoption. First, this proposal may face implementation and cost constraints. Agencies may be unable to insist that their vendors reveal the source code under current contract terms. In that case, the cost of switching systems would be a serious concern. A new system may require investments in equipment and staff training. For instance, the Census Bureau recently dedicated significant resources implementing CPS that it would not want to repeat. Vendors also may raise their systems' cost if forced to reveal their source codes. A switch, however, has the potential to reduce long-term costs, especially for troubled systems, such as e-voting machines and automated public benefits systems, which require substantial resources to fix. Over the past three years, Colorado has spent millions of dollars working on CBMS, which continues to be plagued by problems. Texas's adoption of a flawed automated public benefits system similarly wasted hundreds of millions of dollars, eventually requiring the state to replace its initial vendor with another firm.\textsuperscript{171} Open code would allow agencies and their vendors to enjoy feedback about a system's accuracy and security from programmers whose services are virtually free. Significantly, the benefits of a more transparent and legitimate system should not be undervalued. Open code would provide opportunities for public participation, political accountability, and expertise that are now absent. It might prevent the disenfranchisement of voters and ensure greater accuracy in decision-making systems. Agencies and legislatures should consider the short-term costs of a new system with the long-term savings of a more accurate, secure, and legitimate open system. Critics may argue that vendors will refuse to build open systems that reveal their trade secrets. --- 165 Of course not all security leaks relate to a system's flaws. Some are attributed to human error like the Veterans Administration employee who took home a laptop containing millions of SSNs of veterans and the laptop was stolen. See David Stout, Veterans Agency to Atone with Free Credit Monitoring, NY Times A22 (June 22, 2006). 166 Berry and Moss, 11 Info Polity at 30 (cited in note 104) (noting alternatives to available products that store user information in a more covert manner). See generally Solove, The Digital Person at 68-71 (cited in note 42). 167 Christopher F. Edley, Jr., Administrative Law: Rethinking Judicial Control of Bureaucracy 22 (Yale 1990) (arguing that outside participation increases agency expertise by giving people affected by administrative rules the opportunity to be heard and by negating the tendency of agencies to exercise power in an arbitrary way). 168 Lee, 9 Vand J Enter & Tech L at 73 (cited in note 112) (explaining that the costs of switching to a new system may be too high for governments to have an incentive to adopt an open code system). 169 Horvath Email (cited in note 33) (explaining that "[h]aving just undergone the lengthy and difficult behind-the-scenes conversion to Blaise, we are unlikely to [move towards open source software and] repeat that process in the foreseeable future"). 170 Jerd Smith, Audit: Costly Errors in Computer System for Benefits Had High Mistake Rate, Denver Rocky Mtn News 4A (Apr 19, 2006) (explaining that errors in computing system may cost Colorado as much as $10 million); Bill Scanlon, Millions Spent on Welfare Fix, Denver Rocky Mtn News 6A (Sept 3, 2005) (explaining that CBMS is "clumsy to use, has great trouble generating reports, requires users to work around kinks and makes mistakes issuing benefits"); Scanlon, Benefits System Director Named at 28A (cited
They may suggest that vendors will wait to see who moves first so they can free-ride on another's investments in research and development, resulting in stasis.\textsuperscript{172} A first-mover problem, however, may be illusory for two reasons. First, the high price tag of procurement contracts strongly suggests that vendors will design these systems. Because the government is the sole buyer in these markets, vendors will meet its conditions rather than dropping out of the market altogether. Indeed, in January 2008, Diebold spokesman Chris Riggall noted that "the company is considering making the software open source on its next generation of touch-screen machines" due to growing pressure from states.\textsuperscript{173} As Riggall explains: "if the expectations of our customers change, we'll have to respond to that reality."\textsuperscript{174} Second, vendors already have embraced the open code model given its potential for lucrative contracts. For instance, Open Voting Solutions, an e-voting machine vendor whose source code would be publicly available, has submitted proposals to boards of elections in New York.\textsuperscript{175} --- \textsuperscript{172} Cindy Cohn of the Electronic Frontier Foundation raised this issue at the "Law in a Networked World" symposium. \textsuperscript{173} Thompson, \textit{Voting Machines}, NY Times Magazine (cited in note 45). \textsuperscript{174} Id. If a first-mover disadvantage does arise, then states could band together in a consortium to purchase systems, splitting the costs of research and development. A first-mover disadvantage supports the OMB's involvement in this issue. The OMB could help coordinate purchasing or have federal agencies purchase en masse for all of the states that participate in programs that they run.\textsuperscript{176} Between the software the OMB buys directly and that it funds, it dominates the market.
Given the important public policy concerns at stake, the government has every right to use its market power to ensure that products meeting its specifications are available. The second objection involves skepticism about whether this Article's proposal would generate the benefits that it promises. Some may question whether a broader technical audience would, in fact, review the source code of certain systems.\textsuperscript{177} The typical open source project only has a small number of contributors.\textsuperscript{178} That surely would not be true of high-profile systems, such as e-voting machines. As Professor Wagner has explained, and as past practice makes clear, open code e-voting systems would attract "the country's best independent technical experts to analyze the source code and publish their findings."\textsuperscript{179} Such projects generate interest due to the reputational advantages of participating in such projects.\textsuperscript{180} Consider Australia's open code e-voting project. A private company designed Australia's e-voting system and posted all of the drafts of its source code online for review and criticism.\textsuperscript{181} Interested programmers and independent auditors studied the source code and provided feedback.\textsuperscript{182} An Australian National University professor caught the most serious problem.\textsuperscript{183} The vendor, in turn, fixed the source code, shoring up the system's security.\textsuperscript{184} Australia's e-voting system has received broad praise for its reliability and security.\textsuperscript{185} Similarly, computer scientists working for the Open Voting Consortium have begun programming open source software for election systems in the United States.\textsuperscript{186} Systems affecting interest groups also would receive attention. One might imagine that public interest groups would direct significant energies to ensuring the accuracy of automated decision systems such as CBMS. Programmers might also review the source code for public benefits systems due to a sense that they are part of a meaningful social project.\textsuperscript{187} Although the chances of review are reduced for low-profile systems, the possibility is never completely absent or predictable. Indeed, computer security academics might ask students to assess such systems. Even if the source code of systems is not actually studied, important benefits remain. Those who believe that their work will be reviewed are more careful.\textsuperscript{188} Due to the reputational costs of sloppy work, source code disclosure gives vendors a powerful incentive to ensure that their code is free of problems.\textsuperscript{189} Thus, the open code model may inspire vendors to more thoroughly check the code's accuracy and security even for obscure programs. The third objection concerns the security of open code systems. --- \textsuperscript{176} It is naturally true that states need a great deal of customization for systems depending upon how they administer a program and what policies they have selected for those programs. \textsuperscript{177} Jason Kitcat, \textit{Source Availability and E-Voting: An Advocate Recants}, Commun of the Assoc for Computing Machinery 65, 66 (Oct 2004) (arguing that the more likely scenario is that the majority of open code would be ignored by the broader audience). Paul Ohm raised this concern at the "Law in a Networked World" symposium. \textsuperscript{178} Kitcat, \textit{Source Availability} at 66 (cited in note 177). \textsuperscript{179} Wagner Testimony at 4 (cited in note 28). \textsuperscript{180} Sunstein, \textit{Infotopia} at 148 (cited in note 116). \textsuperscript{181} Moynihan, 64 Pub Admin Rev at 524 (cited in note 45). \textsuperscript{182} Id. \textsuperscript{183} Id.
Software manufacturers argue that open code would increase a system’s vulnerability.\textsuperscript{190} The computer security literature, however, rejects the notion that secrecy ensures a system’s \textsuperscript{184} Id. \textsuperscript{187} Sunstein, \textit{Infotopia} at 160–62 (cited in note 116). \textsuperscript{188} As Jeremy Bentham initially observed, the fear of observation results in increased obedience and discipline. Solove, \textit{The Digital Person} at 98 (cited in note 42); Michel Foucault, \textit{Discipline and Punish: The Birth of the Prison} 200 (Pantheon 1977) (Alan Sheridan, trans). By contrast, vendors who keep their source code secret are more likely to be sloppy. Schneier, \textit{Secrets and Lies} at 344 (cited in note 107). \textsuperscript{189} Wagner Testimony at 5 (cited in note 28). \textsuperscript{190} Daniel P. Tokaji, \textit{The Paperless Chase: Electronic Voting and Democratic Values}, 73 Fordham L Rev 1711, 1794 (2005) (explaining the pros and cons of the “security through obscurity” approach taken by software users in restricting access to code). 
safety.\textsuperscript{191} This literature explains that security is not achieved by concealing security defects, but instead by allowing interested programmers to identify flaws that need to be fixed.\textsuperscript{192} Open code enlarges the available pool of intelligence, enabling a community of testers to identify bugs and problems with the code.\textsuperscript{193} Because it is more likely that flaws will be discovered if the source code is available for inspection, computer scientists advocate open e-voting systems.\textsuperscript{194} The only security measures that must remain secret are a system's changeable secrets, such as its passwords and cryptographic keys.\textsuperscript{195} At the same time, revealing the source code incurs only a low level of risk.\textsuperscript{196} Unlike a warring nation that learns much from discovering an enemy's military plans, computer attackers learn little from the disclosure of a system's source code.\textsuperscript{197} This is because computer security measures, such as firewalls, have a low level of uniqueness.\textsuperscript{198} As a result, attackers can find a system's flaws without the source code.\textsuperscript{199} Studies demonstrate that open source software provides better security than proprietary software.\textsuperscript{200} For this reason, agencies with salient security requirements, such as the Department \textsuperscript{192} Schneier, \textit{Applied Cryptography} (cited in note 191). \textsuperscript{193} Eric S. Raymond, \textit{The Cathedral and the Bazaar: Musings on Linux and Open Source by an Accidental Revolutionary} 19 (O'Reilly rev ed 2001) ("Given enough eyeballs, all bugs are shallow."); Wagner Testimony at 4 (cited in note 28); Peter P. 
Swire, \textit{A Model for When Disclosure Helps Security: What Is Different About Computer and Network Security?}, 3 J Telecommun & High Tech L 163, 169 (2005) (arguing that multiple users are more likely to identify and correct flaws in the code). A corollary point is that the soundness of a decision grows the more diverse the minds inspecting it. For a general discussion, see Sunstein, \textit{Infotopia} (cited in note 116). \textsuperscript{195} Schneier, \textit{Secrets and Lies} at 344 (cited in note 107). \textsuperscript{196} Swire, 3 J Telecommun & High Tech L at 168 (cited in note 193). \textsuperscript{197} Id at 168. \textsuperscript{198} Id. \textsuperscript{199} Id. \textsuperscript{200} Jaap-Henk Hoepman and Bart Jacobs, \textit{Increased Security Through Open Source}, Commun of the Assoc for Computing Machinery 79, 81 (Jan 2007) ("We believe that open source software is a necessary requirement to build systems that are more secure."). of Defense and the National Security Agency, have adopted Linux operating systems.\(^{201}\) The Departments of Veterans Affairs, Defense, and Health and Human Services employ open source software to maintain patient health records.\(^{202}\) California’s Air Resource Board runs 65 percent of its databases on open source software for the security that it offers.\(^{203}\) This proposal, however, has its limits. It should not apply when the importance of secrecy outweighs the transparency, democratic legitimacy, and expertise open code brings. The exceptions to the Freedom of Information Act’s (“FOIA”) disclosure requirements provide insight into situations where public policy concerns might support a closed code regime.\(^{204}\) Consider these examples. 
FOIA excludes information compiled by law enforcement from public disclosure if producing such information would reveal “techniques and procedures for law enforcement investigations.”\(^{205}\) The IRS’s auditing software might qualify as code that should remain closed in order to prevent individuals from gaming the system. The “No Fly” data matching program seemingly falls within FOIA’s exemption from disclosure for information that would “endanger the life or physical safety of any individual.”\(^{206}\) Its source code should not be opened on the grounds that terrorists could evade detection if they knew the system’s logic.\(^{207}\) To that end, this Article’s proposal should provide a presumption of open code that could be rebutted by other important public policy concerns. Evidence of such public policy concerns, however, should be carefully reviewed. The administrative law values that an open code regime secures should not be forsaken without clear justification. \(^{204}\) 5 USC § 552(b)(1)–(9) (2000 & Supp 2004). So too would exceptions to state open-record laws. \(^{205}\) 5 USC § 552(b)(7)(E). \(^{206}\) Id. \(^{207}\) See Citron, 85 Wash U L Rev at 1286 (cited in note 8) (arguing that algorithms of the “No Fly” program should be subjected to review of the Independent Advisory Board to ensure due process protections). CONCLUSION Critics of the administrative state are troubled by its opacity and lack of democratic pedigree. Agencies' closed information systems exacerbate these concerns. This Article argues that opening up the source code of these systems can combat these problems by illuminating agency decisions bound up in these systems. An open code model would secure the participation of a technical community that has previously played no role in the administrative state. And more importantly, this proposal would enhance the political accountability and expertise of agency decision-making.
Introducing Apache Mahout Scalable, commercial-friendly machine learning for building intelligent applications Grant Ingersoll September 08, 2009 Once the exclusive domain of academics and corporations with large research budgets, intelligent applications that learn from data and user input are becoming more common. The need for machine-learning techniques like clustering, collaborative filtering, and categorization has never been greater, be it for finding commonalities among large groups of people or automatically tagging large volumes of Web content. The Apache Mahout project aims to make building intelligent applications easier and faster. Mahout co-founder Grant Ingersoll introduces the basic concepts of machine learning and then demonstrates how to use Mahout to cluster documents, make recommendations, and organize content. Increasingly, the success of companies and individuals in the information age depends on how quickly and efficiently they turn vast amounts of data into actionable information. Whether it's for processing hundreds or thousands of personal e-mail messages a day or divining user intent from petabytes of weblogs, the need for tools that can organize and enhance data has never been greater. Therein lies the premise and the promise of the field of machine learning and the project this article introduces: Apache Mahout (see Related topics). Machine learning is a subfield of artificial intelligence concerned with techniques that allow computers to improve their outputs based on previous experiences. The field is closely related to data mining and often uses techniques from statistics, probability theory, pattern recognition, and a host of other areas. Although machine learning is not a new field, it is definitely growing. Many large companies, including IBM®, Google, Amazon, Yahoo!, and Facebook, have implemented machine-learning algorithms in their applications. 
Many, many more companies would benefit from leveraging machine learning in their applications to learn from users and past situations. After giving a brief overview of machine-learning concepts, I'll introduce you to the Apache Mahout project's features, history, and goals. Then I'll show you how to use Mahout to do some interesting machine-learning tasks using the freely available Wikipedia data set. Machine learning 101 Machine learning uses run the gamut from game playing to fraud detection to stock-market analysis. It's used to build systems like those at Netflix and Amazon that recommend products to users based on past purchases, or systems that find all of the similar news articles on a given day. It can also be used to categorize Web pages automatically according to genre (sports, economy, war, and so on) or to mark e-mail messages as spam. The uses of machine learning are more numerous than I can cover in this article. If you're interested in exploring the field in more depth, I encourage you to refer to the Related topics. Several approaches to machine learning are used to solve problems. I'll focus on the two most commonly used ones — supervised and unsupervised learning — because they are the main ones supported by Mahout. Supervised learning is tasked with learning a function from labeled training data in order to predict the value of any valid input. Common examples of supervised learning include classifying e-mail messages as spam, labeling Web pages according to their genre, and recognizing handwriting. Many algorithms are used to create supervised learners, the most common being neural networks, Support Vector Machines (SVMs), and Naive Bayes classifiers. Unsupervised learning, as you might guess, is tasked with making sense of data without any examples of what is correct or incorrect. It is most commonly used for clustering similar input into logical groups. 
It also can be used to reduce the number of dimensions in a data set in order to focus on only the most useful attributes, or to detect trends. Common approaches to unsupervised learning include k-Means, hierarchical clustering, and self-organizing maps. For this article, I'll focus on three specific machine-learning tasks that Mahout currently implements. They also happen to be three areas that are quite commonly used in real applications: - Collaborative filtering - Clustering - Categorization I'll take a deeper look at each of these tasks at the conceptual level before exploring their implementations in Mahout. Collaborative filtering Collaborative filtering (CF) is a technique, popularized by Amazon and others, that uses user information such as ratings, clicks, and purchases to provide recommendations to other site users. CF is often used to recommend consumer items such as books, music, and movies, but it is also used in other applications where multiple actors need to collaborate to narrow down data. Chances are you've seen CF in action on Amazon, as shown in Figure 1: Figure 1. Example of collaborative filter on Amazon Given a set of users and items, CF applications provide recommendations to the current user of the system. Four ways of generating recommendations are typical: - **User-based**: Recommend items by finding similar users. This is often harder to scale because of the dynamic nature of users. - **Item-based**: Calculate similarity between items and make recommendations. Items usually don’t change much, so this often can be computed offline. - **Slope-One**: A very fast and simple item-based recommendation approach applicable when users have given ratings (and not just boolean preferences). - **Model-based**: Provide recommendations based on developing a model of users and their ratings. All CF approaches end up calculating a notion of similarity between users and their rated items. 
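To make the similarity computation concrete, here is a minimal, self-contained sketch (plain Java with made-up ratings, not the Taste API) of the Pearson correlation between two users, computed over the items both have rated:

```java
import java.util.HashMap;
import java.util.Map;

public class PearsonDemo {
    // Pearson correlation over the items both users have rated
    static double pearson(Map<String, Double> a, Map<String, Double> b) {
        double sumA = 0, sumB = 0, sumA2 = 0, sumB2 = 0, sumAB = 0;
        int n = 0;
        for (Map.Entry<String, Double> e : a.entrySet()) {
            Double rb = b.get(e.getKey());
            if (rb == null) continue;          // only co-rated items count
            double ra = e.getValue();
            sumA += ra; sumB += rb;
            sumA2 += ra * ra; sumB2 += rb * rb;
            sumAB += ra * rb;
            n++;
        }
        if (n == 0) return 0.0;                // no overlap: treat as no correlation
        double num = sumAB - sumA * sumB / n;
        double den = Math.sqrt((sumA2 - sumA * sumA / n) * (sumB2 - sumB * sumB / n));
        return den == 0 ? 0.0 : num / den;
    }

    public static void main(String[] args) {
        Map<String, Double> u1 = new HashMap<>();
        u1.put("Battle of Gettysburg", 5.0); u1.put("April", 2.0); u1.put("August 21", 4.0);
        Map<String, Double> u2 = new HashMap<>();
        u2.put("Battle of Gettysburg", 4.5); u2.put("April", 1.5); u2.put("August 21", 3.5);
        System.out.printf("similarity = %.3f%n", pearson(u1, u2)); // prints 1.000
    }
}
```

A user-based recommender computes a score like this against every other user, keeps the most similar ones as a neighborhood, and recommends the items that neighborhood rated highly.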
There are many ways to compute similarity, and most CF systems allow you to plug in different measures so that you can determine which one works best for your data. **Clustering** Given large data sets, whether they are text or numeric, it is often useful to group together, or *cluster*, similar items automatically. For instance, given all of the news for the day from all of the newspapers in the United States, you might want to group all of the articles about the same story together automatically; you can then choose to focus on specific clusters and stories without needing to wade through a lot of unrelated ones. Another example: Given the output from sensors on a machine over time, you could cluster the outputs to determine normal versus problematic operation, because normal operations would all cluster together and abnormal operations would be in outlying clusters. Like CF, clustering calculates the similarity between items in the collection, but its only job is to group together similar items. In many implementations of clustering, items in the collection are represented as vectors in an $n$-dimensional space. Given the vectors, one can calculate the distance between two items using measures such as the Manhattan Distance, Euclidean distance, or cosine similarity. Then, the actual clusters can be calculated by grouping together the items that are close in distance. There are many approaches to calculating the clusters, each with its own trade-offs. Some approaches work from the bottom up, building up larger clusters from smaller ones, whereas others break a single large cluster into smaller and smaller clusters. Both have criteria for exiting the process at some point before they break down into a trivial cluster representation (all items in one cluster or all items in their own cluster). Popular approaches include k-Means and hierarchical clustering. As I'll show later, Mahout comes with several different clustering approaches. 
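The assignment step at the heart of k-Means can be sketched in a few lines. The following is an illustrative, self-contained example (invented 2-D points, not Mahout's API) that assigns each point to its nearest centroid by Euclidean distance:

```java
public class NearestCentroidDemo {
    // Euclidean distance between two points of equal dimension
    static double euclidean(double[] p, double[] q) {
        double sum = 0;
        for (int i = 0; i < p.length; i++) {
            double d = p[i] - q[i];
            sum += d * d;
        }
        return Math.sqrt(sum);
    }

    // Index of the centroid closest to the point
    static int nearest(double[] point, double[][] centroids) {
        int best = 0;
        double bestDist = Double.MAX_VALUE;
        for (int c = 0; c < centroids.length; c++) {
            double dist = euclidean(point, centroids[c]);
            if (dist < bestDist) { bestDist = dist; best = c; }
        }
        return best;
    }

    public static void main(String[] args) {
        double[][] centroids = { {0, 0}, {10, 10} };
        double[][] points = { {1, 2}, {9, 8}, {5, 4} };
        for (double[] p : points)
            System.out.printf("(%.0f, %.0f) -> cluster %d%n", p[0], p[1], nearest(p, centroids));
    }
}
```

A full k-Means iteration would follow this assignment with a recompute step (each centroid moves to the mean of its assigned points) and repeat until the assignments stop changing; swapping in Manhattan distance or cosine similarity only changes the `euclidean` method.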
**Categorization** The goal of categorization (often also called classification) is to label unseen documents, thus grouping them together. Many classification approaches in machine learning calculate a variety of statistics that associate the features of a document with the specified label, thus creating a model that can be used later to classify unseen documents. For example, a simple approach to classification might keep track of the words associated with a label, as well as the number of times those words are seen for a given label. Then, when a new document is classified, the words in the document are looked up in the model, probabilities are calculated, and the best result is output, usually along with a score indicating the confidence the result is correct. Features for classification might include words, weights for those words (based on frequency, for instance), parts of speech, and so on. Of course, features really can be anything that helps associate a document with a label and can be incorporated into the algorithm. The field of machine learning is large and robust. Instead of focusing further on the theoretical, which is impossible to do proper justice to here, I'll move on and dive into Mahout and its usage. **Introducing Mahout** Apache Mahout is a new open source project by the Apache Software Foundation (ASF) with the primary goal of creating scalable machine-learning algorithms that are free to use under the Apache license. The project is entering its second year, with one public release under its belt. Mahout contains implementations for clustering, categorization, CF, and evolutionary programming. Furthermore, where prudent, it uses the Apache Hadoop library to enable Mahout to scale effectively in the cloud (see Related topics). 
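Returning to the word-counting approach to classification described above, a toy version might track per-label word counts and score an unseen document by summing smoothed log-probabilities. This is an illustrative sketch with invented data; Mahout's Naive Bayes implementations are far more complete:

```java
import java.util.HashMap;
import java.util.Map;

public class ToyClassifier {
    // wordCounts.get(label).get(word) = number of times word was seen under label
    private final Map<String, Map<String, Integer>> wordCounts = new HashMap<>();

    void train(String label, String... words) {
        Map<String, Integer> counts = wordCounts.computeIfAbsent(label, k -> new HashMap<>());
        for (String w : words) counts.merge(w, 1, Integer::sum);
    }

    // Score the document against each label with add-one smoothing; best label wins
    String classify(String... words) {
        String best = null;
        double bestScore = Double.NEGATIVE_INFINITY;
        for (Map.Entry<String, Map<String, Integer>> e : wordCounts.entrySet()) {
            int total = e.getValue().values().stream().mapToInt(Integer::intValue).sum();
            double score = 0;
            for (String w : words) {
                int c = e.getValue().getOrDefault(w, 0);
                score += Math.log((c + 1.0) / (total + 1.0)); // smoothed log-probability
            }
            if (score > bestScore) { bestScore = score; best = e.getKey(); }
        }
        return best;
    }

    public static void main(String[] args) {
        ToyClassifier tc = new ToyClassifier();
        tc.train("sports", "game", "score", "team", "win");
        tc.train("economy", "market", "stock", "price", "trade");
        System.out.println(tc.classify("team", "score")); // prints "sports"
    }
}
```

The log-probability sum is the "score indicating the confidence the result is correct" mentioned above; a real classifier would also weight features, handle vocabulary size in the smoothing term, and incorporate prior label frequencies.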
**Mahout history** The Mahout project was started by several people involved in the Apache Lucene (open source search) community with an active interest in machine learning and a desire for robust, well-documented, scalable implementations of common machine-learning algorithms for clustering and categorization. The community was initially driven by Ng et al.'s paper "Map-Reduce for Machine Learning on Multicore" (see Related topics) but has since evolved to cover much broader machine-learning approaches. Mahout also aims to: • Build and support a community of users and contributors such that the code outlives any particular contributor's involvement or any particular company or university's funding. • Focus on real-world, practical use cases as opposed to bleeding-edge research or unproven techniques. • Provide quality documentation and examples. Features Although relatively young in open source terms, Mahout already has a large amount of functionality, especially in relation to clustering and CF. Mahout's primary features are: A few words on Map-Reduce Map-Reduce is a distributed programming API pioneered by Google and implemented in the Apache Hadoop project. Combined with a distributed file system, it often makes parallelizing problems easier by giving programmers a well-defined API for describing parallel computation tasks. (See Related topics for more information.) • Taste CF. Taste is an open source project for CF started by Sean Owen on SourceForge and donated to Mahout in 2008. • Several Map-Reduce enabled clustering implementations, including k-Means, fuzzy k-Means, Canopy, Dirichlet, and Mean-Shift. • Distributed Naive Bayes and Complementary Naive Bayes classification implementations. • Distributed fitness function capabilities for evolutionary programming. • Matrix and vector libraries. • Examples of all of the above algorithms. Getting started with Mahout Getting up and running with Mahout is relatively straightforward. 
To start, you need to install the following prerequisites:

• JDK 1.6 or higher
• Ant 1.7 or higher
• If you want to build the Mahout source, Maven 2.0.9 or 2.0.10

You also need this article’s sample code (see Download), which includes a copy of Mahout and its dependencies. Follow these steps to install the sample code:

1. unzip sample.zip
2. cd apache-mahout-examples
3. ant install

Step 3 downloads the necessary Wikipedia files and compiles the code. The Wikipedia file used is approximately 2.5 gigabytes, so download times will depend on your bandwidth.

**Building a recommendation engine**

Mahout currently provides tools for building a recommendation engine through the Taste library — a fast and flexible engine for CF. Taste supports both user-based and item-based recommendations and comes with many choices for making recommendations, as well as interfaces for you to define your own. Taste consists of five primary components that work with Users, Items and Preferences:

- **DataModel**: Storage for Users, Items, and Preferences
- **UserSimilarity**: Interface defining the similarity between two users
- **ItemSimilarity**: Interface defining the similarity between two items
- **Recommender**: Interface for providing recommendations
- **UserNeighborhood**: Interface for computing a neighborhood of similar users that can then be used by the Recommenders

These components and their implementations make it possible to build out complex recommendation systems for either real-time-based recommendations or offline recommendations. Real-time-based recommendations often can handle only a few thousand users, whereas offline recommendations can scale much higher. Taste even comes with tools for leveraging Hadoop to calculate recommendations offline. In many cases, this is a reasonable approach that allows you to meet the demands of a large system with a lot of users, items, and preferences. 
To demonstrate building a simple recommendation system, I need some users, items, and ratings. For this purpose, I randomly generated a large set of Users and Preferences for the Wikipedia documents (Items in Taste-speak) using the code in `cf.wikipedia.GenerateRatings` (included in the source with the sample code) and then supplemented this with a set of hand-crafted ratings around a specific topic (Abraham Lincoln) to create the final recommendations.txt file included in the sample. The idea behind this approach is to show how CF can guide fans of a specific topic to other documents of interest within the topic. In the example data are 990 (labeled 0 to 989) random users who have randomly assigned ratings to all the articles in the collection, and 10 users (labeled 990 to 999) who have rated one or more of the 17 articles in the collection containing the phrase *Abraham Lincoln*. **Beware made-up data!** The example presented here contains purely made-up data. I did all of the ratings myself, simulating 10 actual users who like information about Abraham Lincoln. While I believe the concept behind the data is interesting, the data itself and the values used are not. If you want real data, I suggest checking out the GroupLens project at the University of Minnesota and the Taste documentation (see Related topics). I chose to make up the data because I wanted to use a single data set across all of the examples. To start, I'll demonstrate how to create recommendations for a user given the set of ratings in recommendations.txt. As is the case with most uses of Taste, the first step is to load the data containing the recommendations and store it in a `DataModel`. Taste comes with several different implementations of `DataModel` for working with files and databases. 
For this example, I'll keep things simple and use the `FileDataModel` class, which expects each line to be of the form: user ID, item ID, preference — where both the user ID and the item ID are strings, while the preference can be a double. Given a model, I then need to tell Taste how it should compare users by declaring a `UserSimilarity` implementation. Depending on the `UserSimilarity` implementation used, you might also need to tell Taste how to infer preferences in the absence of an explicit setting for the user. Listing 1 puts all of these words into code. (`cf.wikipedia.WikipediaTasteUserDemo` in the sample code contains the full listing.) Listing 1. Creating the model and defining user similarity ```java //create the data model FileDataModel dataModel = new FileDataModel(new File(recsFile)); UserSimilarity userSimilarity = new PearsonCorrelationSimilarity(dataModel); // Optional: userSimilarity.setPreferenceInferrer(new AveragingPreferenceInferrer(dataModel)); ``` In **Listing 1**, I use the `PearsonCorrelationSimilarity`, which measures the correlation between two variables, but other `UserSimilarity` measures are available. Choice of a similarity measure depends on the type of data present and your testing. For this data, I found this combination to work best while still demonstrating the issues. You'll find more information on choosing a similarity measure at the Mahout Web site (see **Related topics**). To complete the example, I construct a `UserNeighborhood` and a `Recommender`. The `UserNeighborhood` identifies users similar to my user and is handed off to the `Recommender`, which then does the work of creating a ranked list of recommended items. **Listing 2** captures these ideas in code: **Listing 2. 
Generating recommendations** ```java //Get a neighborhood of users UserNeighborhood neighborhood = new NearestNUserNeighborhood(neighborhoodSize, userSimilarity, dataModel); //Create the recommender Recommender recommender = new GenericUserBasedRecommender(dataModel, neighborhood, userSimilarity); User user = dataModel.getUser(userId); System.out.println("-----"); System.out.println("User: " + user); //Print out the users own preferences first TasteUtils.printPreferences(user, handler.map); //Get the top 5 recommendations List<RecommendedItem> recommendations = recommender.recommend(userId, 5); TasteUtils.printRecs(recommendations, handler.map); ``` You can run the full example on the command line by executing `ant user-demo` in the directory containing the sample. Running this command prints the preferences and recommendations for the mythical user 995, who just happens to be a fan of Lincoln. **Listing 3** shows the output from running `ant user-demo`: **Listing 3. Output from user recommendation** ``` [echo] Getting similar items for user: 995 with a neighborhood of 5 for file src/main/resources/recommendations.txt [java] 09/08/20 08:13:51 INFO file.FileDataModel: Reading file info... [java] Data Model: Users: 1000 Items: 2284 [java] ----- [java] User: 995 [java] Title: August 21 Rating: 3.930000066757202 [java] Title: April Rating: 2.203000068664551 [java] Title: April 11 Rating: 4.230000019073486 [java] Title: Battle of Gettysburg Rating: 5.0 ``` From the results in Listing 3, you can see that the system recommended several articles with various levels of confidence. In fact, each of these items was rated by other Lincoln fans, but not by user 995. If you want to see the results for other users, just pass in the -Duser.id=USER-ID parameter on the command line, where USER-ID is a number between 0 and 999. You can also change the size of the neighborhood by passing in -Dneighbor.size=X, where X is an integer greater than 0. 
In fact, changing the neighborhood size to 10 yields very different results, influenced by the fact that one of the random users is now in the neighborhood. To see the neighborhood of users and the items in common, add `-Dcommon=true` to the command line.

Now, if you happened to enter a number not in the range of users, you might have noticed that the example spits out a `NoSuchUserException`. Your application would need to decide what to do when a new user enters the system. For instance, you might show the 10 most popular articles, a random selection of articles, or a selection of "dissimilar" articles, or, for that matter, nothing at all.

As I mentioned earlier, the user-based approach often does not scale. In this case, it is better to use an item-item based approach. Thankfully, Taste makes using an item-item approach just as straightforward. The basic code to get up and running with item-item similarity isn't much different, as you can see in Listing 4:

**Listing 4. Example of item-item similarity (from `cf.wikipedia.WikipediaTasteItemItemDemo`)**

```java
//create the data model
FileDataModel dataModel = new FileDataModel(new File(recsFile));
//Create an ItemSimilarity
ItemSimilarity itemSimilarity = new LogLikelihoodSimilarity(dataModel);
//Create an Item Based Recommender
ItemBasedRecommender recommender =
    new GenericItemBasedRecommender(dataModel, itemSimilarity);
//Get the recommendations
List<RecommendedItem> recommendations = recommender.recommend(userId, 5);
TasteUtils.printRecs(recommendations, handler.map);
```

Just as in Listing 1, I create a `DataModel` from the recommendations file, but this time, instead of instantiating a `UserSimilarity` instance, I create an `ItemSimilarity` using the `LogLikelihoodSimilarity`, which helps handle rare events. After that, I feed the `ItemSimilarity` to an `ItemBasedRecommender` and then ask for the recommendations. That's it! You can run this in the sample code via the `ant item-demo` command.
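Independent of Taste's internals, the core item-based idea fits in a few lines: score each candidate item by its similarity to the items the user has already rated, weighted by those ratings. The sketch below is not part of the article's sample code; it uses plain cosine similarity over item rating vectors (rather than the `LogLikelihoodSimilarity` above) and tiny made-up in-memory data, just to make the mechanics concrete:

```java
public class ItemBasedSketch {
    // Cosine similarity between two items' rating vectors (users as dimensions).
    static double cosine(double[] a, double[] b) {
        double dot = 0, na = 0, nb = 0;
        for (int i = 0; i < a.length; i++) {
            dot += a[i] * b[i];
            na += a[i] * a[i];
            nb += b[i] * b[i];
        }
        return (na == 0 || nb == 0) ? 0 : dot / (Math.sqrt(na) * Math.sqrt(nb));
    }

    public static void main(String[] args) {
        // Rows are items, columns are users; 0 means "no rating". Toy data.
        double[][] ratings = {
            {5, 4, 0, 1},   // item 0
            {4, 5, 0, 0},   // item 1 -- rated by the same users as item 0
            {0, 0, 5, 4},   // item 2 -- liked by a different crowd
        };
        int user = 3; // this user rated item 0 (1.0) and item 2 (4.0)
        for (int cand = 0; cand < ratings.length; cand++) {
            if (ratings[cand][user] != 0) continue; // skip items already rated
            double num = 0, den = 0;
            for (int rated = 0; rated < ratings.length; rated++) {
                double r = ratings[rated][user];
                if (r == 0 || rated == cand) continue;
                double sim = cosine(ratings[cand], ratings[rated]);
                num += sim * r;       // similarity-weighted rating
                den += Math.abs(sim);
            }
            double score = den == 0 ? 0 : num / den;
            System.out.printf(java.util.Locale.ROOT,
                    "item %d -> estimated preference %.3f%n", cand, score);
        }
    }
}
```

The estimate for each unrated item is a similarity-weighted average of the user's existing ratings, which is the same general shape of computation an item-based recommender performs, just without Taste's data model, caching, or choice of similarity measure.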
From here, of course, you'd want to set your system up to do these calculations offline, and you can also explore other `ItemSimilarity` measures. Note that, because of the randomness of the data in this example, the recommendations may not be what you expect. In fact, it is important to evaluate your results during testing and to try different similarity metrics, because many of the common metrics have edge cases that break down when there is insufficient data to give proper recommendations.

Revisiting the new-user example, the problem of what to do in the absence of user preferences becomes a lot easier to address once the user navigates to an item. In that case, you can take advantage of the item-item calculations and ask the `ItemBasedRecommender` for the items that are most similar to the current item. Listing 5 demonstrates this in code:

**Listing 5. Similar items demo (from `cf.wikipedia.WikipediaTasteItemRecDemo`)**

```java
//create the data model
FileDataModel dataModel = new FileDataModel(new File(recsFile));
//Create an ItemSimilarity
ItemSimilarity itemSimilarity = new LogLikelihoodSimilarity(dataModel);
//Create an Item Based Recommender
ItemBasedRecommender recommender =
    new GenericItemBasedRecommender(dataModel, itemSimilarity);
//Get the recommendations for the Item
List<RecommendedItem> simItems = recommender.mostSimilarItems(itemId, numRecs);
TasteUtils.printRecs(simItems, handler.map);
```

You can run Listing 5 from the command line by executing `ant sim-item-demo`. The only real difference from Listing 4 is that Listing 5, instead of asking for recommendations, asks for the items most similar to the input item.

From here, you should have enough to dig in with Taste. To learn more, refer to the Taste documentation and the mahout-user@lucene.apache.org mailing list (see Related topics). Next up, I'll take a look at how to find similar articles by leveraging some of Mahout's clustering capabilities.
**Clustering with Mahout**

Mahout supports several clustering-algorithm implementations, all written in Map-Reduce, each with its own set of goals and criteria:

- **Canopy**: A fast clustering algorithm often used to create initial seeds for other clustering algorithms.
- **k-Means** (and fuzzy k-Means): Clusters items into k clusters based on the distance of the items from the centroid, or center, of the previous iteration.
- **Mean-Shift**: An algorithm that does not require any a priori knowledge about the number of clusters and can produce arbitrarily shaped clusters.
- **Dirichlet**: Clusters based on the mixing of many probabilistic models, giving it the advantage that it doesn't need to commit to a particular view of the clusters prematurely.

From a practical standpoint, the names and implementations aren't as important as the results they produce. With that in mind, I'll show how k-Means works and leave the others for you to explore. Keep in mind that each algorithm has its own requirements for running efficiently.

In outline (with more details to follow), the steps involved in clustering data with Mahout are:

1. Prepare the input. If clustering text, you need to convert the text to a numeric representation.
2. Run the clustering algorithm of choice using one of the many Hadoop-ready driver programs available in Mahout.
3. Evaluate the results.
4. Iterate if necessary.

First and foremost, clustering algorithms require data in a format suitable for processing. In machine learning, the data is often represented as a vector, sometimes called a feature vector. In clustering, a vector is an array of weights that represent the data. I'll demonstrate clustering using vectors produced from Wikipedia documents, but the vectors can come from other areas, such as sensor data or user profiles. Mahout comes with two vector representations: `DenseVector` and `SparseVector`.
Depending on your data, you will need to choose the appropriate implementation in order to get good performance. Generally speaking, text-based problems are sparse, making `SparseVector` the correct choice for them. On the other hand, if most values for most vectors are non-zero, then a `DenseVector` is more appropriate. If you are unsure, try both and see which one works faster on a subset of your data.

To produce vectors from the Wikipedia content (which I have done for you):

1. Index the content into Lucene, being sure to store term vectors for the field you want to generate vectors from. I won't cover the details of this step, which is outside the article's scope, but I'll provide some brief hints along with some references on Lucene. Lucene comes with a class called `EnWikiDocMaker` (in Lucene's `contrib/benchmark` package) that can read in a Wikipedia file dump and produce documents for indexing in Lucene.
2. Create vectors from the Lucene index using the `org.apache.mahout.utils.vectors.lucene.Driver` class located in Mahout's `utils` module. This driver program comes with a lot of options for creating vectors. The Mahout wiki page entitled "Creating Vectors from Text" has more information (see Related topics).

The result of running these two steps is a file like the `n2.tar.gz` file you downloaded in the "Getting started with Mahout" section. For completeness, the `n2.tar.gz` file consists of vectors created from the indexing of all the documents in the Wikipedia "chunks" file that was automatically downloaded by the `ant install` target earlier. The vectors were normalized using the Euclidean norm (or \( L^2 \) norm; see Related topics). In your use of Mahout, you will likely want to try creating vectors in a variety of ways to see which yields the best results.

**Evaluating your results**

There are many approaches to evaluating your cluster results. Many people start simply with manual inspection and ad-hoc testing.
However, to be truly satisfied, it is often necessary to use more in-depth evaluation techniques, such as developing a gold standard with several judges. To learn more about evaluating your results, see Related topics. For my examples, I used manual inspection to see whether the results that were clustered together actually made sense. If I were to put this in production, I would use a much more rigorous process.

Given a set of vectors, the next step is to run the k-Means clustering algorithm. Mahout provides driver programs for all of the clustering algorithms, including the k-Means algorithm, aptly named `KMeansDriver`. The driver is straightforward to use as a stand-alone program without Hadoop, as demonstrated by running `ant k-means`. Feel free to examine the Ant `k-means` target in `build.xml` for more information on the arguments `KMeansDriver` accepts. After the process completes, you can print out the results using the `ant dump` command.

After you've successfully run in stand-alone mode, you can proceed to run in distributed mode on Hadoop. To do so, you need the Mahout Job JAR, which is located in the `hadoop` directory in the sample code. A Job JAR packages up all of the code and dependencies into a single JAR file for easy loading into Hadoop. You will also need to download Hadoop 0.20 and follow the directions in the Hadoop tutorial for running first in pseudo-distributed mode (that is, a cluster of one) and then fully distributed. For more information, see the Hadoop Web site and resources, as well as the IBM cloud computing resources (see Related topics).

**Categorizing content with Mahout**

Mahout currently supports two related approaches to categorizing/classifying content based on Bayesian statistics. The first approach is a simple Map-Reduce-enabled Naive Bayes classifier.
Naive Bayes classifiers are known to be fast and fairly accurate, despite their very simple (and often incorrect) assumption that the features of the data are completely independent. Naive Bayes classifiers often break down when the number of training examples per class is not balanced or when the data is not independent enough. The second approach, called Complementary Naive Bayes, tries to correct some of the problems with the Naive Bayes approach while still maintaining its simplicity and speed. However, for this article, I'll show only the Naive Bayes approach, because it demonstrates the overall problem and inputs in Mahout.

In a nutshell, a Naive Bayes classifier is a two-part process that involves keeping track of the features (words) associated with a particular document and category and then using that information to predict the category of new, unseen content. The first step, called *training*, creates a model by looking at examples of already classified content and keeps track of the probability that each word is associated with a particular category. The second step, called *classification*, uses the model created during training, the content of a new, unseen document, and the Bayes Theorem to predict the category of the passed-in document. Thus, to run Mahout's classifier, you first train the model and then use that model to classify new content. The next section demonstrates how to do this using the Wikipedia data set.

**Running the Naive Bayes classifier**

Before you can run the trainer and classifier, you need to do a little prep work to set up a set of documents for training and a set of documents for testing. You can prepare the Wikipedia files (from those you downloaded via the `install` target) by running `ant prepare-docs`. This splits up the Wikipedia input files using the `WikipediaDatasetCreatorDriver` class included in the Mahout examples.
Documents are split based on whether they have a category that matches one of the categories of interest. The categories of interest can be any valid Wikipedia category (or even any substring of a Wikipedia category). For instance, in this example, I've included two categories: Science and History. Thus, any Wikipedia document with a category containing the word **science** or **history** (it doesn't have to be an exact match) will be put into a bucket with the other documents for that category. Also, each document is tokenized and normalized to remove punctuation, Wikipedia markup, and other features that are not needed for this task. The final results are stored in a single file labeled with the category name, one document per line, which is the input format that Mahout expects. Likewise, running `ant prepare-test-docs` does the same work for the test documents. It is important that the test and training documents do not overlap; overlapping sets would skew the results. In theory, using the training documents for testing should produce perfect results, but even that is unlikely in practice.

After the training and test sets are set up, it's time to run the `TrainClassifier` class via the `ant train` target. This should yield a large amount of logging from both Mahout and Hadoop. Once it completes, `ant test` takes the sample test documents and tries to classify them using the model that was built during training. The output of such a test in Mahout is a data structure called a **confusion matrix**. A confusion matrix describes how many results were correctly classified and how many were incorrectly classified for each of the categories.

In summary, you run the following steps to produce classification results:

1. `ant prepare-docs`
2. `ant prepare-test-docs`
3. `ant train`
4. `ant test`

Running all of these steps (the Ant target `classifier-example` captures all of them in one call) yields the summary and confusion matrix shown in Listing 6:

**Listing 6.
Results from running Bayes classifier for history and science**

```
09/07/22 18:10:45 INFO bayes.TestClassifier: history 95.458984375 3910/4096.0
09/07/22 18:10:46 INFO bayes.TestClassifier: science 15.554072096128172 233/1498.0
09/07/22 18:10:46 INFO bayes.TestClassifier: ===============
09/07/22 18:10:46 INFO bayes.TestClassifier: Summary
Correctly Classified Instances   : 4143  74.0615%
Incorrectly Classified Instances : 1451  25.9385%
Total Classified Instances       : 5594
09/07/22 18:10:46 INFO bayes.TestClassifier: =================
09/07/22 18:10:46 INFO bayes.TestClassifier: Confusion Matrix
   a    b    <--Classified as
3910  186  | 4096  a = history
1265  233  | 1498  b = science
Default Category: unknown: 2
```

The results of the intermediate processes are stored in the directory named `wikipedia` under the base directory.

With a result set in hand, the obvious question is: "How did I do?" The summary states that I got roughly 74 percent correct and 26 percent incorrect. At first glance this seems pretty reasonable, especially because it means I did better than random guessing. Closer examination shows, however, that I did really well at predicting history (approximately 95 percent correct) and really poorly at predicting science (approximately 15 percent correct). In looking for reasons why, a quick look at the input files for training shows that I have a lot more examples of history than science (the file size is nearly double), which is one likely problem.

For the test, you can add the `-Dverbose=true` option to `ant test`, which prints out information about each test input and whether it was correctly labeled. Digging into this output, you can look up a document and examine it for clues as to why it might have been incorrectly classified. I might also try different input parameters, add more science data, and retrain the model to see whether I can improve the results.

It is also important to think about feature selection for training the model.
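The summary numbers in Listing 6 fall straight out of the confusion matrix. As a sanity check, this small sketch (not part of the Mahout samples) recomputes per-class recall and overall accuracy from the matrix above, which is worth doing whenever a single headline accuracy number looks better than the per-class picture:

```java
public class ConfusionMatrixCheck {
    // Overall accuracy: sum of the diagonal over the grand total.
    static double accuracy(long[][] m) {
        long correct = 0, total = 0;
        for (int i = 0; i < m.length; i++) {
            correct += m[i][i];
            for (long c : m[i]) total += c;
        }
        return (double) correct / total;
    }

    // Per-class recall: diagonal cell over its row total.
    static double recall(long[][] m, int cls) {
        long rowTotal = 0;
        for (long c : m[cls]) rowTotal += c;
        return (double) m[cls][cls] / rowTotal;
    }

    public static void main(String[] args) {
        // Rows are true categories, columns are predicted categories,
        // copied from the history/science confusion matrix in Listing 6.
        long[][] m = {
            {3910, 186},   // true history: 3910 right, 186 called science
            {1265, 233},   // true science: 1265 called history, 233 right
        };
        System.out.printf(java.util.Locale.ROOT,
                "history recall: %.4f%n", recall(m, 0)); // ~0.9546
        System.out.printf(java.util.Locale.ROOT,
                "science recall: %.4f%n", recall(m, 1)); // ~0.1555
        System.out.printf(java.util.Locale.ROOT,
                "accuracy:       %.4f%n", accuracy(m));  // ~0.7406
    }
}
```

Run against Listing 6's counts, this reproduces the 74.0615 percent overall figure while making the 95/15 split between the two classes explicit.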
For these examples, I used the `WikipediaTokenizer` from Apache Lucene to tokenize the original documents, but I did not make much effort to remove common terms or junk terms that might have been tokenized incorrectly. If I were looking to put this classifier in production, I would examine the inputs and other settings much more deeply, trying to eke out every last bit of performance.

Just to see whether the Science results were a fluke, I tried a different set of categories: Republicans and Democrats. In this case, I want to predict whether a new document is about Republicans or Democrats. To let you try this on your own, I created `repubs-dems.txt` in `src/test/resources`. I then ran the classification steps via:

```bash
ant classifier-example -Dcategories.file=./src/test/resources/repubs-dems.txt -Dcat.dir=rd
```

The two `-D` values simply point to the category file and the name of the directory (under the `wikipedia` directory) in which to put the intermediate results. The summary and confusion matrix from this run look like Listing 7:

**Listing 7. Results from running Bayes classifier for Republicans and Democrats**

```
09/07/23 17:06:38 INFO bayes.TestClassifier: democrats 70.0 21/30.0
09/07/23 17:06:38 INFO bayes.TestClassifier: republicans 81.3953488372093 35/43.0
09/07/23 17:06:38 INFO bayes.TestClassifier: Summary
Correctly Classified Instances   : 56  76.7123%
Incorrectly Classified Instances : 17  23.2877%
Total Classified Instances       : 73
```

Although the end result is about the same in terms of overall correctness, you can see that the classifier did a better job of deciding between the two categories. A quick examination of the `wikipedia/rd/prepared` directory containing the input documents shows that the two training files were much more balanced in terms of training examples.
The examination also shows that I have far fewer examples overall than in the history/science run, because each file is much smaller than either the history or the science training set. Overall, the results at least seem a lot more balanced. Bigger training sets would likely balance out the differences between Republicans and Democrats, and if they didn't, that might imply that one group is better at sticking to its message on Wikipedia, but I'll leave that to the political pundits to decide.

Now that I've shown how to run classification in stand-alone mode, the next step is to take the code to the cloud and run it on a Hadoop cluster. Just as with the clustering code, you will need the Mahout Job JAR. Beyond that, all the algorithms I mentioned earlier are Map-Reduce-ready and should just work when run under the Job submission process outlined in the Hadoop tutorial.

### What's next for Mahout?

Apache Mahout has come a long way in just over a year, with significant capabilities for clustering, categorization, and CF, but it also has plenty of room for growth. On the immediate horizon are Map-Reduce implementations of random decision forests for classification, association rules, Latent Dirichlet Allocation for identifying topics in documents, and more categorization options using HBase and other backing storage options. Beyond these new implementations, look for more demos, increased documentation, and bug fixes.

Finally, just as a real mahout leverages the strength and capabilities of the elephant, so too can Apache Mahout help you leverage the strength and capabilities of the yellow elephant that is Apache Hadoop. The next time you need to cluster, categorize, or recommend content, especially at large scale, give Apache Mahout a look.

### Acknowledgments

Special thanks to fellow Mahout committers Ted Dunning and Sean Owen for their review and insights on this article.
## Downloadable resources

<table>
<thead>
<tr>
<th>Description</th>
<th>Name</th>
<th>Size</th>
</tr>
</thead>
<tbody>
<tr>
<td>Sample code</td>
<td>j-mahout.zip</td>
<td>90MB</td>
</tr>
</tbody>
</table>

## Related topics

- **Machine learning**
  - **Machine Learning**: Wikipedia's page contains some useful starting information as well as many good references to learn more about machine learning, including approaches such as supervised learning.
  - **Programming Collective Intelligence** (Toby Segaran, O'Reilly, 2007): This book is an excellent starting point for many machine-learning tasks.
  - **Evaluation of clustering**: Learn more about evaluating clustering. Also see the discussion on the Mahout mailing list.
  - **Bayes Theorem**: Read up on how the Bayes Theorem works.
  - **L^p space**: Understand L^p norms.
- **Apache Mahout and Apache Lucene**
  - **Mahout project home page**: Discover all that is Mahout.
  - **Apache Lucene**: Learn more about Lucene.
  - **Apache Lucene on developerWorks**: Explore Lucene in these articles.
  - **Creating Vectors from Text**: Read this entry in the Mahout Wiki to learn more on converting your data to Mahout's `Vector` class.
  - **Cluster Your Data**: Check out this Mahout Wiki page to find out more about how to cluster your data.
- **Apache Hadoop**
  - **Apache Hadoop**: Find out more about Hadoop.
  - **Download Hadoop 0.20.0**.
  - **Get real movie-rating data from the GroupLens project**.

© Copyright IBM Corporation 2009
Trademarks (www.ibm.com/developerworks/ibm/trademarks/)